953 results for Geometric Distributions
Abstract:
As polycyclic aromatic hydrocarbons (PAHs) have a negative impact on human health due to their mutagenic and/or carcinogenic properties, the objective of this work was to study the influence of tobacco smoke on the levels and phase distribution of PAHs and to evaluate the associated health risks. Air samples were collected at two homes; 18 PAHs (the 16 PAHs considered by the U.S. EPA as priority pollutants, plus dibenzo[a,l]pyrene and benzo[j]fluoranthene) were determined in the gas phase and associated with thoracic (PM10) and respirable (PM2.5) particles. At the home influenced by tobacco smoke the total concentration of the 18 PAHs in air ranged from 28.3 to 106 ng m⁻³ (mean of 66.7 ± 25.4 ng m⁻³), ∑PAHs being 95% higher than at the non-smoking home, where the values ranged from 17.9 to 62.0 ng m⁻³ (mean of 34.5 ± 16.5 ng m⁻³). On average 74% and 78% of ∑PAHs were present in the gas phase at the smoking and non-smoking homes, respectively, demonstrating that adequate assessment of PAHs in air requires evaluation of PAHs in both gas and particulate phases. When influenced by tobacco smoke, the health risk values were 3.5–3.6 times higher due to the exposure to PM10. The lifetime lung cancer risk values were 4.1 × 10⁻³ and 1.7 × 10⁻³ for the smoking and non-smoking homes, considerably exceeding the health-based guideline level at both homes, also due to the contribution of outdoor traffic emissions. The results showed that evaluation of benzo[a]pyrene alone would probably underestimate the carcinogenic potential of the studied PAH mixtures; in total, ten carcinogenic PAHs represented 36% and 32% of the gaseous ∑PAHs, and in the particulate phase they accounted for 75% and 71% of ∑PAHs at the smoking and non-smoking homes, respectively.
Abstract:
This paper analyses earthquake data from the perspective of dynamical systems and fractional calculus (FC). This new standpoint uses Multidimensional Scaling (MDS) as a powerful clustering and visualization tool. FC extends the concepts of integrals and derivatives to non-integer and complex orders. MDS is a technique that produces spatial or geometric representations of complex objects, such that objects perceived to be similar in some sense are placed close together on the MDS maps, forming clusters. In this study, over three million seismic occurrences, covering the period from January 1, 1904 up to March 14, 2012, are analysed. The events are characterized by their magnitude and spatiotemporal distributions and are divided into fifty groups, according to the Flinn–Engdahl (F–E) seismic regions of the Earth. Several correlation indices are proposed to quantify the similarities among regions. MDS maps prove to be an intuitive and useful visual representation of the complex relationships present among seismic events, which may not be perceived on traditional geographic maps. Therefore, MDS constitutes a valid alternative to classic visualization tools for understanding the global behaviour of earthquakes.
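The abstract does not reproduce its correlation indices; as a hedged illustration of the MDS step alone, the Python sketch below (the dissimilarity matrix and all names are invented) embeds a symmetric region-to-region dissimilarity matrix into two dimensions with scikit-learn's metric MDS.

```python
# Minimal MDS sketch: embed a symmetric dissimilarity matrix into 2-D.
# The matrix here is synthetic; in the study it would encode the
# (unspecified) correlation-based dissimilarities between the 50 F-E regions.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_regions = 50                                  # number of Flinn-Engdahl groups
raw = rng.random((n_regions, n_regions))
dissim = (raw + raw.T) / 2                      # symmetrize
np.fill_diagonal(dissim, 0.0)                   # zero self-dissimilarity

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)              # 2-D map coordinates

print("stress:", round(mds.stress_, 3))         # goodness-of-fit of the map
print("first region at:", coords[0])
```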
Abstract:
We introduce the notions of equilibrium distribution and time of convergence in discrete non-autonomous graphs. Under some conditions, we give an estimate of the time of convergence to the equilibrium distribution using the second largest eigenvalue of certain matrices associated with the system.
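As a hedged, simplified illustration of the underlying idea (a single fixed stochastic matrix rather than the paper's non-autonomous setting), the sketch below estimates a convergence time from the second largest eigenvalue modulus via t ≈ log(1/ε)/log(1/|λ₂|); the matrix and tolerance are invented.

```python
# Hedged sketch: convergence-time estimate from the second largest
# eigenvalue modulus of a (fixed) row-stochastic matrix.
import numpy as np

P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])                 # toy transition matrix

eigvals = np.linalg.eigvals(P)
mods = np.sort(np.abs(eigvals))[::-1]           # eigenvalue moduli, descending
lam2 = mods[1]                                  # second largest modulus

eps = 1e-3                                      # target distance to equilibrium
t_est = np.log(1.0 / eps) / np.log(1.0 / lam2)  # rough convergence-time estimate
print(f"|lambda_2| = {lam2:.3f}, estimated convergence time ~ {t_est:.1f} steps")
```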
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Biomedical Engineering. The dissertation was carried out at the Erasmus Medical Center in Rotterdam, the Netherlands.
Abstract:
Earthquakes are associated with negative events, such as a large number of casualties, destruction of buildings and infrastructures, or the emergence of tsunamis. In this paper, we apply Multidimensional Scaling (MDS) analysis to earthquake data. MDS is a set of techniques that produce spatial or geometric representations of complex objects, such that objects perceived to be similar or distinct in some sense are placed nearby or distant on the MDS maps. The interpretation of the charts is based on the resulting clusters, since MDS produces a different locus for each similarity measure. In this study, over three million seismic occurrences, covering the period from January 1, 1904 up to March 14, 2012, are analyzed. The events, characterized by their magnitude and spatiotemporal distributions, are divided into groups, either according to the Flinn–Engdahl seismic regions of the Earth or using a rectangular grid based on latitude and longitude coordinates. Space-time and space-frequency correlation indices are proposed to quantify the similarities among events. MDS has the advantage of avoiding sensitivity to the non-uniform spatial distribution of seismic data, resulting from poorly instrumented areas, and is well suited for assessing the dynamics of complex systems. MDS maps prove to be an intuitive and useful visual representation of the complex relationships present among seismic events, which may not be perceived on traditional geographic maps. Therefore, MDS constitutes a valid alternative to classic visualization tools for understanding the global behavior of earthquakes.
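The proposed space-time and space-frequency indices are not defined in this abstract; purely as an invented stand-in, the sketch below compares the magnitude histograms of two groups of events with a correlation coefficient and converts it into a dissimilarity that could feed an MDS distance matrix.

```python
# Illustrative (not the paper's) dissimilarity between two groups of events:
# correlate their normalized magnitude histograms and map r -> 1 - r.
import numpy as np

rng = np.random.default_rng(1)
mags_a = rng.exponential(scale=0.8, size=5000) + 4.0   # synthetic magnitudes, group A
mags_b = rng.exponential(scale=1.0, size=5000) + 4.0   # synthetic magnitudes, group B

bins = np.linspace(4.0, 9.0, 26)
ha, _ = np.histogram(mags_a, bins=bins, density=True)
hb, _ = np.histogram(mags_b, bins=bins, density=True)

r = np.corrcoef(ha, hb)[0, 1]                          # similarity of the two profiles
dissimilarity = 1.0 - r                                # could feed an MDS distance matrix
print(f"r = {r:.3f}, dissimilarity = {dissimilarity:.3f}")
```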
Abstract:
The associated production of a Higgs boson and a top-quark pair, tt̄H, in proton-proton collisions is addressed in this paper for a center-of-mass energy of 13 TeV at the LHC. Dileptonic final states of tt̄H events with two oppositely charged leptons and four jets from the decays t → bW⁺ → bℓ⁺νℓ, t̄ → b̄W⁻ → b̄ℓ⁻ν̄ℓ and h → bb̄ are used. Signal events, generated with MadGraph5_aMC@NLO, are fully reconstructed by applying a kinematic fit. New angular distributions of the decay products, as well as angular asymmetries, are explored in order to improve the discrimination of tt̄H signal events from the dominant irreducible background contribution, tt̄bb̄. Even after the full kinematic fit reconstruction of the events, the proposed angular distributions and asymmetries are still quite different for the tt̄H signal and the dominant background (tt̄bb̄).
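The specific angular observables of the paper are not reproduced here; as a generic, hedged sketch, the code below computes a simple forward-backward-style asymmetry A = (N₊ − N₋)/(N₊ + N₋) for invented signal-like and background-like cos θ samples.

```python
# Generic angular-asymmetry sketch (not the paper's specific observables):
# A = (N(cos(theta) > 0) - N(cos(theta) < 0)) / N_total for two toy samples.
import numpy as np

def asymmetry(cos_theta):
    n_plus = np.count_nonzero(cos_theta > 0)
    n_minus = np.count_nonzero(cos_theta < 0)
    return (n_plus - n_minus) / cos_theta.size

rng = np.random.default_rng(7)
# Toy "signal": mildly forward-peaked; toy "background": flat in cos(theta).
signal_cos = np.clip(rng.normal(loc=0.15, scale=0.5, size=20000), -1, 1)
background_cos = rng.uniform(-1, 1, size=20000)

print(f"A(signal)     = {asymmetry(signal_cos):+.3f}")
print(f"A(background) = {asymmetry(background_cos):+.3f}")
```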
Abstract:
The single-lap joint is the most commonly used adhesive joint configuration, although it endures significant bending due to the non-collinear load path, which negatively affects its load-bearing capabilities. The use of material or geometric changes is widely documented in the literature to reduce this handicap, acting by reduction of peel and shear peak stresses or by alteration of the failure mechanism arising from local modifications. In this work, the effect of using adherends of different thickness on the tensile strength of single-lap joints, bonded with a ductile and a brittle adhesive, was numerically and experimentally evaluated. The joints were tested under tension for different combinations of adherend thickness. The effect of the adherend thickness mismatch on the stress distributions was also investigated by Finite Elements (FE), which explained the experimental results and the strength predictions of the joints. The numerical study was carried out by FE and Cohesive Zone Modelling (CZM), which allowed characterizing the entire fracture process. For this purpose, an FE analysis was performed in ABAQUS® considering geometric non-linearities. In the end, a detailed comparative evaluation of unbalanced joints, commonly used in engineering applications, is presented to give an understanding of how modifications in the thickness of bonded structures can influence the joint performance.
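The mixed-mode cohesive laws used in the ABAQUS® models are not detailed in this abstract; as a hedged one-dimensional sketch of the CZM ingredient, the code below evaluates a triangular traction-separation law from an assumed stiffness, strength and fracture energy (all values invented).

```python
# Hedged 1-D sketch of a triangular (bilinear) cohesive law, the usual
# starting point for CZM; the paper's mixed-mode laws are richer.
def triangular_traction(delta, K=1.0e4, t0=20.0, Gc=1.0):
    """Traction [MPa] vs. separation [mm] for a triangular cohesive law.

    K  : initial stiffness [MPa/mm]
    t0 : cohesive strength [MPa] (damage onset)
    Gc : fracture energy [N/mm]; final separation delta_f = 2*Gc/t0
    """
    delta_0 = t0 / K              # separation at damage onset
    delta_f = 2.0 * Gc / t0       # separation at complete failure
    if delta <= delta_0:
        return K * delta          # linear elastic branch
    if delta >= delta_f:
        return 0.0                # fully damaged, no load transfer
    # linear softening branch between onset and failure
    return t0 * (delta_f - delta) / (delta_f - delta_0)

for d in (0.0005, 0.002, 0.05, 0.11):
    print(f"delta = {d:6.4f} mm -> traction = {triangular_traction(d):6.2f} MPa")
```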
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
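The published PPI, N-FINDR and VCA implementations are not reproduced here; the hedged sketch below only illustrates, on invented synthetic mixtures, the two projection ideas described above: PPI-style purity scoring with random skewers and VCA-style extraction by projections orthogonal to the endmembers already found.

```python
# Illustrative sketches of the ideas described above, not the published
# algorithms: synthetic linear mixtures, PPI-style purity scoring by random
# "skewer" projections, and VCA-style extraction by orthogonal projections.
import numpy as np

rng = np.random.default_rng(3)
n_bands, n_endmembers, n_pixels, n_skewers = 50, 3, 500, 200

# Linear mixing model: each pixel is M @ a + noise, abundances sum to one.
M = rng.random((n_bands, n_endmembers))                        # endmember signatures
A = rng.dirichlet(np.ones(n_endmembers), size=n_pixels).T      # abundance fractions
X = M @ A + 0.001 * rng.standard_normal((n_bands, n_pixels))   # observed pixels

# --- PPI-style scoring: count how often each pixel is an extreme projection.
scores = np.zeros(n_pixels, dtype=int)
for _ in range(n_skewers):
    skewer = rng.standard_normal(n_bands)
    proj = skewer @ X
    scores[np.argmax(proj)] += 1
    scores[np.argmin(proj)] += 1
print("PPI-style purest pixel candidates:", np.argsort(scores)[-n_endmembers:])

# --- VCA-style extraction: each new endmember is the extreme of a projection
#     orthogonal to the subspace spanned by the endmembers already found.
E = np.zeros((n_bands, n_endmembers))
for i in range(n_endmembers):
    f = rng.standard_normal(n_bands)
    if i > 0:
        Q, _ = np.linalg.qr(E[:, :i])
        f = f - Q @ (Q.T @ f)          # remove components along found endmembers
    f /= np.linalg.norm(f)
    E[:, i] = X[:, np.argmax(np.abs(f @ X))]
print("VCA-style endmember estimates, shape:", E.shape)
```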
Abstract:
The concepts and instruments required for the teaching and learning of geometric optics are introduced into the didactic process without a proper didactic transposition. This claim is supported by ample evidence of both wide- and deep-rooted alternative concepts on the topic. Didactic transposition is a theory that comes from a reflection on the teaching and learning process in mathematics, but it has been used in other disciplinary fields. It is used in this work in order to clear up the main obstacles in the teaching-learning process of geometric optics. We proceed to argue that, since Newton's approach to optics in Book I of his Opticks is independent of the corpuscular or undulatory nature of light, it is the most suitable for a constructivist learning environment. However, Newton's theory must be subjected to a proper didactic transposition to help overcome the aforementioned alternative concepts. We then describe our didactic transposition, aimed at creating knowledge to be taught, using a dialogical process between students' previous knowledge, the history of optics, and the desired outcomes in geometrical optics in an elementary pre-service teacher training course. Finally, we use the scheme-facet structure of knowledge both to analyse and discuss our results and to illuminate shortcomings that must be addressed in the next stage of the inquiry.
Abstract:
Power laws, also known as Pareto-like laws or Zipf-like laws, are commonly used to explain a variety of distinct real-world phenomena, often described merely by the signals they produce. In this paper, we study twelve cases, namely worldwide technological accidents, the annual revenue of America's largest private companies, the number of inhabitants in America's largest cities, the magnitude of earthquakes with minimum moment magnitude equal to 4, the total burned area in forest fires that occurred in Portugal, the net worth of the richest people in America, the frequency of occurrence of words in the novel Ulysses, by James Joyce, the total number of deaths in worldwide terrorist attacks, the number of linking root domains of the top internet domains, the number of linking root domains of the top internet pages, the total number of human victims of tornadoes that occurred in the U.S., and the number of inhabitants in the 60 most populated countries. The results demonstrate the emergence of statistical characteristics very close to a power law behavior. Furthermore, the parametric characterization reveals complex relationships present at a higher level of description.
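The paper's parametric characterization is not reproduced here; as a hedged sketch of the usual first step in a power-law analysis, the code below applies the continuous maximum-likelihood estimator α̂ = 1 + n / Σ ln(xᵢ/x_min) to invented Pareto-distributed data.

```python
# Hedged sketch: continuous MLE for a power-law exponent,
# alpha_hat = 1 + n / sum(ln(x_i / x_min)), on synthetic Pareto data.
import numpy as np

rng = np.random.default_rng(42)
alpha_true, x_min = 2.5, 1.0
# Inverse-transform sampling of a Pareto tail with exponent alpha_true.
u = rng.random(100_000)
x = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

tail = x[x >= x_min]
alpha_hat = 1.0 + tail.size / np.sum(np.log(tail / x_min))
print(f"estimated exponent: {alpha_hat:.3f} (true {alpha_true})")
```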
Abstract:
Adhesive bonding is frequently used in the fabrication of complex structures that could not be manufactured, or would not be as easy to manufacture, in a single piece, in order to provide a structural joint that, in theory, should be at least as strong as the base material. Adhesive joints have been replacing methods such as welding and bolted and riveted connections, owing to the ease of fabrication, lower cost, ease of joining different materials, better strength, among other characteristics. Carbon-fibre-reinforced composite materials are widely used in many industries, such as boat building, automotive and aeronautics, in structures that require high specific strength and stiffness, which reduces the weight of the components while maintaining the strength and stiffness needed to withstand the various applied loads. Although these fabrication methods minimise the number of joints through advanced manufacturing techniques, joints are still necessary because of component size and technological and logistical design limitations. In many structures, combining composites with metals such as aluminium or titanium brings design advantages. The objective of this work is to study, experimentally and by cohesive zone models (CZM), adhesive L-joints between aluminium and carbon-epoxy composite components subjected to peel loads, considering different joint configurations and adhesives of distinct ductility. The geometric parameters addressed are the aluminium adherend thickness (tP2) and the overlap length (LO). The numerical analysis enabled the study of the stress distributions, damage evolution, strength and failure modes. The experimental tests validate the numerical results and provide design guidelines for L-joints. It was shown that the geometry of the L adherend (aluminium) and the type of adhesive have a direct influence on the joint strength.
Abstract:
This work aims to characterize the levels and phase distribution of polycyclic aromatic hydrocarbons (PAHs) in the indoor air of a preschool environment and to assess the impact of outdoor PAH emissions on the indoor environment. Gaseous and particulate (PM1 and PM2.5) PAHs (the 16 USEPA priority pollutants, plus dibenzo[a,l]pyrene and benzo[j]fluoranthene) were concurrently sampled indoors and outdoors at one urban preschool located in the north of Portugal for 35 days. The total concentration of the 18 PAHs (ΣPAHs) in indoor air ranged from 19.5 to 82.0 ng/m3; gaseous compounds (range of 14.1–66.1 ng/m3) accounted for 85% of ΣPAHs. Particulate PAHs (range 0.7–15.9 ng/m3) were predominantly associated with PM1 (76% of particulate ΣPAHs), with 5-ring PAHs being the most abundant. Mean indoor/outdoor ratios (I/O) of individual PAHs indicated that outdoor emissions contributed significantly to PAH levels indoors; emissions from motor vehicles and fuel burning were the major sources.
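The quoted percentages and indoor/outdoor ratios reduce to simple ratios of measured concentrations; the sketch below shows that arithmetic with invented numbers only.

```python
# Illustrative arithmetic behind the quoted percentages (numbers are made up):
# gas-phase share of total PAHs and the indoor/outdoor (I/O) ratio.
gas_indoor = 40.0         # ng/m3, gaseous PAHs indoors (hypothetical)
particulate_indoor = 7.0  # ng/m3, particle-bound PAHs indoors (hypothetical)
total_outdoor = 55.0      # ng/m3, total PAHs outdoors (hypothetical)

total_indoor = gas_indoor + particulate_indoor
gas_fraction = 100.0 * gas_indoor / total_indoor
io_ratio = total_indoor / total_outdoor

print(f"gas-phase share indoors: {gas_fraction:.0f}%")
print(f"I/O ratio: {io_ratio:.2f}  (<1 suggests an important outdoor contribution)")
```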
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
An important aspect of tropical medicine is analysis of geographic aspects of risk of disease transmission, which for lack of detailed public health data must often be reduced to an understanding of the distributions of critical species such as vectors and reservoirs. We examine the applicability of a new technique, ecological niche modeling, to the challenge of understanding distributions of such species based on municipalities in the state of São Paulo in which a group of 5 Lutzomyia sandfly species have been recorded. The technique, when tested based on independent occurrence data, yielded highly significant predictions of species' distributions; minimum sample sizes for effective predictions were around 40 municipalities.
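The abstract does not state which significance test was used; purely as an illustration of one common approach to scoring niche-model predictions against independent records, the sketch below runs a one-sided binomial test asking whether more test points fall inside the predicted area than its proportional coverage would suggest (all numbers invented).

```python
# Hedged sketch of one common way to score niche-model predictions:
# a one-sided binomial test on independent occurrence points.
from scipy.stats import binomtest

n_test_points = 40               # independent occurrence records (e.g., municipalities)
n_hits = 33                      # records falling inside the predicted distribution
predicted_area_fraction = 0.35   # share of the study region predicted "present"

result = binomtest(n_hits, n_test_points, predicted_area_fraction, alternative="greater")
print(f"p-value = {result.pvalue:.2e}")   # small p => prediction beats chance
```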
Abstract:
Breast cancer is the most frequent cancer diagnosed in women. Scientific knowledge and technology have created many different strategies to treat this pathology. Radiotherapy (RT) is part of the current standard guidelines for most breast cancer treatments. However, radiation is a double-edged sword: although it may cure cancer, it may also induce secondary cancer. The contralateral breast (CLB) is an organ susceptible to absorbing dose during treatment of the other breast, and is therefore at significant risk of developing a secondary tumour. New radiation techniques, with more complex delivery strategies and promising results, are being implemented and used in radiotherapy departments. However, some questions have to be properly addressed, such as: Is it safe to move to complex techniques to achieve better conformation of the target volumes in breast radiotherapy? What happens to the target volumes and the surrounding healthy tissues? How accurate is the dose delivery? What are the shortcomings and limitations of currently used treatment planning systems (TPS)? The answers to these questions rely largely on Monte Carlo (MC) simulations using state-of-the-art computer programs to accurately model the different components of the equipment (target, filters, collimators, etc.) and obtain an adequate description of the radiation fields used, as well as a detailed geometric representation and the material composition of the organs and tissues involved. This work investigates the impact of treating left breast cancer using different radiotherapy techniques, f-IMRT (forwardly planned intensity-modulated RT), inversely planned IMRT (IMRT2, using 2 beams; IMRT5, using 5 beams) and dynamic conformal arc RT (DCART), and their effects on whole-breast irradiation and on the undesirable irradiation of the surrounding healthy tissues. Two algorithms of the iPlan BrainLAB TPS were used: Pencil Beam Convolution (PBC) and the commercial Monte Carlo (iMC). Furthermore, an accurate MC model of the linear accelerator used (a Trilogy, VARIAN Medical Systems) was built with the EGSnrc MC code, to accurately determine the doses that reach the CLB. For this purpose it was necessary to model the new High Definition multileaf collimator, which had never before been simulated. The model developed was then included in the EGSnrc MC package of the National Research Council Canada (NRC). The linac model was benchmarked against water measurements and later validated against the TPS calculations. The dose distributions in the planning target volume (PTV) and the doses to the organs at risk (OAR) were compared by analysing dose-volume histograms; further statistical analysis was performed using IBM SPSS v20 software. For PBC, all the techniques provided adequate coverage of the PTV. However, statistically significant dose differences were observed between the techniques in the PTV, the OAR and also in the pattern of dose distribution spreading into normal tissues. IMRT5 and DCART spread low doses into greater volumes of normal tissue (right breast, right lung, heart and even the left lung) than the tangential techniques (f-IMRT and IMRT2). However, IMRT5 plans improved the dose distributions in the PTV, exhibiting better conformity and homogeneity in the target and reduced high-dose percentages in the ipsilateral OAR. DCART did not present advantages over any of the other techniques investigated. Differences were also found between the calculation algorithms: PBC estimated higher doses for the PTV, ipsilateral lung and heart than the MC algorithms predicted. The MC algorithms presented similar results to each other (within 2% differences). The PBC algorithm was considered not accurate for determining the dose in heterogeneous media and in build-up regions; therefore, a major effort is being made at the clinic to acquire data to move from PBC to another calculation algorithm. Despite better PTV homogeneity and conformity, there is an increased risk of CLB cancer development when using non-tangential techniques. The overall results of the studies performed confirm the outstanding predictive power and accuracy in the assessment and calculation of dose distributions in organs and tissues made possible by the use of MC simulation techniques in RT TPS.
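The plan comparison above relies on dose-volume histograms; as a hedged sketch with invented voxel doses, the code below computes a cumulative DVH for one structure and two common summary metrics.

```python
# Hedged sketch: cumulative dose-volume histogram (DVH) for one structure,
# i.e. the fraction of voxels receiving at least each dose level (doses invented).
import numpy as np

rng = np.random.default_rng(5)
voxel_doses = rng.normal(loc=50.0, scale=3.0, size=20_000)   # synthetic voxel doses [Gy]

dose_axis = np.linspace(0.0, voxel_doses.max(), 200)
dvh = np.array([(voxel_doses >= d).mean() * 100.0 for d in dose_axis])

d95 = np.percentile(voxel_doses, 5)          # dose covering 95% of the volume
v50 = (voxel_doses >= 50.0).mean() * 100.0   # volume fraction receiving >= 50 Gy
print(f"D95 = {d95:.1f} Gy, V50Gy = {v50:.1f}%")
print(f"DVH point: {dvh[150]:.1f}% of the volume receives >= {dose_axis[150]:.1f} Gy")
```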