958 results for 250604 Radiation and Matter


Relevance: 100.00%

Abstract:

An early molecular response to DNA double-strand breaks (DSBs) is phosphorylation of the Ser-139 residue within the terminal SQEY motif of the histone H2AX1,2. This phosphorylation of H2AX is mediated by the phosphatidylinositol 3-kinase (PI3K) family of proteins: ataxia telangiectasia mutated (ATM), the DNA-dependent protein kinase catalytic subunit, and ATM and RAD3-related (ATR)3. The phosphorylated form of H2AX, referred to as γH2AX, spreads from the site of the DSB to adjacent regions of chromatin, forming discrete foci that are easily visualized by immunofluorescence microscopy3. Analysis and quantitation of γH2AX foci have been widely used to evaluate DSB formation and repair, particularly in response to ionizing radiation and for evaluating the efficacy of various radiation-modifying and cytotoxic compounds. Given the exquisite specificity and sensitivity of this de novo marker of DSBs, it has provided new insights into the processes of DNA damage and repair in the context of chromatin. For example, in radiation biology the central paradigm is that nuclear DNA is the critical target with respect to radiation sensitivity. Indeed, the general consensus in the field has largely been to view chromatin as a homogeneous template for DNA damage and repair. However, with the use of γH2AX as a molecular marker of DSBs, a disparity in γ-irradiation-induced γH2AX foci formation between euchromatin and heterochromatin has been observed5-7. Recently, we used a panel of antibodies against histone H3 mono-, di- or tri-methylated at lysine 9 (H3K9me1, H3K9me2, H3K9me3), which are epigenetic imprints of constitutive heterochromatin and transcriptional silencing, or at lysine 4 (H3K4me1, H3K4me2, H3K4me3), which are tightly correlated with actively transcribed euchromatic regions, to investigate the spatial distribution of γH2AX following ionizing radiation8.
In accordance with the prevailing ideas regarding chromatin biology, our findings indicated a close correlation between γH2AX formation and active transcription9. Here we demonstrate our immunofluorescence method for the detection and quantitation of γH2AX foci in non-adherent cells, with a particular focus on co-localization with other epigenetic markers, image analysis and 3D modeling.
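Automated focus scoring typically thresholds each nucleus image and counts the connected bright regions. Below is a minimal sketch of that counting step only; the threshold choice, background subtraction and image I/O are assumed to happen upstream, and real pipelines use dedicated image-analysis software:

```python
def count_foci(mask):
    """Count discrete foci in a binary 2-D mask (1 = above-threshold pixel)
    by 4-connected component labelling via iterative flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    foci = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                foci += 1                 # found a new, unvisited focus
                stack = [(r, c)]
                while stack:              # flood-fill the whole component
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return foci
```

On real data the mask would come from thresholding the γH2AX channel within a segmented nucleus; 8-connectivity or watershed splitting of touching foci are common refinements.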

Relevance: 100.00%

Abstract:

Androgens regulate biological pathways to promote proliferation, differentiation, and survival of benign and malignant prostate tissue. Androgen receptor (AR) targeted therapies exploit this dependence and are used in advanced prostate cancer to control disease progression. Contemporary treatment regimens involve sequential use of inhibitors of androgen synthesis or AR function. Although targeting the androgen axis has clear therapeutic benefit, its effectiveness is temporary, as prostate tumor cells adapt to survive and grow. The removal of androgens (androgen deprivation) has been shown to activate both epithelial-to-mesenchymal transition (EMT) and neuroendocrine transdifferentiation (NEtD) programs. EMT has established roles in promoting biological phenotypes associated with tumor progression (migration/invasion, tumor cell survival, cancer stem cell-like properties, resistance to radiation and chemotherapy) in multiple human cancer types. NEtD in prostate cancer is associated with resistance to therapy, visceral metastasis, and aggressive disease. Thus, activation of these programs via inhibition of the androgen axis provides a mechanism by which tumor cells can adapt to promote disease recurrence and progression. Brachyury, Axl, MEK, and Aurora kinase A are molecular drivers of these programs, and inhibitors are currently in clinical trials to determine therapeutic applications. Understanding tumor cell plasticity will be important in further defining the rational use of androgen-targeted therapies clinically and provides an opportunity for intervention to prolong survival of men with metastatic prostate cancer.

Relevance: 100.00%

Abstract:

Brain asymmetry, or the structural and functional specialization of each brain hemisphere, has fascinated neuroscientists for over a century. Even so, the genetic and environmental factors that influence brain asymmetry are largely unknown. Diffusion tensor imaging (DTI) now allows asymmetry to be studied at a microscopic scale by examining differences in fiber characteristics across hemispheres rather than differences in structure shapes and volumes. Here we analyzed 4 Tesla DTI scans from 374 healthy adults, including 60 monozygotic twin pairs, 45 same-sex dizygotic pairs, and 164 mixed-sex DZ twins and their siblings (mean age: 24.4 ± 1.9 years). All DTI scans were nonlinearly aligned to a geometrically-symmetric, population-based image template. We computed voxel-wise maps of significant asymmetries (left/right differences) for common diffusion measures that reflect fiber integrity (fractional anisotropy, FA; geodesic anisotropy, GA; and mean diffusivity, MD). In quantitative genetic models computed from all same-sex twin pairs (N=210 subjects), genetic factors accounted for 33% of the variance in asymmetry for the inferior fronto-occipital fasciculus, 37% for the anterior thalamic radiation, and 20% for the forceps major and uncinate fasciculus (all L > R). Shared environmental factors accounted for around 15% of the variance in asymmetry for the cortico-spinal tract (R > L) and about 10% for the forceps minor (L > R). Sex differences in asymmetry (men > women) were significant, and were greatest in regions with prominent FA asymmetries. These maps identify heritable DTI-derived features, and may empower genome-wide searches for genetic polymorphisms that influence brain asymmetry.
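The quantitative genetic models referred to here partition trait variance into additive genetic (A), shared environmental (C) and unique environmental (E) components. A back-of-the-envelope version of this decomposition uses Falconer's formulas on the MZ and DZ twin correlations; the study itself fits maximum-likelihood structural models, and the correlations below are made-up illustration values:

```python
def ace_estimates(r_mz, r_dz):
    """Falconer-style ACE decomposition from twin pair correlations:
    additive genetic variance A = 2*(rMZ - rDZ), shared environment
    C = 2*rDZ - rMZ, unique environment (plus error) E = 1 - rMZ."""
    a2 = 2.0 * (r_mz - r_dz)
    c2 = 2.0 * r_dz - r_mz
    e2 = 1.0 - r_mz
    return a2, c2, e2

# Hypothetical correlations for some asymmetry measure:
a2, c2, e2 = ace_estimates(r_mz=0.52, r_dz=0.35)
```

The three estimated proportions sum to one by construction, mirroring the percentages of variance quoted in the abstract.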

Relevance: 100.00%

Abstract:

The properties of the manifold of a Lie group G, fibered by the cosets of a subgroup H, are exploited to obtain a geometrical description of gauge theories in space-time G/H. Gauge potentials and matter fields are pullbacks of equivariant fields on G. Our concept of a connection is more restricted than that in the similar scheme of Ne'eman and Regge, so that its degrees of freedom are just those of a set of gauge potentials for G, on G/H, with no redundant components. The "translational" gauge potentials give rise in a natural way to a nonsingular tetrad on G/H. The underlying group G to be gauged is the group G of left translations on the manifold G and is associated with a "trivial" connection, namely the Maurer-Cartan form. Gauge transformations are all those diffeomorphisms on G that preserve the fiber-bundle structure.
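The "trivial" connection mentioned here, the Maurer-Cartan form ω on G, is the left-invariant Lie-algebra-valued one-form satisfying the Maurer-Cartan structure equation, i.e. it has vanishing curvature, which is why gauging the left translations with it introduces no new dynamics:

```latex
\omega = g^{-1}\,\mathrm{d}g,
\qquad
\mathrm{d}\omega + \tfrac{1}{2}\,[\omega, \omega] = 0 .
```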

Relevance: 100.00%

Abstract:

Maize is one of the most important crops in the world. The products generated from this crop are largely used in the starch industry, the animal and human nutrition sector, and biomass energy production and refineries. For these reasons, there is much interest in estimating the potential grain yield of maize genotypes in relation to the environment in which they will be grown, as productivity directly affects agribusiness or farm profitability. Such questions can be investigated with ecophysiological crop models, which can be organized according to different philosophies and structures. The main objective of this work is to conceptualize a stochastic model for predicting maize grain yield and productivity under different conditions of water supply while considering the uncertainties of daily climate data. Therefore, one focus is to explain the model construction in detail, and the other is to present some results in light of the philosophy adopted. A deterministic model was built as the basis for the stochastic model. The former performed well in terms of the curve shape of the above-ground dry matter over time as well as the grain yield under full and moderate water deficit conditions. Through the use of a triangular distribution for the harvest index and a bivariate normal distribution of the averaged daily solar radiation and air temperature, the stochastic model satisfactorily simulated grain productivity: the most likely grain productivity was found to be 10,604 kg ha⁻¹, very similar to the productivity simulated by the deterministic model and to the real conditions based on a field experiment. © 2012 American Society of Agricultural and Biological Engineers.
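The stochastic machinery described above, a triangular harvest-index distribution combined with a bivariate normal for daily radiation and temperature, can be sketched in a few lines. All parameter values below (means, spreads, correlation, and the toy biomass response) are invented for illustration and are not the paper's calibration:

```python
import random
import statistics

def simulate_yield(n=10_000, seed=42):
    """Monte Carlo sketch: grain yield from a triangular harvest index and
    correlated daily radiation/temperature (all parameters hypothetical)."""
    rng = random.Random(seed)
    mu_rad, sd_rad = 18.0, 4.0    # daily solar radiation, MJ m-2 d-1
    mu_tmp, sd_tmp = 24.0, 3.0    # air temperature, deg C
    rho = 0.5                     # radiation/temperature correlation
    yields = []
    for _ in range(n):
        # bivariate normal sample via the Cholesky factor of the 2x2 covariance
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        rad = mu_rad + sd_rad * z1
        tmp = mu_tmp + sd_tmp * (rho * z1 + (1.0 - rho**2) ** 0.5 * z2)
        # triangular distribution for harvest index: (low, high, mode)
        hi = rng.triangular(0.40, 0.55, 0.48)
        # toy biomass response: radiation-driven growth penalised by heat stress
        biomass = 140.0 * max(rad, 0.0) * max(1.0 - 0.02 * abs(tmp - 25.0), 0.0)
        yields.append(biomass * hi)
    return statistics.mean(yields)
```

Repeating the simulation many times yields a distribution of grain yields, whose mode plays the role of the "most likely productivity" reported in the abstract.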

Relevance: 100.00%

Abstract:

Tillering determines the plant size of sorghum (Sorghum bicolor), and an understanding of its regulation is important to match genotypes to prevalent growing conditions in target production environments. The aim of this study was to determine the physiological and environmental regulation of variability in tillering among sorghum genotypes, and to develop a framework for this regulation. Diverse sorghum genotypes were grown in three experiments with contrasting temperature, radiation and plant density to create variation in tillering. Data on phenology, tillering, and leaf and plant size were collected. A carbohydrate supply/demand (S/D) index that incorporated environmental and genotypic parameters was developed to represent the effects of assimilate availability on tillering. Genotypic differences in tillering not explained by this index were defined as propensity to tiller (PTT) and probably represent hormonal effects. Genotypic variation in tillering was associated with differences in leaf width, stem diameter and PTT. The S/D index captured most of the environmental effects on tillering, and PTT most of the genotypic effects. A framework that captures the genetic and environmental regulation of tillering through assimilate availability and PTT was developed, providing a basis for the development of a model that connects the genetic control of tillering to its phenotypic consequences.

Relevance: 100.00%

Abstract:

By detecting leading protons produced in the Central Exclusive Diffractive process, p+p → p+X+p, one can measure the missing mass and scan for possible new particle states such as the Higgs boson. This process augments, in a model-independent way, the standard methods for new particle searches at the Large Hadron Collider (LHC) and will allow detailed analyses of the produced central system, such as the spin-parity properties of the Higgs boson. The exclusive central diffractive process makes possible precision studies of gluons at the LHC and complements the physics scenarios foreseen at the next e+e− linear collider. This thesis first presents the conclusions of the first systematic analysis of the expected precision of the leading proton momentum measurement and the accuracy of the reconstructed missing mass. In this initial analysis, the scattered protons are tracked along the LHC beam line, and the uncertainties expected in beam transport and in the detection of the scattered leading protons are accounted for. The main focus of the thesis is on developing the radiation-hard precision detector technology necessary for coping with the extremely demanding experimental environment of the LHC. This will be achieved by using a 3D silicon detector design, which, in addition to radiation hardness up to 5×10¹⁵ neutrons/cm², offers properties such as a high signal-to-noise ratio, fast signal response to radiation, and sensitivity close to the very edge of the detector. This work reports on the development of a novel semi-3D detector design that simplifies the 3D fabrication process but conserves the necessary properties of the 3D detector design required in the LHC and in other imaging applications.
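For nearly forward scattering, the missing mass follows from four-momentum conservation, M_X² = (p₁ + p₂ − p₁′ − p₂′)², which for small fractional momentum losses ξ₁, ξ₂ of the two leading protons reduces to M_X² ≈ ξ₁ξ₂s. A minimal numerical check, neglecting the transverse momenta of the scattered protons:

```python
import math

M_P = 0.938272  # proton mass in GeV

def four_momentum(pz):
    """On-shell proton four-vector (E, px, py, pz) with purely longitudinal momentum."""
    return (math.hypot(pz, M_P), 0.0, 0.0, pz)

def missing_mass(sqrt_s, xi1, xi2):
    """Missing mass in p+p -> p+X+p from the fractional longitudinal
    momentum losses xi1, xi2 of the two leading protons."""
    e_beam = sqrt_s / 2.0
    pz = math.sqrt(e_beam**2 - M_P**2)
    beams = [four_momentum(pz), four_momentum(-pz)]
    leading = [four_momentum((1.0 - xi1) * pz), four_momentum(-(1.0 - xi2) * pz)]
    # missing four-momentum = sum(beams) - sum(leading protons)
    miss = [sum(b[i] for b in beams) - sum(p[i] for p in leading) for i in range(4)]
    m2 = miss[0]**2 - miss[1]**2 - miss[2]**2 - miss[3]**2
    return math.sqrt(max(m2, 0.0))
```

At the nominal LHC energy √s = 14 TeV, protons each losing 1% of their momentum tag a central system of M_X ≈ 140 GeV, in the mass range once anticipated for the Higgs boson.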

Relevance: 100.00%

Abstract:

There is a growing need to understand the exchange processes of momentum, heat and mass between an urban surface and the atmosphere, as they affect our quality of life. Understanding the source/sink strengths as well as the mixing mechanisms of air pollutants is particularly important due to their effects on human health and climate. This work aims to improve our understanding of these surface-atmosphere interactions based on the analysis of measurements carried out in Helsinki, Finland. The vertical exchange of momentum, heat, carbon dioxide (CO2) and aerosol particle number was measured with the eddy covariance technique at the urban measurement station SMEAR III, where the concentrations of ultrafine, accumulation mode and coarse particle numbers, nitrogen oxides (NOx), carbon monoxide (CO), ozone (O3) and sulphur dioxide (SO2) were also measured. These measurements were carried out over varying measurement periods between 2004 and 2008. In addition, black carbon mass concentration was measured at the Helsinki Metropolitan Area Council site during three campaigns in 1996-2005. The analyzed dataset thus constitutes by far the most comprehensive set of long-term turbulent flux measurements from urban areas reported in the literature. Moreover, simultaneously measured urban air pollution concentrations and turbulent fluxes were examined for the first time. The complex measurement surroundings enabled us to study the effect of different urban covers on the exchange processes from a single point of measurement. The sensible and latent heat fluxes closely followed the intensity of solar radiation, and the sensible heat flux always exceeded the latent heat flux due to anthropogenic heat emissions and the conversion of solar radiation to direct heat in urban structures. This urban heat island effect was most evident during winter nights. The effect of land use cover was seen as increased sensible heat fluxes in more built-up areas than in areas with high vegetation cover.
Both aerosol particle and CO2 exchanges were largely affected by road traffic, and the highest diurnal fluxes reached 10⁹ m⁻² s⁻¹ and 20 µmol m⁻² s⁻¹, respectively, in the direction of the road. Local road traffic had the greatest effect on ultrafine particle concentrations, whereas meteorological variables were more important for accumulation mode and coarse particle concentrations. The measurement surroundings of the SMEAR III station served as a source for both particles and CO2, except in summer, when the vegetation uptake of CO2 exceeded the anthropogenic sources in the vegetation sector in daytime, and we observed a downward median flux of 8 µmol m⁻² s⁻¹. This work improved our understanding of the interactions between an urban surface and the atmosphere in a city located at high latitudes in a semi-continental climate. The results can be utilised in urban planning, as the fraction of vegetation cover and vehicular activity were found to be the major environmental drivers affecting most of the exchange processes. However, in order to understand these exchange and mixing processes on a city scale, more measurements above various urban surfaces accompanied by numerical modelling are required.
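The eddy covariance technique used here obtains the vertical turbulent flux of a scalar as the time-averaged product of the fluctuations of vertical wind speed w and scalar concentration c about their means (Reynolds decomposition), F = ⟨w′c′⟩. A minimal sketch with synthetic numbers; real processing adds coordinate rotation, detrending, spectral corrections, and so on:

```python
from statistics import mean

def eddy_flux(w, c):
    """Turbulent flux F = <w'c'>: mean product of the deviations of
    vertical wind w and scalar concentration c from the period means."""
    w_bar, c_bar = mean(w), mean(c)
    return mean((wi - w_bar) * (ci - c_bar) for wi, ci in zip(w, c))

# Toy series: updrafts (w > 0) systematically carry higher concentrations,
# so the covariance is positive, i.e. net upward transport (a surface source).
w = [0.4, -0.3, 0.5, -0.6, 0.2, -0.2]
c = [402.0, 398.0, 403.0, 396.0, 401.0, 399.0]
```

For these toy series the flux works out to +0.95 (concentration units times m s⁻¹); a surface sink, such as daytime CO2 uptake by vegetation, would give a negative covariance instead.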

Relevance: 100.00%

Abstract:

Birds represent the most diverse extant tetrapod clade, with ca. 10,000 living species, and the timing of the crown avian radiation remains hotly debated. The fossil record supports a primarily Cenozoic radiation of crown birds, whereas molecular divergence dating analyses generally imply that this radiation was well underway during the Cretaceous. Furthermore, substantial differences have been noted between published divergence estimates. These have been variously attributed to clock model, calibration regime, and gene type. One underappreciated phenomenon is that disparity between fossil ages and molecular dates tends to be proportionally greater for shallower nodes in the avian Tree of Life. Here, we explore potential drivers of disparity in avian divergence dates through a set of analyses applying various calibration strategies and coding methods to a mitochondrial genome dataset and an 18-gene nuclear dataset, both sampled across 72 taxa. Our analyses support the occurrence of two deep divergences (i.e., the Palaeognathae/Neognathae split and the Galloanserae/Neoaves split) well within the Cretaceous, followed by a rapid radiation of Neoaves near the K-Pg boundary. However, 95% highest posterior density intervals for most basal divergences in Neoaves cross the boundary, and we emphasize that, barring unreasonably strict prior distributions, distinguishing between a rapid Early Paleocene radiation and a Late Cretaceous radiation may be beyond the resolving power of currently favored divergence dating methods. In contrast to recent observations for placental mammals, constraining all divergences within Neoaves to occur in the Cenozoic does not result in unreasonably high inferred substitution rates.
Comparisons of nuclear DNA (nDNA) versus mitochondrial DNA (mtDNA) datasets and NT- versus RY-coded mitochondrial data reveal patterns of disparity that are consistent with substitution model misspecifications that result in tree compression/tree extension artifacts, which may explain some discordance between previous divergence estimates based on different sequence types. Comparisons of fully calibrated and nominally calibrated trees support a correlation between body mass and apparent dating error. Overall, our results are consistent with (but do not require) a Paleogene radiation for most major clades of crown birds.
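The RY coding compared above collapses nucleotides into purines (R = A/G) and pyrimidines (Y = C/T), discarding the transition substitutions that saturate fastest in mitochondrial data and damping base-composition bias. A minimal recoding helper:

```python
RY_MAP = {"A": "R", "G": "R", "C": "Y", "T": "Y", "U": "Y"}

def ry_recode(sequence):
    """Recode a nucleotide sequence to purine/pyrimidine (RY) classes;
    gaps and ambiguity codes are passed through unchanged."""
    return "".join(RY_MAP.get(base, base) for base in sequence.upper())
```

After recoding, only transversions remain informative, which is why NT- versus RY-coded analyses can respond differently to substitution model misspecification.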

Relevance: 100.00%

Abstract:

Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is mostly carried out using megavoltage beams from linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in tissues. Normally this dependence is steep, so it is crucial that the actual dose within the patient accurately corresponds to the planned dose. All factors in an RT procedure contain uncertainties, requiring strict quality assurance. From the hospital physicist's point of view, technical quality control (QC), dose calculations and methods for verifying the correct treatment location are the most important subjects. The most important factor in technical QC is verifying that the radiation production of an accelerator, called output, stays within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC program defining the measurement time interval and action levels. Dose calculation algorithms need to be configured for the accelerators using measured beam data. The uncertainty of such data sets the limits for the best achievable calculation accuracy. All these dosimetric measurements require good experience, are laborious, take up resources needed for treatments, and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity-modulated radiation therapy (IMRT) than in conventional RT. This is due to the steep dose gradients produced within, or close to, healthy tissues located only a few millimetres from the targeted volume. The thesis concentrated on investigating the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data, and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT.
A method was developed for estimating the effect of using different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account the local output stability and the reproducibility of the dosimetric QC measurements. A method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, and a sufficient accuracy level was estimated for the beam data. A method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling the lowering of action levels and the prolongation of the measurement time interval from 1 month to as long as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to facilitate the avoidance of maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
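An output constancy check of the kind these QC programs prescribe reduces to comparing a measured dose against the baseline and an action level. A schematic version; the 2% default action level is illustrative, since actual levels are set locally, and the thesis analyses how they interact with the measurement interval:

```python
def output_check(measured_gy, baseline_gy, action_level=0.02):
    """Return the relative output deviation and whether it is within the
    action level; exceeding the level would trigger investigation or
    recalibration of the accelerator."""
    deviation = measured_gy / baseline_gy - 1.0
    return deviation, abs(deviation) <= action_level
```

For example, output_check(1.012, 1.000) reports a +1.2% drift, still inside a 2% action level; trending such deviations over time is what allows the random and systematic components to be separated by model fitting.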

Relevance: 100.00%

Abstract:

Diagnostic radiology represents the largest man-made contribution to population radiation doses in Europe. To keep the ratio of diagnostic benefit to radiation risk as high as possible, it is important to understand the quantitative relationship between the patient radiation dose and the various factors which affect it, such as the scan parameters, scan mode, and patient size. Paediatric patients have a higher probability of late radiation effects, since a longer life expectancy is combined with the higher radiation sensitivity of the developing organs. Experience with particular paediatric examinations may be very limited, and paediatric acquisition protocols may not be optimised. The purpose of this thesis was to enhance and compare different dosimetric protocols, to promote the establishment of paediatric diagnostic reference levels (DRLs), and to provide new data on patient doses for optimisation purposes in computed tomography (with new applications for dental imaging) and in paediatric radiography. Large variations in radiation exposure in paediatric skull, sinus, chest, pelvic and abdominal radiography examinations were discovered in patient dose surveys. There were variations between different hospitals and examination rooms, between different-sized patients, and between imaging techniques, emphasising the need for harmonisation of the examination protocols. For computed tomography, a correction coefficient was created which takes the individual patient's size into account in patient dosimetry. The presented patient size correction method can be used for both adult and paediatric purposes. Dental cone beam CT scanners provided adequate image quality for dentomaxillofacial examinations while delivering considerably smaller effective doses to the patient compared with multi-slice CT. However, large dose differences between cone beam CT scanners were not explained by differences in image quality, which indicated a lack of optimisation.
For paediatric radiography, a graphical method was created for setting the diagnostic reference levels in chest examinations, and the DRLs were given as a function of patient projection thickness. Paediatric DRLs were also given for sinus radiography. The detailed information about the patient data, exposure parameters and procedures provided tools for reducing patient doses in paediatric radiography. The mean tissue doses presented for paediatric radiography enable future risk assessments to be made. The calculated effective doses can be used for comparing different diagnostic procedures, as well as for comparing the use of similar technologies and procedures in different hospitals and countries.
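A size-dependent correction of the kind described for CT dosimetry can be sketched as an exponential conversion factor applied to the phantom-referenced dose index: smaller patients absorb proportionally more dose for the same scanner output. The functional form below mirrors size-specific dose estimate practice, but the coefficients are purely illustrative and are not the thesis's fitted values:

```python
import math

def size_corrected_dose(ctdi_vol_mgy, patient_diameter_cm, a=3.7, b=0.037):
    """Apply a hypothetical exponential size-correction factor
    f(d) = a * exp(-b * d) to a phantom-referenced dose index (mGy)."""
    return ctdi_vol_mgy * a * math.exp(-b * patient_diameter_cm)
```

With coefficients of this shape the correction factor exceeds 1 for small (paediatric) diameters and falls toward 1 and below for large adults, which is exactly why uncorrected phantom values understate paediatric doses.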

Relevance: 100.00%

Abstract:

The acceleration of the universe has been established but not explained. During the past few years, precise cosmological experiments have confirmed the standard big bang scenario of a flat universe that underwent an inflationary expansion in its earliest stages, during which the perturbations were generated that eventually formed into galaxies and other structure in matter, most of which is non-baryonic dark matter. Curiously, the universe has presently entered into another period of acceleration. Such a result is inferred from observations of extra-galactic supernovae and is independently supported by the cosmic microwave background radiation and large-scale structure data. It seems there is a positive cosmological constant speeding up the universal expansion of space. The vacuum energy density the constant describes should then be about a dozen times the present energy density in visible matter, yet particle physics scales are enormously larger than that. This is the cosmological constant problem, perhaps the greatest mystery of contemporary cosmology. In this thesis we explore alternative agents of the acceleration, generically called dark energy. If some symmetry turns off vacuum energy, its value is not a problem, but one still needs some dark energy. Such could be a scalar field dynamically evolving in its potential, or some other exotic constituent exhibiting negative pressure. Another option is to assume that gravity at cosmological scales is not well described by general relativity. In a modified theory of gravity one might find the expansion rate increasing in a universe filled by just dark matter and baryons. Such possibilities are taken here under investigation. The main goal is to uncover the observational consequences of different models of dark energy, with emphasis on their implications for the formation of the large-scale structure of the universe.
The possible properties of dark energy are investigated using phenomenological parameterizations, but several specific models are also considered in detail. Difficulties in unifying dark matter and dark energy into a single concept are pointed out. Considerable attention is given to modifications of gravity resulting in second-order field equations. It is shown that in a general class of such models the viable ones effectively represent the cosmological constant, while in another class one might find interesting modifications of the standard cosmological scenario that are still allowed by observations. The thesis consists of seven research papers preceded by an introductory discussion.
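The sense in which a positive Λ, or any constituent with sufficiently negative pressure, speeds up the expansion is captured by the second Friedmann (acceleration) equation; in units with c = 1:

```latex
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) + \frac{\Lambda}{3},
\qquad
\ddot a > 0 \;\Longleftrightarrow\; \rho + 3p < \frac{\Lambda}{4\pi G}.
```

Equivalently, absorbing Λ into the fluid as a component with p = −ρ, acceleration requires an effective equation of state w = p/ρ < −1/3, which is the defining property of dark energy candidates.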

Relevance: 100.00%

Abstract:

The polar regions are an energy sink of the Earth system, as the Sun's rays do not reach the Poles for half of the year and hit them only at very low angles during the other half. In summer, solar radiation is the dominant energy source for the polar areas; therefore, even small changes in the surface albedo strongly affect the surface energy balance and, thus, the speed and amount of snow and ice melting. In winter, the main heat sources for the atmosphere are the cyclones approaching from lower latitudes, and atmosphere-surface heat transfer takes place through turbulent mixing and longwave radiation, the latter dominated by clouds. The aim of this thesis is to improve knowledge of the surface and atmospheric processes that control the surface energy budget over snow and ice, with particular focus on albedo during the spring and summer seasons, and on horizontal advection of heat, cloud longwave forcing, and turbulent mixing during the winter season. The critical importance of a correct albedo representation in models is illustrated through an analysis of the causes of the errors in the surface and near-surface air temperatures produced in a short-range numerical weather forecast by the HIRLAM model. Then, the daily and seasonal variability of snow and ice albedo were examined by analysing field measurements of albedo carried out in different environments. On the basis of the data analysis, simple albedo parameterizations were derived, which can be implemented in thermodynamic sea ice models, as well as in numerical weather prediction and climate models. Field measurements of radiation and turbulent fluxes over the Bay of Bothnia (Baltic Sea) also allowed examination of the impact of a large albedo change during the melting season on the surface energy and ice mass budgets.
When high contrasts in surface albedo are present, as in the case of snow-covered areas next to open water, the effect of surface albedo heterogeneity on the downwelling solar irradiance under overcast conditions is very significant, although it is usually not accounted for in single-column radiative transfer calculations. To account for this effect, an effective albedo parameterization based on three-dimensional Monte Carlo radiative transfer calculations was developed. To test a potentially relevant application of the effective albedo parameterization, its performance in the ground-based retrieval of cloud optical depth was illustrated. Finally, the factors causing the large variations of the surface and near-surface temperatures over the Central Arctic during winter were examined. The relative importance of cloud radiative forcing, turbulent mixing, and lateral heat advection for the Arctic surface temperature was quantified through the analysis of direct observations from Russian drifting ice stations, with the lateral heat advection calculated from reanalysis products.
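Simple albedo parameterizations of the sort derived here often represent the aging of snow after a snowfall as an exponential relaxation from the fresh-snow albedo toward that of old, wet snow. A schematic form with illustrative constants, not the values fitted to the thesis measurements:

```python
import math

def snow_albedo(days_since_snowfall, alb_fresh=0.85, alb_old=0.50, tau_days=7.0):
    """Exponential aging: albedo relaxes from the fresh-snow value toward
    the old-snow value with e-folding time tau (all constants illustrative)."""
    decay = math.exp(-days_since_snowfall / tau_days)
    return alb_old + (alb_fresh - alb_old) * decay
```

A new snowfall would reset the clock to zero; in thermodynamic sea ice or NWP models, such a parameterization is typically also made dependent on surface temperature or melt state.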

Relevance: 100.00%

Abstract:

Solar UV radiation is harmful to life on planet Earth, but fortunately atmospheric oxygen and ozone absorb the most energetic UVC photons almost entirely. However, part of the UVB radiation and much of the UVA radiation reaches the surface of the Earth, affecting human health, the environment and materials, and driving atmospheric and aquatic photochemical processes. In order to quantify these effects and processes, there is a need for ground-based UV measurements and radiative transfer modeling to estimate the amounts of UV radiation reaching the biosphere. Satellite measurements, with their near-global spatial coverage and long-term data continuity, offer an attractive option for estimating the surface UV radiation. This work focuses on radiative transfer theory based methods used to estimate the UV radiation reaching the surface of the Earth. The objectives of the thesis were to implement the surface UV algorithm originally developed at NASA Goddard Space Flight Center for estimating the surface UV irradiance from the measurements of the Dutch-Finnish built Ozone Monitoring Instrument (OMI), to improve the original surface UV algorithm especially in relation to snow cover, to validate the OMI-derived daily surface UV doses against ground-based measurements, and to demonstrate how the satellite-derived surface UV data can be used to study the effects of UV radiation. The thesis consists of seven original papers and a summary. The summary includes an introduction to the OMI instrument, a review of the methods used for modeling the surface UV using satellite data, and the conclusions of the main results of the original papers. The first two papers describe the algorithm used for estimating the surface UV amounts from the OMI measurements, as well as the unique Very Fast Delivery processing system developed for processing the OMI data received at the Sodankylä satellite data centre.
The third and fourth papers present algorithm improvements related to the surface UV albedo of snow-covered land. The fifth paper presents the results of a comparison of the OMI-derived daily erythemal doses with those calculated from ground-based measurement data. It gives an estimate of the expected accuracy of the OMI-derived surface UV doses for various atmospheric and other conditions, and discusses the causes of the differences between the satellite-derived and ground-based data. The last two papers demonstrate the use of the satellite-derived surface UV data: the sixth paper presents an assessment of photochemical decomposition rates in the aquatic environment, and the seventh presents the use of satellite-derived daily surface UV doses for planning outdoor material weathering tests.
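Erythemal ("sunburning") doses such as those validated against ground-based data are obtained by weighting the spectral UV irradiance with the CIE erythemal action spectrum and integrating over wavelength and time. The piecewise weighting function below is the standard CIE (1987) form; the one-line integral around it is schematic:

```python
def cie_erythemal_weight(wavelength_nm):
    """CIE (1987) erythemal action spectrum: weight 1 below 298 nm,
    then two log-linear segments out to 400 nm."""
    if wavelength_nm <= 298.0:
        return 1.0
    if wavelength_nm <= 328.0:
        return 10.0 ** (0.094 * (298.0 - wavelength_nm))
    if wavelength_nm <= 400.0:
        return 10.0 ** (0.015 * (140.0 - wavelength_nm))
    return 0.0

def erythemal_irradiance(spectrum_w_m2_nm):
    """Erythemally weighted irradiance (W m-2) from (wavelength, spectral
    irradiance) pairs on a 1-nm grid (rectangle-rule integration)."""
    return sum(e * cie_erythemal_weight(lam) for lam, e in spectrum_w_m2_nm)
```

Integrating the weighted irradiance over a day gives the daily erythemal dose (J m⁻²) that the satellite product reports.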

Relevance: 100.00%

Abstract:

The main method of modifying the properties of semiconductors is to introduce small amounts of impurities into the material. This is used to control the magnetic and optical properties of materials and to realize p- and n-type semiconductors out of intrinsic material in order to manufacture fundamental components such as diodes. As diffusion can be described as random mixing of material due to the thermal movement of atoms, it is essential to know the diffusion behavior of the impurities in order to manufacture working components. In the modified radiotracer technique, diffusion is studied using radioactive isotopes of elements as tracers. The technique is called modified because the atoms are deployed inside the material by ion beam implantation. With ion implantation, a distinct distribution of impurities can be deployed beneath the sample surface with good control over the amount of implanted atoms. As the electromagnetic radiation and other nuclear decay products emitted by radioactive materials can be easily detected, only a very low amount of impurities is needed. This makes it possible to study diffusion in pure materials without essentially modifying the initial properties by doping. In this thesis, the modified radiotracer technique is used to study the diffusion of beryllium in GaN, ZnO, SiGe and glassy carbon. GaN, ZnO and SiGe are of great interest to the semiconductor industry, and beryllium, as a small and possibly rapid dopant, hasn't been studied previously using the technique. Glassy carbon has been added to demonstrate the feasibility of the technique. In addition, the diffusion of magnetic impurities, Mn and Co, has been studied in GaAs and ZnO, respectively, with spintronic applications in mind.
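In such radiotracer experiments, an implanted near-Gaussian impurity profile broadens during an isothermal anneal according to Fick's law, so the diffusion coefficient can be read off from the growth of the profile variance, σ_t² = σ₀² + 2Dt. A minimal extraction, with illustrative numbers in the usage note:

```python
def diffusion_coefficient(sigma_initial, sigma_annealed, anneal_time):
    """Fickian broadening of a Gaussian profile: sigma_t^2 = sigma_0^2 + 2*D*t,
    hence D = (sigma_t^2 - sigma_0^2) / (2*t). D comes out in
    (length unit)^2 per (time unit of anneal_time)."""
    return (sigma_annealed**2 - sigma_initial**2) / (2.0 * anneal_time)
```

For example, a profile broadening from σ₀ = 50 nm to σ_t = 70 nm during a 600 s anneal corresponds to D = 2 nm² s⁻¹; repeating the anneal at several temperatures then yields the activation energy via an Arrhenius fit.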