903 results for Simulated environment (Teaching method)
Abstract:
An article by Grandin sharing tips for teaching and working with autistic children. The focus is on: Structured Environment, Learning to Talk, Rhythm, Sensory Problems, Reducing Arousal, Tactile Stimulation, Fixations, Visual Thinking. The conclusion of the article reads "I cannot over emphasize the important role that good teachers and therapists play in enabling autistics to lead a fuller life. A good autism program needs dedicated people and should use a variety of treatment methods in combination with an intense structured environment".
Abstract:
Photosynthesis is a process in which electromagnetic radiation is converted into chemical energy. Photosystems capture photons with chromophores and transfer their energy to reaction centers using chromophores as a medium. In the reaction center, the excitation energy is used to perform chemical reactions. Knowledge of chromophore site energies is crucial to understanding excitation energy transfer pathways in photosystems, and the ability to compute the site energies in a fast and accurate manner is mandatory for investigating how protein dynamics affect the site energies and ultimately the energy pathways over time. In this work we developed two software frameworks designed to optimize the calculation of chromophore site energies within a protein environment. The first performs quantum mechanical energy optimizations on molecules; the second computes site energies of chromophores in a fast and accurate manner using the polarizable embedding method. Together, the two frameworks allow for the fast and accurate calculation of chromophore site energies within proteins, ultimately allowing the effect of protein dynamics on energy pathways to be studied. We use these frameworks to compute the site energies of the eight chromophores in the reaction center of photosystem II (PSII) using a 1.9 Å resolution X-ray structure of photosystem II. We compare our results to conflicting experimental data obtained from both isolated intact PSII core preparations and the minimal reaction center preparation of PSII, and find our work more supportive of the former.
Abstract:
It is well established that benzene, toluene, ethylbenzene and the xylene isomers, volatile organic compounds (VOCs) commonly referred to as BTEX, produce harmful effects on human health and on plants depending on the duration and levels of exposure. Benzene in particular is classified as carcinogenic, and exposure to benzene concentrations above 64 g/m³ can be fatal within 5–10 minutes. Consequently, real-time measurement of BTEX in ambient air is essential for rapidly detecting hazards associated with their emission into the air and for estimating the potential risks to living beings and to the environment. In this thesis, a method for the real-time analysis of BTEX in ambient air was developed and validated. The method is based on direct air sampling coupled with tandem mass spectrometry using an atmospheric pressure chemical ionization source (direct APCI-MS/MS). Analytical validation demonstrated the sensitivity (method detection limit MDL 1–2 μg/m³), precision (coefficient of variation CV < 10%), accuracy (> 95%) and selectivity of the method. Ambient air samples from an industrial waste landfill site and from various automotive repair garages were analyzed with the developed method. Comparison with results obtained by on-line gas chromatography coupled with a flame ionization detector (GC-FID) yielded similar results. The method's capability for rapid assessment of the potential risks associated with BTEX exposure was demonstrated through a field study with a health risk analysis for workers in three automotive repair garages, and through experiments under simulated atmospheres. The concentrations measured in the ambient air of the garages were 8.9–25 µg/m³ for benzene, 119–1156 µg/m³ for toluene, 9–70 µg/m³ for ethylbenzene and 45–347 µg/m³ for the xylenes. A total environmental daily dose of between 1.46 × 10⁻³ and 2.52 × 10⁻³ mg/kg/day was determined for benzene. The cancer risk associated with total environmental benzene exposure estimated for the workers studied ranged between 1.1 × 10⁻⁵ and 1.8 × 10⁻⁵. A new APCI-MS/MS method was also developed and validated for the direct analysis of octamethylcyclotetrasiloxane (D4) and decamethylcyclopentasiloxane (D5) in air and biogas. D4 and D5 are volatile cyclic siloxanes widely used as solvents in industrial processes and consumer products in place of tropospheric-ozone-precursor VOCs such as BTEX. Their ubiquitous presence in ambient air samples, due to massive use, creates a need for toxicity studies. Such studies require qualitative and quantitative trace analysis of these compounds. Moreover, the presence of traces of these substances in a biogas hinders its use as a renewable energy source by causing costly damage to equipment. Analysis of siloxanes in a biogas is therefore essential to determine whether the biogas requires purification before being used for energy production. The method developed in this study has good sensitivity (MDL 4–6 μg/m³), good precision (CV < 10%), good accuracy (> 93%) and high selectivity.
It was also demonstrated that, using this method with hexamethyl-d18-disiloxane as an internal standard, detection and quantification of D4 and D5 in real biogas samples can be accomplished with better sensitivity (MDL ~ 2 μg/m³), high precision (CV < 5%) and high accuracy (> 97%). A variety of biogas samples collected at the sanitary landfill of the Complexe Environnemental de Saint-Michel in Montreal were successfully analyzed with this new method. The measured concentrations were 131–1275 µg/m³ for D4 and 250–6226 µg/m³ for D5. These results represent the first data reported in the literature on the concentrations of the siloxanes D4 and D5 in landfill biogas as a function of waste age.
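The benzene risk figures above follow the standard linear dose-response bookkeeping: lifetime excess risk ≈ chronic daily dose × cancer slope factor. A minimal sketch of that arithmetic, with the slope factor back-calculated from the reported numbers rather than taken from the thesis:

```python
# Linear cancer-risk estimate: risk = daily dose * cancer slope factor (CSF).
# The CSF below is back-calculated from the abstract's own numbers and is
# illustrative only; the thesis would use a regulatory toxicity value.
def cancer_risk(daily_dose: float, slope_factor: float) -> float:
    """Lifetime excess cancer risk; dose in mg/kg/day, CSF in (mg/kg/day)^-1."""
    return daily_dose * slope_factor

CSF = 7.3e-3  # (mg/kg/day)^-1, inferred from the reported dose/risk pairs

for dose in (1.46e-3, 2.52e-3):  # benzene daily doses from the abstract
    print(f"dose {dose:.2e} mg/kg/day -> risk {cancer_risk(dose, CSF):.1e}")
# prints risks of about 1.1e-5 and 1.8e-5, matching the reported range
```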
Abstract:
Industrialized countries such as Canada must cope with the aging of their populations. In particular, the majority of elderly people, who live at home and often alone, face risky situations such as falls. In this context, video surveillance is an innovative solution that can allow them to live normally in a secure environment. The idea is to place a network of cameras in the person's apartment to detect a fall automatically. In case of a problem, a message could be sent, depending on the urgency, to emergency services or to the family via a secure Internet connection. For a low-cost system, we limited the number of cameras to one per room, which led us to explore monocular fall-detection methods. We first explored the problem from a 2D (image) point of view, focusing on the large changes in the person's silhouette during a fall. The normal-activity data of an elderly person were modeled by a Gaussian mixture, allowing us to detect any abnormal event. Our method was validated on a video library of simulated falls and realistic normal activities. However, 3D information, such as the person's location relative to the environment, can be very valuable for a behavior analysis system. Although a multi-camera system is preferable for obtaining 3D information, we proved that with a single calibrated camera it is possible to localize a person in the environment through the head. Concretely, the person's head, modeled as an ellipsoid, is tracked through the image sequence with a particle filter. The accuracy of the 3D head localization was evaluated on a library of video sequences containing ground-truth 3D locations obtained with a motion capture system. An example application using the 3D head trajectory is proposed for fall detection. In conclusion, a video surveillance system for fall detection with a single camera per room is entirely feasible. To minimize the risk of false alarms, a hybrid method combining 2D and 3D information could be considered.
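The 2D stage described above is one-class anomaly detection: fit a Gaussian mixture to features of normal activity, then flag frames whose likelihood falls below a threshold. A minimal sketch with scikit-learn, using invented silhouette features (aspect ratio, orientation); the thesis's actual descriptors and threshold are not specified here:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical silhouette features per frame: bounding-box aspect ratio
# and orientation in degrees; the thesis's real descriptors may differ.
rng = np.random.default_rng(0)
normal_activity = rng.normal(loc=[0.4, 90.0], scale=[0.05, 5.0], size=(500, 2))

# Model normal activity with a Gaussian mixture.
gmm = GaussianMixture(n_components=3, random_state=0).fit(normal_activity)

# Threshold on the log-likelihood of the training data (1st percentile).
threshold = np.percentile(gmm.score_samples(normal_activity), 1)

def is_abnormal(frame_features: np.ndarray) -> bool:
    """Flag a frame as abnormal (possible fall) if its likelihood is low."""
    return gmm.score_samples(frame_features.reshape(1, -1))[0] < threshold

print(is_abnormal(np.array([0.41, 91.0])))  # typical posture -> False
print(is_abnormal(np.array([2.00, 10.0])))  # lying silhouette -> likely True
```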
Abstract:
One of the major concerns of scoliotic patients undergoing spinal correction surgery is the trunk's external appearance after the surgery. This paper presents a novel incremental approach for simulating postoperative trunk shape in scoliosis surgery. Preoperative and postoperative trunk shapes data were obtained using three-dimensional medical imaging techniques for seven patients with adolescent idiopathic scoliosis. Results of qualitative and quantitative evaluations, based on the comparison of the simulated and actual postoperative trunk surfaces, showed an adequate accuracy of the method. Our approach provides a candidate simulation tool to be used in a clinical environment for the surgery planning process.
Abstract:
Persistence of external trunk asymmetry after scoliosis surgical treatment is frequent and difficult to predict by clinicians. This is a significant problem considering that correction of the apparent deformity is a major factor of satisfaction for the patients. A simulation of the correction on the external appearance would allow the clinician to illustrate to the patient the potential result of the surgery and would help in deciding on a surgical strategy that could most improve his/her appearance. We describe a method to predict the scoliotic trunk shape after a spine surgical intervention. The capability of our method was evaluated using real data of scoliotic patients. Results of the qualitative evaluation were very promising and a quantitative evaluation based on the comparison of the simulated and the actual postoperative trunk surface showed an adequate accuracy for clinical assessment. The required short simulation time also makes our approach an eligible candidate for a clinical environment demanding interactive simulations.
Abstract:
The shift from print to digital information has had a high impact on all components of the academic library system in India, especially the users, services and staff. Though information is considered an important resource, the use of ICT tools to collect and disseminate information has proceeded at a slow pace in the majority of the university libraries. This may be due to various factors such as insufficient funds, inadequate staff trained in handling computers and software packages, administrative concerns, etc. In Kerala, automation has been initiated in almost all university libraries using library automation software and is at different stages of completion. Few studies have been conducted on the effects of information communication technologies on the professional activities of library professionals in the universities of Kerala. It is important to evaluate whether progress in ICT has had any impact on the library profession in these highest educational institutions. The aim of the study is to assess whether developments in information communication technologies have any influence on library professionals' professional development and on the need for further education and training in the profession, and to evaluate their skills in handling developments in ICT. The total population of the study is 252, comprising the permanently employed professional library staff in the central libraries and departmental libraries on the main campuses of the universities under study. This is almost a census study of the defined population. The questionnaire method was adopted for data collection, supplemented by interviews of librarians to gather additional information. Library professionals have a positive approach towards ICT applications and services in libraries, but the majority do not have opportunities to develop their skills and competencies in their work environment. To develop competitive personnel in a technologically advanced world, high priority must be given by university administrators and library associations to developing competence in ICT applications, library management and soft skills among library professionals. Library science schools and teaching departments across the country have to take significant steps to revise the library science curriculum and incorporate significant changes to meet the demands and challenges of the library science profession.
Abstract:
One major component of power system operation is generation scheduling. The objective of the work is to develop efficient control strategies for power scheduling problems through Reinforcement Learning approaches. The three important active power scheduling problems are Unit Commitment, Economic Dispatch and Automatic Generation Control. Numerical solution methods proposed for power scheduling are insufficient for handling large and complex systems. Soft computing methods like Simulated Annealing, Evolutionary Programming etc. are efficient in handling complex cost functions, but are limited in handling the stochastic data present in a practical system; also, the learning steps have to be repeated for each load demand, which increases the computation time. Reinforcement Learning (RL) is a method of learning through interactions with an environment. The main advantage of this approach is that it does not require a precise mathematical formulation: it can learn either by interacting with the environment or by interacting with a simulation model. Several optimization and control problems have been solved through the Reinforcement Learning approach, but applications of Reinforcement Learning in the field of power systems have been few. The objective is to introduce and extend Reinforcement Learning approaches to the active power scheduling problems in an implementable manner. The main objectives can be enumerated as: (i) evolve Reinforcement Learning based solutions to the Unit Commitment problem; (ii) find suitable solution strategies through the Reinforcement Learning approach for Economic Dispatch; (iii) extend the Reinforcement Learning solution to Automatic Generation Control with a different perspective; (iv) check the suitability of the scheduling solutions on one of the existing power systems. The first part of the thesis is concerned with the Reinforcement Learning approach to the Unit Commitment problem. The Unit Commitment problem is formulated as a multi-stage decision process, and a Q-learning solution is developed to obtain the optimum commitment schedule. A method of state aggregation is used to formulate an efficient solution considering the minimum up time / down time constraints. The performance of the algorithms is evaluated for different systems and compared with other stochastic methods like Genetic Algorithms. The second stage of the work is concerned with solving the Economic Dispatch problem. A simple and straightforward decision-making strategy is first proposed in the Learning Automata algorithm. Then, to solve the scheduling task for systems with a large number of generating units, the problem is formulated as a multi-stage decision-making task. The solution obtained is extended to incorporate the transmission losses in the system. To make the Reinforcement Learning solution more efficient and to handle continuous state spaces, a function approximation strategy is proposed. The performance of the developed algorithms is tested on several standard test cases, and the proposed method is compared with other recent methods like the Partition Approach Algorithm, Simulated Annealing etc. As the final step of implementing the active power control loops in a power system, Automatic Generation Control is also taken into consideration. Reinforcement Learning has already been applied to the Automatic Generation Control loop; the RL solution is extended to adopt the approach of a common frequency for all the interconnected areas, closer to practical systems.
The performance of the RL controller is also compared with that of the conventional integral controller. In order to prove the suitability of the proposed methods for practical systems, the second plant of Neyveli Thermal Power Station (NTPS II) is taken for a case study. The performance of the Reinforcement Learning solution is found to be better than that of the other existing methods, a promising step towards RL based control schemes for the practical power industry. Reinforcement Learning is applied to solve the scheduling problems in the power industry and is found to give satisfactory performance. The proposed solution provides scope for greater profit, as the economic schedule is obtained instantaneously. Since the Reinforcement Learning method can take in the stochastic cost data obtained from time to time from a plant, it gives an implementable method. As a further step, with suitable methods to interface with on-line data, economic scheduling can be achieved instantaneously in a generation control center. Power scheduling of systems with different sources such as hydro, thermal etc. can also be investigated and Reinforcement Learning solutions achieved.
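As a concrete illustration of the tabular Q-learning machinery used for multi-stage scheduling (the thesis's own state encoding, constraints and cost data are not reproduced), a minimal sketch on a toy two-unit, three-hour commitment problem with invented numbers:

```python
import random

# Tabular Q-learning on a toy 3-hour, two-unit commitment problem.
# All numbers (costs, demand, capacities) are invented for illustration.
ACTIONS = [(1, 0), (0, 1), (1, 1)]   # which units are committed
DEMAND = [80.0, 150.0, 100.0]        # MW for hours 0..2
CAP = (100.0, 120.0)                 # unit capacities, MW
COST = (30.0, 20.0)                  # marginal costs, $/MWh

def stage_cost(a, d):
    cap = CAP[0] * a[0] + CAP[1] * a[1]
    if cap < d:
        return 1e5                   # penalty for unserved demand
    # dispatch each committed unit in proportion to its capacity
    return sum(COST[i] * d * (CAP[i] * a[i] / cap) for i in range(2))

Q = [[0.0] * len(ACTIONS) for _ in DEMAND]   # Q[hour][action] = cost-to-go
alpha, eps = 0.1, 0.2
for episode in range(5000):
    for t, d in enumerate(DEMAND):           # each hour is one decision stage
        if random.random() < eps:            # epsilon-greedy exploration
            k = random.randrange(len(ACTIONS))
        else:
            k = min(range(len(ACTIONS)), key=lambda j: Q[t][j])
        future = min(Q[t + 1]) if t + 1 < len(DEMAND) else 0.0
        target = stage_cost(ACTIONS[k], d) + future
        Q[t][k] += alpha * (target - Q[t][k])  # move Q toward the cost-to-go

schedule = [ACTIONS[min(range(len(ACTIONS)), key=lambda j: Q[t][j])]
            for t in range(len(DEMAND))]
print(schedule)  # e.g. [(0, 1), (1, 1), (0, 1)]: cheap unit alone when it suffices
```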
Abstract:
Information communication technology (ICT) has invariably brought about fundamental changes in the way in which libraries gather, preserve and disseminate information. The study was carried out with the aim of estimating and comparing the information seeking behaviour (ISB) of the academics of two prominent universities of Kerala in the context of the advancements achieved through ICT. The study was motivated by the fast changing scenario of libraries with the proliferation of many high tech products and services. The main purpose of the study was to identify the chief sources of information of the academics, and also to examine the academics' preferences as to the form and format of information sources. The study also tries to estimate the adequacy of the resources and services currently provided by the libraries. The questionnaire was the central instrument for data collection. An almost-census method was adopted for data collection, engaging various methods and tools for eliciting data. The total population of the study was 957, out of which the questionnaire was distributed to 859 academics. 646 academics responded to the survey, of which 564 were sound responses. Data was coded and analysed using the Statistical Package for Social Sciences (SPSS) software and also with the help of the Microsoft Excel package. Various statistical techniques were engaged to analyse the data. A paradigm shift is evident in the fact that academics push themselves towards information on the internet, i.e. they prefer electronic sources to traditional sources, and this shift is coupled with e-seeking of information. The study reveals that the ISB of the academics is influenced primarily by personal factors, and comparative analysis shows that the ISB of the academics is similar in both universities. The productivity of the academics was tested to uncover any relation to their ISB, and it was found that the productivity of the academics is extensively related to their ISB. The study also reveals that the users of the library are satisfied with the services provided but not with the sources, and in conjunction the study recommends ways and means to improve the existing library system.
Abstract:
This thesis presents the results of an investigation conducted for the development of a new type of feed horn antenna called the "Simulated Scalar Feed". A schematic presentation of the work is given below. A review of important past work in the field of conventional/multimode electromagnetic horn antennas is presented in the first part of the second chapter. The work carried out on corrugated horns and surfaces is included in the second part of the review. In the third part, work on dielectric and dielectric-loaded metal horns is reviewed. In all parts of the review, special emphasis is given to theoretical design considerations. The methodology adopted for the experimental investigations is presented in the third chapter. The instrumentation utilized and the details of fabrication of the new simulated scalar feed are described. The methods of measurement of the radiation characteristics of the antenna are also explained in this chapter. In the fourth chapter, the outcomes of the experimental investigations carried out on horn antennas fabricated with different physical dimensions and different parameters for the E-plane boundary walls are highlighted. The theoretical explanation used to account for the experimental results is given in the fifth chapter of the thesis. A comparison between the experimental and theoretical results is also presented in this chapter. In chapter six, the conclusions drawn from the experimental as well as the theoretical investigations are discussed. The advantages and features of the newly developed simulated scalar feed are examined in this chapter. The scope for further investigations in this field is also discussed at the end of this chapter.
Abstract:
In the past, natural resources were plentiful and people were scarce. But the situation is rapidly reversing. Our challenge is to find a way to balance human consumption and nature's limited productivity in order to ensure that our communities are sustainable locally, regionally and globally. Kochi, the commercial capital of Kerala, South India, and the second most important city after Mumbai on the western coast, is a land with a wide variety of residential environments. Due to rapid population growth, changing lifestyles, food habits and living standards, institutional weaknesses, improper choice of technology and public apathy, the present pattern of the city can be classified as haphazard growth, with problems typical of unplanned urban development. Ecological Footprint Analysis (EFA) is a physical accounting method, developed by William Rees and M. Wackernagel, focusing on land appropriation and using land as its "currency". It provides a means for measuring and communicating human-induced environmental impacts upon the planet. The aim of applying EFA to Kochi city is to quantify the consumption and waste generation of a population and to compare it with the existing biocapacity. By quantifying the ecological footprint we can formulate strategies to reduce the footprint and thereby achieve sustainable living. In this paper, an attempt is made to explore the tool of Ecological Footprint Analysis and to calculate and analyse the ecological footprint of the residential areas of Kochi city. The paper also discusses and analyses the waste footprint of the city. An attempt is also made to suggest strategies to reduce the footprint, thereby making the city sustainable.
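The EFA accounting reduces to summing, over consumption categories, the land needed to sustain each category, converted to common "global hectares" with equivalence factors. A minimal sketch of this bookkeeping with placeholder numbers (not data from the Kochi study):

```python
# Ecological footprint bookkeeping (after Rees & Wackernagel):
#   footprint = sum over categories of (consumption / yield) * equivalence factor,
# expressed in global hectares (gha). All numbers below are placeholders.
CATEGORIES = {
    # category: (annual consumption, land yield, equivalence factor)
    "food (t/yr)":       (0.8, 2.5, 2.5),   # cropland
    "timber (m^3/yr)":   (0.3, 1.8, 1.3),   # forest
    "energy (t CO2/yr)": (1.2, 0.9, 1.3),   # carbon-uptake land
}

def footprint_per_capita(categories: dict) -> float:
    """Ecological footprint in global hectares per person."""
    return sum(cons / yld * eqf for cons, yld, eqf in categories.values())

ef = footprint_per_capita(CATEGORIES)
print(f"footprint: {ef:.2f} gha/person")

# Compare with available biocapacity to judge sustainability:
biocapacity = 0.5  # gha/person, placeholder
print("overshoot" if ef > biocapacity else "within biocapacity")
```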
Abstract:
Pollution of water with pesticides has become a threat to man, materials and the environment. Pesticides released to the environment reach water bodies through run-off. Industrial wastewater from pesticide manufacturing industries contains pesticides at higher concentrations and is hence a major source of water pollution. Pesticides create many health and environmental hazards, including diseases like cancer, liver and kidney disorders, reproductive disorders, fetal death, birth defects etc. Conventional wastewater treatment plants based on biological treatment are not efficient enough to remove these compounds to the desired level. Most of the pesticides are phytotoxic, i.e. they kill the microorganisms responsible for degradation, and are recalcitrant in nature. Advanced oxidation processes (AOPs) are a class of oxidation techniques in which hydroxyl radicals are employed for the oxidation of pollutants. AOPs have the ability to totally mineralise organic pollutants to CO2 and water. Different methods are employed for the generation of hydroxyl radicals in AOP systems. Acetamiprid is a neonicotinoid insecticide widely used to control sucking insects on crops such as leafy vegetables, citrus fruits, pome fruits, grapes, cotton and ornamental flowers. It is now recommended as a substitute for organophosphorus pesticides. Since its use is increasing, its presence is increasingly found in the environment. It has high water solubility, is not easily biodegradable, and has the potential to pollute surface and ground waters. Here, the use of AOPs for the removal of acetamiprid from wastewater has been investigated. Five methods were selected for the study based on a literature survey and preliminary experiments: the Fenton process, UV treatment, the UV/H2O2 process, photo-Fenton, and photocatalysis using TiO2. Undoped TiO2 and TiO2 doped with Cu and Fe were prepared by the sol-gel method. Characterisation of the prepared catalysts was done by X-ray diffraction, scanning electron microscopy, differential thermal analysis and thermogravimetric analysis. The influence of the major operating parameters on the removal of acetamiprid was investigated. All the experiments were designed using the central composite design (CCD) of response surface methodology (RSM). Model equations were developed for Fenton, UV/H2O2, photo-Fenton and photocatalysis for predicting acetamiprid removal and total organic carbon (TOC) removal under different operating conditions. The quality of the models was analysed by statistical methods, and experimental validations were done to confirm it. Optimum conditions obtained by experiment were verified against those obtained using a response optimiser. The Fenton process is the simplest and oldest AOP, in which hydrogen peroxide and iron are employed for the generation of hydroxyl radicals. The influence of H2O2 and Fe2+ on acetamiprid removal and TOC removal by the Fenton process was investigated, and it was found that removal increases with increasing H2O2 and Fe2+ concentration. At an initial acetamiprid concentration of 50 mg/L, 200 mg/L H2O2 and 20 mg/L Fe2+ at pH 3 were found to be optimum for acetamiprid removal. For UV treatment, the effect of pH was studied and it was found that pH does not have much effect on the removal rate. Addition of H2O2 to the UV process increased the removal rate because of the hydroxyl radical formation due to photolysis of H2O2. An H2O2 concentration of 110 mg/L at pH 6 was found to be optimum for acetamiprid removal.
With photo-Fenton, a drastic reduction in the treatment time was observed, with a 10-fold reduction in the amount of reagents required. An H2O2 concentration of 20 mg/L and an Fe2+ concentration of 2 mg/L were found to be optimum at pH 3. With TiO2 photocatalysis, an improvement in the removal rate was noticed compared to UV treatment. The effect of Cu and Fe doping on the photocatalytic activity under UV light was studied, and it was observed that Cu doping enhanced the removal rate slightly while Fe doping decreased it. Maximum acetamiprid removal was observed for an optimum catalyst loading of 1000 mg/L and a Cu concentration of 1 wt%. It was noticed that the mineralisation efficiency of the processes is low compared to the acetamiprid removal efficiency; this may be due to stable intermediate compounds formed during degradation. Kinetic studies were conducted for all the treatment processes, and it was found that all processes follow pseudo-first-order kinetics. Kinetic constants were determined from the experimental data for all the processes, and half-lives were calculated. The rate of reaction was in the order photo-Fenton > UV/H2O2 > Fenton > TiO2 photocatalysis > UV. Operating costs were calculated for the processes, and it was found that photo-Fenton removes the acetamiprid at the lowest operating cost in the least time. A kinetic model was developed for the photo-Fenton process using the elementary reaction data and mass balance equations for the species involved in the process. The variation of acetamiprid concentration with time for different H2O2 and Fe2+ concentrations at pH 3 can be found using this model. The model was validated by comparing the simulated concentration profiles with those obtained from experiments. This study established the viability of the selected AOPs for the removal of acetamiprid from wastewater. Of the studied AOPs, photo-Fenton gives the highest removal efficiency at the lowest operating cost within the shortest time.
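Pseudo-first-order kinetics means ln(C0/C) = k·t, so the rate constant k is the slope of ln(C0/C) against time and the half-life is ln 2 / k. A minimal sketch of that fit with invented concentration readings rather than the thesis's data:

```python
import numpy as np

# Pseudo-first-order fit: ln(C0/C) = k * t, half-life t_1/2 = ln(2) / k.
# The time series below is invented for illustration, not thesis data.
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0])    # min
C = np.array([50.0, 30.3, 18.4, 11.2, 6.8])   # mg/L acetamiprid

y = np.log(C[0] / C)                          # linearised response
k, intercept = np.polyfit(t, y, 1)            # slope is the rate constant

print(f"k = {k:.3f} 1/min, half-life = {np.log(2) / k:.1f} min")
```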
Abstract:
In many situations probability models are more realistic than deterministic models. Several phenomena occurring in physics are studied as random phenomena changing with time and space. Stochastic processes originated from the needs of physicists. Let X(t) be a random variable, where t is a parameter assuming values from a set T. Then the collection of random variables {X(t), t ∈ T} is called a stochastic process. We denote the state of the process at time t by X(t), and the collection of all possible values X(t) can assume is called the state space.
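As a concrete instance of the definition, take T = {0, 1, 2, ...} and let X(t) be the position of a simple random walk; the state space is then the set of all integers. A minimal sketch:

```python
import random

# Simple random walk: a stochastic process {X(t), t in T} with
# T = {0, 1, 2, ...} and state space Z (all integers).
def random_walk(steps: int, seed: int = 0) -> list[int]:
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(steps):
        x += rng.choice((-1, 1))   # one step up or down
        path.append(x)             # path[t] is the state X(t)
    return path

print(random_walk(10))  # one realisation (sample path) of the process
```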
Abstract:
For the theoretical investigation of local phenomena (adsorption at surfaces, defects or impurities within a crystal, etc.) one can assume that the effects caused by the local disturbance are limited to the neighbouring particles. With this model, well known as the cluster approximation, an infinite system can be simulated by a much smaller segment of the surface (cluster). The size of this segment varies strongly between systems. Calculations of the convergence of the bond distance and binding energy of an adsorbed aluminium atom on an Al(100) surface showed that more than 100 atoms are necessary to obtain a sufficient description of surface properties. However, with a full quantum-mechanical approach such system sizes cannot be calculated because of the demands on computer memory and processor speed. We therefore developed an embedding procedure for the simulation of surfaces and solids, in which the whole system is partitioned into several parts that are treated differently: the internal part (cluster), located near the site of the adsorbate, is calculated completely self-consistently and is embedded into an environment, while the influence of the environment on the cluster enters as an additional, external potential in the relativistic Kohn-Sham equations. The basis of the procedure is density functional theory. This means, however, that the choice of the electronic density of the environment determines the quality of the embedding procedure. The environment density was modelled in three different ways: atomic densities; densities transferred from a large preceding calculation without embedding; and copied bulk densities. The embedding procedure was tested on the atomic adsorption of Al on Al(100) and Cu on Cu(100). The result was that, if the environment is chosen appropriately for the Al system, only 9 embedded atoms are needed to reproduce the results of exact slab calculations. For the Cu system, calculations without the embedding procedure were first performed, with the result that 60 atoms already suffice as a surface cluster; using the embedding procedure, the same values were obtained with only 25 atoms. This is a substantial improvement if one considers that the calculation time increases cubically with the number of atoms. With the embedding method, infinite systems can be treated by molecular methods. Additionally, the program code was extended with the capability to perform molecular-dynamics simulations. It is now possible, apart from the previous fixed-core calculations, to also investigate the structures of small clusters and surfaces. A first application was the adsorption of Cu on Cu(100): we calculated the relaxed positions of the atoms located close to the adsorption site and afterwards performed the full quantum-mechanical calculation of this system, repeating the procedure for different distances from the surface. Thus a realistic adsorption process could be examined for the first time. It should be remarked that for the Cu reference calculations (without embedding) we began to parallelize the entire program code; only because of this were the investigations of the 100-atom Cu surface clusters possible. Given the good efficiency of both the parallelization and the developed embedding procedure, we will be able to apply the combination in future.
With more work in these areas, it will become possible to bring in the results of fully relativistic molecular calculations, which will be very interesting especially for the regime of heavy systems.
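Schematically, the embedding enters the Kohn-Sham equations as one additional external potential generated by the environment density; the sketch below is the generic non-relativistic form of that idea, not the thesis's exact relativistic formulation:

```latex
% Kohn-Sham equation for the cluster orbitals with an embedding potential
% v_env generated by the frozen environment density (schematic form).
\begin{equation}
  \left[ -\tfrac{1}{2}\nabla^2 + v_{\text{eff}}[\rho_{\text{cl}}](\mathbf{r})
         + v_{\text{env}}[\rho_{\text{env}}](\mathbf{r}) \right] \varphi_i(\mathbf{r})
  = \varepsilon_i \, \varphi_i(\mathbf{r})
\end{equation}
```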
Abstract:
This work focuses on the analysis of the influence of environment on the relative biological effectiveness (RBE) of carbon ions at the molecular level. Due to the high relevance of RBE for medical applications, such as tumor therapy, and for radiation protection in space, DNA damage has been investigated in order to understand the biological efficiency of heavy ion radiation. The contribution of this study to radiobiology research consists in the analysis of plasmid DNA damage induced by carbon ion radiation in biochemical buffer environments, as well as in the calculation of the RBE of carbon ions on the DNA level by means of scanning force microscopy (SFM). In order to study the DNA damage, besides the common electrophoresis method, a new approach has been developed using SFM. The latter method allows direct visualisation and measurement of individual DNA fragments with an accuracy of several nanometres. In addition, a comparison of the results obtained by the SFM and agarose gel electrophoresis methods has been performed in the present study. Sparsely ionising radiation, such as X-rays, and densely ionising radiation, such as carbon ions, have been used to irradiate plasmid DNA in tris(hydroxymethyl)aminomethane (Tris buffer) and 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES buffer) environments. These buffer environments exhibit different scavenging capacities for the hydroxyl radical (HO•), which is produced by the ionisation of water and plays the major role in indirect DNA damage processes. Fragment distributions have been measured by SFM over a large length range and, as expected, a significantly higher degree of DNA damage was observed with increasing dose. A higher amount of double-strand breaks (DSBs) was also observed after irradiation with carbon ions compared to X-ray irradiation. The results obtained from the SFM measurements show that both types of radiation induce multiple fragmentation of the plasmid DNA in the dose range from D = 250 Gy to D = 1500 Gy. Using Tris environments at two different concentrations, a decrease of the relative biological effectiveness with rising Tris concentration was observed, demonstrating the radioprotective behavior of the Tris buffer solution. In contrast, a lower scavenging capacity for all other free radicals and ions produced by the ionisation of water was registered in the case of the HEPES buffer compared to the Tris solution. This is reflected in the higher RBE values deduced from SFM and gel electrophoresis measurements after irradiation of the plasmid DNA in the 20 mM HEPES environment compared to the 92 mM Tris solution. These results show that HEPES and Tris environments play a major role in preventing the indirect DNA damage induced by ionising radiation and in the relative biological effectiveness of heavy ion radiation. In general, the RBE calculated from the SFM measurements shows higher values compared to the gel electrophoresis data, for plasmids irradiated in all environments. Using the large set of data obtained from the SFM measurements, it was possible to calculate the survival rate over a larger range, from 88% to 98%, while for the gel electrophoresis measurements the survival rates could be calculated only for values between 96% and 99%. While the gel electrophoresis measurements provide information only about the percentage of plasmids that suffered a single DSB, SFM can count the small plasmid fragments produced by multiple DSBs induced in a single plasmid.
Consequently, SFM generates more detailed information regarding the number of induced DSBs compared to gel electrophoresis, and therefore the RBE can be calculated with more accuracy. Thus, SFM has proven to be a more precise method for characterizing at the molecular level the DNA damage induced by ionizing radiation.
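For reference, RBE is conventionally defined as the ratio of the dose of a reference radiation (here X-rays) to the dose of the test radiation (carbon ions) that produces the same biological effect; a standard textbook form, not a formula quoted from the abstract:

```latex
% RBE: dose of the reference radiation (X-rays) divided by the dose of the
% test radiation (carbon ions) producing the same effect level.
\begin{equation}
  \mathrm{RBE} = \left. \frac{D_{\text{X-ray}}}{D_{\text{carbon}}} \right|_{\text{iso-effect}}
\end{equation}
```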