

Abstract:

High-permittivity ("high-k") dielectric materials are used in the transistor gate stack in integrated circuits. As the thickness of the silicon oxide dielectric reduces below 2 nm with continued downscaling, the leakage current due to tunnelling increases, leading to high power consumption and reduced device reliability. Hence, research concentrates on finding materials with a high dielectric constant that can be easily integrated into a manufacturing process and show the desired properties as a thin film. Atomic layer deposition (ALD) is used in practice to deposit high-k materials such as HfO2, ZrO2, and Al2O3 as gate oxides. ALD is a technique for producing conformal layers of material with nanometer-scale thickness, used commercially in non-planar electronics and increasingly in other areas of science and technology. ALD is a type of chemical vapor deposition that depends on self-limiting surface chemistry. In ALD, gaseous precursors are allowed individually into the reactor chamber in alternating pulses. Between each pulse, inert gas is admitted to prevent gas-phase reactions. This thesis provides a profound understanding of the ALD of oxides such as HfO2, showing how the chemistry affects the properties of the deposited film. Using multi-scale modelling of ALD, the kinetics of reactions at the growing surface are connected to experimental data. In this thesis, we use the density functional theory (DFT) method to simulate more realistic models for the growth of HfO2 from Hf(N(CH3)2)4/H2O and HfCl4/H2O and for Al2O3 from Al(CH3)3/H2O. Three major breakthroughs are discovered. First, a new reaction pathway, 'multiple proton diffusion', is proposed for the growth of HfO2 from Hf(N(CH3)2)4/H2O [1]. As a second major breakthrough, a 'cooperative' action between adsorbed precursors is shown to play an important role in ALD. By this we mean that previously-inert fragments can become reactive once sufficient molecules adsorb in their neighbourhood during either precursor pulse. As a third breakthrough, the ALD of HfO2 from Hf(N(CH3)2)4 and H2O is implemented for the first time in 3D on-lattice kinetic Monte Carlo (KMC) [2]. In this integrated approach (DFT+KMC), retaining the accuracy of the atomistic model in the higher-scale model leads to remarkable breakthroughs in our understanding. The resulting atomistic model allows direct comparison with experimental techniques such as X-ray photoelectron spectroscopy and quartz crystal microbalance.
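The KMC half of the DFT+KMC approach rests on rejection-free, rate-based event selection. Below is a minimal sketch of a single KMC step, assuming a hypothetical three-event rate table; the thesis derives its actual rate catalogue from DFT activation energies, so the event names and rates here are placeholders only.

```python
import math
import random

def kmc_step(events):
    """events: list of (name, rate) pairs. Returns the chosen event and the
    elapsed time, following the standard rejection-free KMC recipe."""
    total_rate = sum(rate for _, rate in events)
    # Pick an event with probability proportional to its rate.
    r = random.random() * total_rate
    cumulative = 0.0
    for name, rate in events:
        cumulative += rate
        if r < cumulative:
            break
    # Advance the simulation clock by an exponentially distributed waiting time.
    dt = -math.log(random.random()) / total_rate
    return name, dt

# Hypothetical rates (s^-1), for illustration only.
events = [("H2O_adsorption", 1e3), ("proton_diffusion", 1e5), ("ligand_elimination", 1e2)]
print(kmc_step(events))
```

On a 3D lattice, each site would carry its own event list, and the chosen event updates the local surface chemistry before rates are recomputed.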

Abstract:

It has been suggested that the less than optimal levels of students' immersion language "persist in part because immersion teachers lack systematic approaches for integrating language into their content instruction" (Tedick, Christian and Fortune, 2011, p.7). I argue that our current lack of knowledge regarding what immersion teachers think, know and believe, and what immersion teachers' actual 'lived' experiences are in relation to form-focused instruction (FFI), prevents us from fully understanding the key issues at the core of experiential immersion pedagogy and form-focused integration. FFI refers to "any planned or incidental instructional activity that is intended to induce language learners to pay attention to linguistic form" (Ellis, 2001b, p.1). The central aim of this research study is to critically examine the perspectives and practices of Irish-medium immersion (IMI) teachers in relation to FFI. The study 'taps' into the lived experiences of three IMI teachers in three different IMI school contexts and explores FFI from a classroom-based, teacher-informed perspective. Philosophical underpinnings of the interpretive paradigm and critical hermeneutical principles inform and guide the study. A multi-case study approach was adopted and data were gathered through classroom observation, video-stimulated recall and semi-structured interviews. Findings revealed that the journey of 'becoming' an IMI teacher is shaped by a vast array of intricate variables. IMI teacher identity, implicit theories, stated beliefs, educational biographies and experiences, IMI school cultures and contexts, as well as teacher knowledge and competence, impacted on IMI teachers' FFI perspectives and practices. An IMI content teacher identity reflected the teachers' priorities as shaped by pedagogical challenges and their educational backgrounds. While research participants had clearly defined instructional beliefs and goals, their roadmap of how to actually accomplish these goals was far from clear. IMI teachers described the multitude of choices and pedagogical dilemmas they faced in integrating FFI into experiential pedagogy. Significant gaps in IMI teachers' declarative knowledge about, and competence in, the immersion language were also reported. This research study increases our understanding of the complexity of the processes underlying and shaping FFI pedagogy in IMI education. Innovative FFI opportunities for professional development across the continuum of teacher education are outlined, a comprehensive evaluation of IMI is called for, and areas for further research are delineated.

Abstract:

Colloidal photonic crystals have potential light-manipulation applications, including the fabrication of efficient lasers and LEDs, improved optical sensors and interconnects, and improved photovoltaic efficiencies. One road-block of colloidal self-assembly is the crystals' inherent defects; however, they can be manufactured cost-effectively into large-area films compared to micro-fabrication methods. This thesis investigates production of 'large-area' colloidal photonic crystals by sonication, under-oil co-crystallisation and controlled evaporation, with a view to reducing cracking and other defects. A simple monotonic Stöber particle synthesis method was developed, producing silica particles in the range of 80 to 600 nm in a single step. An analytical method that assesses the quality of surface particle ordering in a semi-quantitative manner was also developed: using fast Fourier transform (FFT) spot intensities, a grey-scale symmetry area method quantifies the FFT profiles. Adding ultrasonic vibrations during film formation demonstrated that large areas could be assembled rapidly; however, film ordering suffered as a result. Under-oil co-crystallisation results in the particles being bound together during film formation. While having potential to form large areas, it requires further refinement to be established as a production technique. Achieving high-quality photonic crystals bonded with low concentrations (<5%) of polymeric adhesives while maintaining refractive index contrast proved difficult and degraded the film's uniformity. A controlled evaporation method, using a mixed-solvent suspension, represents the most promising method to produce high-quality films over large areas, 75 mm x 25 mm. In this mixed-solvent approach, the film is kept in the wet state longer, thus reducing cracks developing during the drying stage. These films are crack-free up to a critical thickness and show very large domains, which are visible in low-magnification SEM images as Moiré fringe patterns. Higher magnification reveals that the separations between alternate fringe patterns are domain boundaries between individual crystalline growth fronts.
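As a rough illustration of FFT-based ordering assessment (a simplified stand-in for the grey-scale symmetry area method developed in the thesis, not a reproduction of it), one can score a surface image by how strongly spectral power concentrates in discrete first-order spots rather than in a diffuse ring; the function name and spot count below are assumptions for the sketch.

```python
import numpy as np

def fft_order_metric(image, n_spots=6):
    """Crude order score for a colloidal monolayer image: fraction of the
    power in the strongest ring that sits in its brightest (~Bragg spot)
    pixels. Hexagonally ordered surfaces score higher than disordered ones."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean()))) ** 2
    h, w = spectrum.shape
    y, x = np.indices(spectrum.shape)
    r = np.hypot(x - w // 2, y - h // 2).astype(int)
    radial = np.bincount(r.ravel(), weights=spectrum.ravel())
    ring_r = int(np.argmax(radial[5:]) + 5)       # strongest ring, skipping DC
    ring = spectrum[(r >= ring_r - 2) & (r <= ring_r + 2)]
    return np.sort(ring)[-n_spots:].sum() / ring.sum()

# A random (disordered) image gives a low score; a lattice image scores higher.
rng = np.random.default_rng(0)
print(fft_order_metric(rng.random((256, 256))))
```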

Abstract:

The landscape of late medieval Ireland, like most places in Europe, was characterized by intensified agricultural exploitation, the growth and founding of towns and cities, and the construction of large stone edifices such as castles and monasteries. None of these could have taken place without iron. Axes were needed for clearing woodland, ploughs for turning the soil, saws for wooden buildings, and hammers and chisels for the stone ones, none of which could realistically have been made from any other material. The many battles, waged with increasingly sophisticated weaponry, needed a steady supply of iron and steel. During the same period, the European iron industry itself underwent its most fundamental transformation since its inception: at the beginning of the period it was almost exclusively based on small furnaces producing solid blooms, and by the turn of the seventeenth century it was largely based on liquid-iron production in blast furnaces the size of a house. One of the great advantages of studying the archaeology of ironworking is that its main residue, slag, is often produced in copious amounts during both smelting and smithing, is virtually indestructible, and has very little secondary use. This means that most sites where ironworking was carried out are readily recognizable as such by the occurrence of this slag. Moreover, visual examination can distinguish between various types of slag, which are often characteristic of the activity from which they derive. The ubiquity of ironworking in the period under study further means that we have large amounts of residues available for study, allowing us to distinguish patterns both inside assemblages and between sites. Disadvantages of the nature of the remains related to ironworking include the poor preservation of the installations used, especially the furnaces, which were often built out of clay and located above ground. Added to this are the many parameters contributing to the formation of the above-mentioned slag, making its composition difficult to connect to a particular technology or activity. Ironworking technology in late medieval Ireland has thus far not been studied in detail. Much of the archaeological literature on the subject is still tainted by the erroneous attribution of the main type of slag, bun-shaped cakes, to smelting activities. The large-scale infrastructure works of the first decade of the twenty-first century have led to an exponential increase in the number of sites available for study. At the same time, much of the material related to metalworking recovered during these boom years was subjected to specialist analysis. This has led to a near-complete overhaul of our knowledge of early ironworking in Ireland. Although many of these new insights are quickly seeping into the general literature, no concise overviews of the current understanding of early Irish ironworking technology have been published to date. The above then presented a unique opportunity to apply these new insights to the extensive body of archaeological data we now possess. The resulting archaeological information was supplemented with, and compared to, that contained in the historical sources relating to Ireland for the same period. This added insights into aspects of the industry often difficult to grasp solely through the archaeological sources, such as the people involved and the trade in iron.
Additionally, overviews of several other topics, such as a new distribution map of Irish iron ores and a first analysis of the information on iron smelting and smithing in late medieval western Europe, were compiled to allow this new knowledge of late medieval Irish ironworking to be put into a wider context. Contrary to current views, it appears that it is not smelting technology which differentiates Irish ironworking from the rest of Europe in the late medieval period, but its smithing technology and organisation. The Irish iron-smelting furnaces are generally of the slag-tapping variety, like their European counterparts. Smithing, on the other hand, was carried out at ground level until at least the sixteenth century in Ireland, whereas waist-level hearths became the norm further afield from the fourteenth century onwards. Ceramic tuyeres continued to be used as bellows protectors, whereas these are unknown elsewhere in Europe. Moreover, the lack of market centres at different times in late medieval Ireland led to the appearance of isolated rural forges, a type of site not encountered in other European countries during that period. Where these market centres were present, they appear to have been the settings where bloom smithing was carried out. In summary, the research below not only offered us the opportunity to give late medieval ironworking the place it deserves in the broader knowledge of Ireland's past, but also provided a base for future research within the discipline, as well as a research model applicable to different time periods, geographical areas and, perhaps, different industries.

Abstract:

This thesis is centred on two experimental fields of optical micro- and nanofibre research: higher mode generation/excitation and evanescent field optical manipulation. Standard, commercial, single-mode silica fibre is used throughout most of the experiments; this generally produces high-quality, single-mode micro- or nanofibres when tapered in a flame-heated pulling rig in the laboratory. Single-mode fibre can also support higher transverse modes when transmitting wavelengths below the cut-off of its defined single-mode regime. To investigate this, a first-order Laguerre-Gaussian beam, LG01, of 1064 nm wavelength and doughnut-shaped intensity profile is generated in free space via spatial light modulation. This technique facilitates coupling to the LP11 fibre mode in two-mode fibre, and convenient, fast switching to the fundamental mode via computer-generated hologram modulation. Following LP11 mode loss when exponentially tapering 125 μm diameter fibre, two-mode fibre with a cladding diameter of 80 μm is selected for testing, since it is more suitable for satisfying the adiabatic criteria for fibre tapering. Proving a fruitful endeavour, experiments show a transmission of 55% of the original LP11 mode set (comprising the TE01, TM01 and HE21e,o true modes) in submicron fibres. Furthermore, by observing pulling dynamics and progressive mode-loss behaviour, it is possible to produce a nanofibre which supports only the TE01 and TM01 modes, while suppressing the HE21e,o elements of the LP11 group. This result provides a basis for experimental studies of atom trapping via mode interference, and offers a new set of evanescent field geometries for sensing and particle manipulation applications. The thesis highlights the experimental results of the research unit's Cold Atom subgroup, who successfully integrated one such higher-mode nanofibre into a cloud of cold rubidium atoms. This led to the detection of stronger signals of resonance fluorescence coupling into the nanofibre, and of light absorption by the atoms, due to the presence of higher guided modes within the fibre. Theoretical work on the impact of the curved nanofibre surface on the atom-surface van der Waals interaction is also presented, showing a clear deviation of the potential from the commonly-used flat-surface approximation. Optical micro- and nanofibres are also useful tools for evanescent-field mediated optical manipulation; this includes propulsion, defect-induced trapping, mass migration and size-sorting of micron-scale particles in dispersion. Similar early trapping experiments are described in this thesis, and the resulting motivations for developing a targeted, site-specific particle induction method are given. The integration of optical nanofibres into an optical tweezers is presented, facilitating individual and group isolation of selected particles, and their controlled positioning and conveyance in the evanescent field. The effects of particle size and nanofibre diameter on pronounced scattering are experimentally investigated in this system, as are optical binding effects between adjacent particles in the evanescent field. Such inter-particle interactions lead to regulated self-positioning and particle-chain speed enhancements.
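Which transverse modes a step-index fibre guides is governed by its normalized frequency V = (2πa/λ)·NA: the fibre is single-mode for V < 2.405, while the LP11 group is additionally guided up to V ≈ 3.832. A minimal sketch with illustrative (assumed) core parameters, not the specific fibres used in the thesis:

```python
import math

def v_number(core_radius_um, wavelength_um, n_core, n_clad):
    """Normalized frequency of a step-index fibre; single-mode for V < 2.405,
    LP11 additionally guided for 2.405 < V < 3.832."""
    numerical_aperture = math.sqrt(n_core**2 - n_clad**2)
    return 2 * math.pi * core_radius_um / wavelength_um * numerical_aperture

# Assumed weakly guiding silica fibre parameters, for illustration only.
v = v_number(core_radius_um=4.1, wavelength_um=1.064, n_core=1.4504, n_clad=1.4447)
print(f"V = {v:.2f}")  # ~3.1: guides LP01 plus the LP11 group at 1064 nm
```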

Abstract:

In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate the dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on a linear programming approach; the second uses a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); the third makes use of matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
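The baseline operation throughout, before any trade-off information is added, is the elimination of Pareto-dominated utility vectors. A minimal sketch of this plain dominance check and undominated-set computation; the thesis's trade-off-induced preference relation refines this ordering:

```python
def dominates(u, v):
    """Pareto dominance for maximization: u is at least as good as v on every
    objective and strictly better on at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def maximal_set(vectors):
    """Return the Pareto-undominated utility vectors."""
    return [u for u in vectors if not any(dominates(v, u) for v in vectors)]

# Toy utility vectors in R^2 (both objectives maximized).
print(maximal_set([(3, 1), (2, 2), (1, 3), (2, 1)]))
# -> [(3, 1), (2, 2), (1, 3)]; (2, 1) is dominated by (2, 2).
```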

Abstract:

Community-based management and the establishment of marine reserves have been advocated worldwide as means to overcome overexploitation of fisheries. Yet researchers and managers are divided regarding the effectiveness of these measures. The "tragedy of the commons" model is often accepted as a universal paradigm, which assumes that unless managed by the State or privatized, common-pool resources are inevitably overexploited due to conflicts between the self-interest of individuals and the goals of a group as a whole. Under this paradigm, the emergence and maintenance of effective community-based efforts that include cooperative risky decisions, such as the establishment of marine reserves, could not occur. In this paper, we question these assumptions and show that outcomes of commons dilemmas can be complex and scale-dependent. We studied the evolution and effectiveness of a community-based management effort to establish, monitor, and enforce a marine reserve network in the Gulf of California, Mexico. Our findings build on social and ecological research before (1997-2001), during (2002) and after (2003-2004) the establishment of the marine reserves, which included participant observation in >100 fishing trips and meetings, interviews, as well as fishery-dependent and fishery-independent monitoring. We found that locally crafted and enforced harvesting rules led to a rapid increase in resource abundance. Nevertheless, news about this increase spread quickly at a regional scale, resulting in poaching by outsiders and a subsequent rapid cascading effect on fishing resources and on compliance with the locally designed rules. We show that cooperation for the management of common-pool fisheries, in which marine reserves form a core component of the system, can emerge, evolve rapidly, and be effective at a local scale, even in recently organized fisheries. Stakeholder participation in monitoring, where there is rapid feedback of the system's response, can play a key role in reinforcing cooperation. However, without cross-scale linkages with higher levels of governance, an increase in local fishery stocks may attract outsiders who, if not restricted, will overharvest and threaten local governance. Fishers and fishing communities require incentives to maintain their management efforts. Rewarding effective local management with formal cross-scale governance recognition and support can generate these incentives.

Abstract:

A model of telescoping is proposed that assumes no systematic errors in dating. Rather, the overestimation of recent occurrences of events is based on the combination of three factors: (1) retention is greater for recent events; (2) errors in dating, though unbiased, increase linearly with the time since the dated event; and (3) intrusions often occur from events outside the period being asked about, but such intrusions cannot come from events that have not yet occurred. In Experiment 1, we found that recall for colloquia fell markedly over a 2-year interval, the magnitude of errors in psychologists' dating of the colloquia increased at a rate of 0.4 days per day of delay, and the direction of the dating error was toward the middle of the interval. In Experiment 2, the model used the retention function and dating errors from the first study to predict the distribution of the actual dates of colloquia recalled as being within a 5-month period. In Experiment 3, the findings of the first study were replicated with colloquia given by, instead of for, the subjects.
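The model's mechanics are straightforward to reproduce numerically. Below is a minimal Monte Carlo sketch with assumed parameter values (exponential retention with an arbitrary half-life; Gaussian dating error growing at the reported 0.4 days per day; no intrusions from the future); it is an illustration of the three factors, not a fit to the paper's data:

```python
import random

def simulate(n_events=100_000, horizon=730, window=150, half_life=180):
    """Count events recalled as falling within the last `window` days."""
    recalled, truly_recent = 0, 0
    for _ in range(n_events):
        age = random.uniform(0, horizon)             # true days since the event
        if random.random() > 0.5 ** (age / half_life):
            continue                                  # forgotten (factor 1)
        dated = age + random.gauss(0, 0.4 * age)      # unbiased error (factor 2)
        dated = max(dated, 0.0)                       # no future events (factor 3)
        if dated <= window:
            recalled += 1
            truly_recent += age <= window
    return recalled, truly_recent

recalled, truly_recent = simulate()
print(f"{recalled - truly_recent} of {recalled} events recalled as recent "
      "actually occurred before the period (forward telescoping)")
```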

Abstract:

The word 'impromptu' began to appear in music literature in the early 19th century, specifically as the title for a relatively short composition written for solo piano. The first impromptus appear to have been named so by the publishers. However, the composers themselves soon embraced the title to indicate, for the most part, fairly short character pieces. Impromptus do not follow any specific structural pattern, although many are cast in ternary form. The formal design ranges from strict compound ternary in the early impromptus to through-composed and variation forms. The peak of the impromptu's popularity undoubtedly came during the middle and late 19th century. However, they are still being composed today, albeit much less frequently. Although there have been many variants of the impromptu in relation to formal design and harmonic language over the years, the essence of the impromptu remains the same: it is still a short character piece with a general feeling of spontaneity. Overall, impromptus may be categorized into several different groups: some appear as part of a larger cycle, such as Dvořák's G minor Impromptu from his Piano Pieces, B. 110; many others use an element of an additional genre that enhances the character of the impromptu, such as Liszt's Valse-Impromptu and Antonio Bibalo's Tango Impromptu; yet another group consists of works based on opera themes, such as Liszt's Impromptu brillant sur des thèmes de Rossini et Spontini and Czerny's Impromptus et variations sur Oberon, Op. 134. My recording project includes well-known impromptus, such as Schubert's Op. 142 and the four by Chopin, as well as lesser-known works that have not been performed or recorded often. Four impromptus have been recorded here for the first time, namely those written by Leopold Godowsky, Antonio Bibalo, Altin Volaj, and Nikolay Mazhara. I personally requested the last two of these composers to contribute impromptus to this project. My selection represents works by twenty composers and reflects the different types of impromptus that have been encountered through almost two hundred years of the genre's existence, from approximately 1817 (Voříšek) to 2008 (Volaj and Mazhara).

Abstract:

It is widely appreciated that larvae of the nematode Caenorhabditis elegans arrest development by forming dauer larvae in response to multiple unfavorable environmental conditions. C. elegans larvae can also reversibly arrest development earlier, during the first larval stage (L1), in response to starvation. "L1 arrest" (also known as "L1 diapause") occurs without morphological modification but is accompanied by increased stress resistance. Caloric restriction and periodic fasting can extend adult lifespan, and developmental models are critical to understanding how the animal is buffered from fluctuations in nutrient availability, impacting lifespan. L1 arrest provides an opportunity to study nutritional control of development. Given its relevance to aging, diabetes, obesity and cancer, interest in L1 arrest is increasing, and signaling pathways and gene regulatory mechanisms controlling arrest and recovery have been characterized. Insulin-like signaling is a critical regulator, and it is modified by and acts through microRNAs. DAF-18/PTEN, AMP-activated kinase and fatty acid biosynthesis are also involved. The nervous system, epidermis, and intestine contribute systemically to regulation of arrest, but cell-autonomous signaling likely contributes to regulation in the germline. A relatively small number of genes affecting starvation survival during L1 arrest are known, and many of them also affect adult lifespan, reflecting a common genetic basis ripe for exploration. mRNA expression is well characterized during arrest, recovery, and normal L1 development, providing a metazoan model for nutritional control of gene expression. In particular, post-recruitment regulation of RNA polymerase II is under nutritional control, potentially contributing to a rapid and coordinated response to feeding. The phenomenology of L1 arrest will be reviewed, as well as regulation of developmental arrest and starvation survival by various signaling pathways and gene regulatory mechanisms.

Abstract:

Modulatory descending neurons (DNs) that link the brain to body motor circuits, including dopaminergic DNs (DA-DNs), are thought to contribute to the flexible control of behavior. Dopamine elicits locomotor-like outputs and influences neuronal excitability in isolated body motor circuits over tens of seconds to minutes, but it remains unknown how and over what time scale DA-DN activity relates to movement in behaving animals. To address this question, we identified DA-DNs in the Drosophila brain and developed an electrophysiological preparation to record and manipulate the activity of these cells during behavior. We find that DA-DN spike rates are rapidly modulated during a subset of leg movements and scale with the total speed of ongoing leg movements, whether occurring spontaneously or in response to stimuli. However, activating DA-DNs does not elicit leg movements in intact flies, nor do acute bidirectional manipulations of DA-DN activity affect the probability or speed of leg movements over a time scale of seconds to minutes. Our findings indicate that in the context of intact descending control, changes in DA-DN activity are not sufficient to influence ongoing leg movements and open the door to studies investigating how these cells interact with other descending and local neuromodulatory inputs to influence body motor output.

Abstract:

The outcomes for both (i) radiation therapy and (ii) preclinical small animal radiobiology studies are dependent on the delivery of a known quantity of radiation to a specific and intentional location. Adverse effects can result from these procedures if the dose to the target is too high or too low, and can also result from an incorrect spatial distribution, in which nearby normal healthy tissue can be undesirably damaged by poor radiation delivery techniques. Thus, in mice and humans alike, the spatial dose distributions from radiation sources should be well characterized in terms of the absolute dose quantity, and with pin-point accuracy. When dealing with the steep spatial dose gradients consequential to either (i) high dose rate (HDR) brachytherapy or (ii) the small organs and tissue inhomogeneities of mice, obtaining accurate and highly precise dose results can be very challenging, considering that commercially available radiation detection tools, such as ion chambers, are often too large for in-vivo use.

In this dissertation two tools are developed and applied for both clinical and preclinical radiation measurement. The first tool is a novel radiation detector for acquiring physical measurements, fabricated from an inorganic nano-crystalline scintillator that has been fixed on an optical fiber terminus. This dosimeter allows for the measurement of point doses to sub-millimeter resolution, and has the ability to be placed in-vivo in humans and small animals. Real-time data is displayed to the user to provide instant quality assurance and dose-rate information. The second tool utilizes an open source Monte Carlo particle transport code, and was applied for small animal dosimetry studies to calculate organ doses and recommend new techniques of dose prescription in mice, as well as to characterize dose to the murine bone marrow compartment with micron-scale resolution.

Hardware design changes were implemented to reduce the overall fiber diameter to <0.9 mm for the nano-crystalline scintillator based fiber optic detector (NanoFOD) system. The lower limit of device sensitivity was found to be approximately 0.05 cGy/s. Herein, this detector was demonstrated to perform quality assurance of clinical 192Ir HDR brachytherapy procedures, providing dose measurements comparable to thermo-luminescent dosimeters and accuracy within 20% of the treatment planning software (TPS) for the 27 treatments conducted, with an inter-quartile range of the ratio to the TPS dose value of 0.08 (1.02 - 0.94). After removing contaminant signals (Cerenkov and diode background), calibration of the detector enabled accurate dose measurements for vaginal applicator brachytherapy procedures. For 192Ir use, the energy response changed by a factor of 2.25 over source-to-detector distance (SDD) values of 3 to 9 cm; however, a cap made of 0.2 mm thick silver reduced the energy dependence to a factor of 1.25 over the same SDD range, at the cost of reducing overall sensitivity by 33%.

For preclinical measurements, the dose accuracy of the NanoFOD was within 1.3% of MOSFET-measured dose values in a cylindrical mouse phantom for 225 kV x-ray irradiation at angles of 0°, 90°, 180°, and 270°. The NanoFOD exhibited small changes in angular sensitivity, with a coefficient of variation (COV) of 3.6% at 120 kV and 1% at 225 kV. When the NanoFOD was placed alongside a MOSFET in the liver of a sacrificed mouse and treatment was delivered at 225 kV with a 0.3 mm Cu filter, the dose difference was only 1.09% with use of the 4 x 4 cm collimator, and -0.03% with no collimation. Additionally, the NanoFOD utilized a scintillator of 11 µm thickness to measure small x-ray fields for microbeam radiation therapy (MRT) applications, and achieved 2.7% dose accuracy for the microbeam peak in comparison to radiochromic film. Modest differences in the measured full-width at half-maximum lateral dimension of the MRT beam were observed between the NanoFOD (420 µm) and radiochromic film (320 µm), but these differences have been explained mostly as an artifact of the geometry used and of volumetric effects in the scintillator material. Characterization of the energy dependence of the yttrium-oxide based scintillator material was performed in the range of 40-320 kV (2 mm Al filtration), and the maximum device sensitivity was achieved at 100 kV. Tissue maximum ratio measurements were carried out on a small animal x-ray irradiator system at 320 kV and demonstrated an average difference of 0.9% compared to a MOSFET dosimeter in the range of 2.5 to 33 cm depth in tissue-equivalent plastic blocks. Irradiation of the NanoFOD fiber and scintillator material on a 137Cs gamma irradiator to 1600 Gy did not produce any measurable change in light output, suggesting that the NanoFOD system may be re-used without the need for replacement or recalibration over its lifetime.

For small animal irradiator systems, researchers can deliver a given dose to a target organ by controlling the exposure time. Currently, researchers calculate this exposure time by dividing the total dose that they wish to deliver by a single provided dose rate value. This method is independent of the target organ. Studies conducted here used Monte Carlo particle transport codes to justify a new method of dose prescription in mice that considers organ-specific doses. Monte Carlo simulations were performed in the Geant4 Application for Tomographic Emission (GATE) toolkit using a MOBY mouse whole-body phantom. The non-homogeneous phantom was comprised of 256 x 256 x 800 voxels of size 0.145 x 0.145 x 0.145 mm3. Differences of up to 20-30% in dose to soft-tissue target organs were demonstrated during whole-body irradiation of mice, and methods for alleviating these errors were suggested, utilizing organ-specific and x-ray tube filter-specific dose rates for all irradiations.
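The prescription change itself is simple arithmetic: beam-on time becomes the target dose divided by a dose rate looked up per organ and filter, rather than by one machine-wide rate. A minimal sketch with hypothetical placeholder rates (not the dissertation's GATE-derived values):

```python
# Hypothetical dose rates (cGy/min) per (organ, filter) pair, for illustration only.
organ_dose_rates_cgy_per_min = {
    ("liver", "0.3mm_Cu"): 218.0,
    ("lung", "0.3mm_Cu"): 242.0,
    ("bone_marrow", "0.3mm_Cu"): 255.0,
}

def exposure_time_min(target_dose_cgy, organ, filtration):
    """Beam-on time = target dose / organ- and filter-specific dose rate."""
    return target_dose_cgy / organ_dose_rates_cgy_per_min[(organ, filtration)]

# Delivering 600 cGy to liver vs. lung requires different beam-on times.
for organ in ("liver", "lung"):
    print(organ, round(exposure_time_min(600.0, organ, "0.3mm_Cu"), 2), "min")
```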

Monte Carlo analysis was used on 1 µm resolution CT images of a mouse femur and a mouse vertebra to calculate the dose gradients within the bone marrow (BM) compartment of mice for different radiation beam qualities relevant to x-ray and isotope-type irradiators. Results indicated that soft x-ray beams (160 kV at 0.62 mm Cu HVL and 320 kV at 1 mm Cu HVL) lead to substantially higher dose to BM in close proximity to mineral bone (within about 60 µm) as compared to hard x-ray beams (320 kV at 4 mm Cu HVL) and isotope-based gamma irradiators (137Cs). The average dose increases to the BM in the vertebra for these four radiation beam qualities were found to be 31%, 17%, 8%, and 1%, respectively. Both in-vitro and in-vivo experimental studies confirmed these simulation results, demonstrating that the 320 kV, 1 mm Cu HVL beam caused statistically significant increased killing of BM cells at 6 Gy dose levels in comparison to both the 320 kV, 4 mm Cu HVL and the 662 keV 137Cs beams.

Abstract:

The paper considers the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence. We show that in the preemptive case the problem is polynomially solvable for an arbitrary number of machines. If preemption is not allowed, the problem is NP-hard in the strong sense if the number of machines is variable, and is NP-hard in the ordinary sense in the case of two machines. For the latter case we give a heuristic algorithm that runs in linear time and produces a schedule with makespan at most 5/4 times the optimal value. We also show that the two-machine problem in the nonpreemptive case is solvable in pseudopolynomial time by a dynamic programming algorithm, and that the algorithm can be converted into a fully polynomial approximation scheme. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 705-731, 1998.
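For intuition on the quantities involved, the classical lower bound on any open shop makespan (with or without preemption) is the larger of the heaviest machine load and the longest job; the paper's 5/4-approximation heuristic and dynamic program are more involved and are not reproduced here. A minimal sketch:

```python
def open_shop_lower_bound(p):
    """p[i][j] = processing time of job j on machine i. Any open shop schedule
    has makespan >= every machine's total load and >= every job's total
    processing time, so the maximum of these is a valid lower bound."""
    machine_loads = [sum(row) for row in p]
    job_lengths = [sum(col) for col in zip(*p)]
    return max(max(machine_loads), max(job_lengths))

# Two machines, three jobs (toy data).
p = [[3, 5, 2],   # machine 1
     [4, 1, 6]]   # machine 2
print(open_shop_lower_bound(p))  # -> 11, machine 2's total load
```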

Abstract:

A review of the atomistic modelling of the behaviour of nano-scale structures and processes via the molecular dynamics (MD) simulation method for a canonical ensemble is presented. Three areas of application in condensed matter physics are considered. We focus on the adhesive and indentation properties of solid surfaces in nano-contacts, the nucleation and growth of nano-phase metallic and semi-conducting atomic and molecular films on supporting substrates, and the nano- and multi-scale crack propagation properties of metallic lattices. A set of simulations selected from these fields is discussed, together with a brief introduction to the methodology of MD simulation. The pertinent inter-atomic potentials that model the energetics of the metallic and semi-conducting systems are also given.
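At the core of any such MD study is the integration of Newton's equations of motion under an inter-atomic potential. Below is a minimal velocity-Verlet sketch with a Lennard-Jones pair potential in reduced units; the metallic and semi-conducting systems reviewed require many-body potentials (e.g. embedded-atom type), and a canonical ensemble additionally requires a thermostat, both omitted here for brevity:

```python
import numpy as np

def lj_forces(pos):
    """Pairwise Lennard-Jones forces with epsilon = sigma = 1 (reduced units)."""
    f = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            r2 = r @ r
            # F(r)/r for U(r) = 4(r^-12 - r^-6): 24(2 r^-14 - r^-8)
            fmag = 24.0 * (2.0 / r2**7 - 1.0 / r2**4)
            f[i] += fmag * r
            f[j] -= fmag * r
    return f

def velocity_verlet(pos, vel, dt=0.005, steps=100):
    """Advance positions/velocities (unit mass) by `steps` timesteps."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f          # half kick
        pos += dt * vel              # drift
        f = lj_forces(pos)
        vel += 0.5 * dt * f          # half kick
    return pos, vel

# Two atoms released slightly beyond the potential minimum oscillate.
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
print(velocity_verlet(pos, vel)[0])
```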

Abstract:

Measurements of suspended particulate matter (SPM) and turbulence have been obtained over five tidal surveys during spring and summer 2010 at station L4 (50.25° N, 04.22° W, depth 50 m) in the Western English Channel. The relationship between turbulence intensity and bed stress is explored, with an in-line holographic imaging system evaluating the extent to which material is resuspended. Image analysis allows for the identification of SPM above a size threshold of 200 µm, capturing particle variability across tidal cycles and the two seasons. Dissipation of turbulent kinetic energy, which exceeds 10^-5 W kg^-1, yields maximum values of bed stress of between 0.17 and 0.20 N m^-2, frequently resulting in the resuspension of material from the bed. Resuspension is shown to promote aggregation of SPM into flocs, where the size of such particles is theoretically determined by the Kolmogorov microscale, l_k. During the spring surveys, flocs larger than l_k were observed, though this was not repeated during summer. It is proposed that the presence of gelatinous, biological material in spring allows flocculated particles to exceed l_k. This suggests that under specific circumstances, the limiting factor on the growth of flocculated SPM is not only turbulence, as previously thought, but also the presence or absence of certain types of biological particle.
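For reference, the Kolmogorov microscale follows directly from the reported dissipation rate via l_k = (ν³/ε)^(1/4). A minimal worked example, assuming a kinematic viscosity for seawater of about 10^-6 m² s^-1 (an assumed typical value; only ε comes from the text):

```python
nu = 1.0e-6    # kinematic viscosity of seawater, m^2/s (assumed)
eps = 1.0e-5   # dissipation of turbulent kinetic energy, W/kg (from the text)

l_k = (nu**3 / eps) ** 0.25
print(f"l_k = {l_k * 1e6:.0f} um")  # ~562 um: flocs above this exceed the microscale
```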