960 results for material outgassing rate


Relevance: 20.00%

Publisher:

Abstract:

The ever-increasing demand for faster computers in various areas, ranging from entertainment electronics to computational science, is pushing the semiconductor industry towards its limits on decreasing the sizes of electronic devices based on conventional materials. According to the famous law by Gordon E. Moore, a co-founder of the world's largest semiconductor company Intel, transistor sizes should decrease to the atomic level during the next few decades to maintain the present rate of increase in computational power. As leakage currents become a problem for traditional silicon-based devices already at nanometre-scale sizes, an approach other than further miniaturization is needed to meet the needs of future electronics. A relatively recently proposed possibility for further progress in electronics is to replace silicon with carbon, another element from the same group in the periodic table. Carbon is an especially interesting material for nanometre-sized devices because it naturally forms different nanostructures, some of which have unique properties. The most widely suggested allotrope of carbon for electronics is a tubular molecule whose atomic structure resembles that of graphite. These carbon nanotubes are popular both among scientists and in industry because of a wide range of exciting properties: for example, carbon nanotubes are electronically unique and have an uncommonly high strength-to-mass ratio, which has resulted in a multitude of proposed applications in several fields. In fact, owing to remaining difficulties in the large-scale production of nanotube-based electronic devices, fields other than electronics have been faster to develop profitable nanotube applications. In this thesis, the possibility of using low-energy ion irradiation to ease the route towards nanotube applications is studied through atomistic simulations at different levels of theory. Specifically, molecular dynamics simulations with analytical interaction models are used to follow the irradiation of nanotubes with different impurity atoms, introduced into these structures in order to gain control over their electronic character. Ion irradiation is shown to be a very efficient method for replacing carbon atoms with boron or nitrogen impurities in single-walled nanotubes. Furthermore, potassium irradiation of multi-walled and fullerene-filled nanotubes is demonstrated to result in small potassium clusters in the hollow parts of these structures. Molecular dynamics simulations are further used to give an example of using irradiation to improve contacts between a nanotube and a silicon substrate. Methods based on density-functional theory are used to gain insight into the defect structures inevitably created during the irradiation. Finally, a new simulation code utilizing the kinetic Monte Carlo method is introduced to follow the time evolution of irradiation-induced defects in carbon nanotubes on macroscopic time scales. Overall, the molecular dynamics simulations presented in this thesis show that ion irradiation is a promising method for tailoring nanotube properties in a controlled manner. The calculations made with density-functional-theory-based methods indicate that it is energetically favourable for even relatively large defects to transform so as to keep the atomic configuration as close to that of the pristine nanotube as possible. The kinetic Monte Carlo studies reveal that elevated temperatures during processing significantly enhance the self-healing of nanotubes, ensuring low defect concentrations after treatment with energetic ions. Thereby, nanotubes can retain their desired properties also after the irradiation. Throughout the thesis, atomistic simulations combining different levels of theory are demonstrated to be an important tool for determining the optimal conditions for irradiation experiments, because the atomic-scale processes at short time scales are extremely difficult to study by any other means.
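The kinetic Monte Carlo approach referred to above advances the system from event to event using thermally activated rates. The following is a minimal, self-contained sketch of a residence-time (BKL-type) kMC step; the event list, attempt frequencies and migration barriers are illustrative placeholders, not the parameters or code of the thesis.

```python
import math
import random

K_B = 8.617e-5  # Boltzmann constant in eV/K


def arrhenius_rate(prefactor_hz, barrier_ev, temperature_k):
    """Rate of a thermally activated event (Arrhenius form)."""
    return prefactor_hz * math.exp(-barrier_ev / (K_B * temperature_k))


def kmc_step(events, temperature_k, rng=random):
    """One residence-time kinetic Monte Carlo step.

    `events` is a list of (name, prefactor_hz, barrier_ev) tuples.
    Returns the chosen event name and the elapsed time in seconds.
    """
    rates = [arrhenius_rate(p, e, temperature_k) for _, p, e in events]
    total = sum(rates)
    # Pick an event with probability proportional to its rate.
    threshold = rng.random() * total
    cumulative, chosen = 0.0, events[-1][0]
    for (name, _, _), rate in zip(events, rates):
        cumulative += rate
        if cumulative >= threshold:
            chosen = name
            break
    # Advance the clock by an exponentially distributed waiting time.
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt


# Illustrative defect events on an irradiated nanotube (placeholder barriers in eV).
events = [("vacancy_migration", 1e13, 1.0),
          ("adatom_vacancy_recombination", 1e13, 0.8)]
time_s, temperature_k = 0.0, 900.0
for _ in range(10):
    event, dt = kmc_step(events, temperature_k)
    time_s += dt
```

Because the waiting time scales with the inverse of the total rate, such a loop can reach macroscopic times far beyond what molecular dynamics alone can cover.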

Relevance: 20.00%

Publisher:

Abstract:

Fusion energy is a clean and safe solution to the intricate question of how to produce non-polluting and sustainable energy for the constantly growing population. The fusion process does not result in any harmful waste or greenhouse gases, since a small amount of helium is the only by-product produced when the hydrogen isotopes deuterium and tritium are used as fuel. Moreover, deuterium is abundant in seawater and tritium can be bred from lithium, a common metal in the Earth's crust, rendering the fuel reservoirs practically bottomless. Due to its enormous mass, the Sun has been able to utilize fusion as its main energy source ever since it was born, but here on Earth we must find other means to achieve the same. Inertial fusion involving powerful lasers and thermonuclear fusion employing extreme temperatures are examples of successful methods; however, these have yet to produce more energy than they consume. In thermonuclear fusion, the fuel is held inside a tokamak, a doughnut-shaped chamber with strong magnets wrapped around it. Once the fuel is heated up, it is controlled with the help of these magnets, since the required temperatures (over 100 million degrees Celsius) separate the electrons from the nuclei, forming a plasma. Once fusion reactions occur, excess binding energy is released as energetic neutrons, which are absorbed in water in order to produce steam that runs turbines. Keeping the power losses from the plasma low, thus allowing a high number of reactions, is a challenge. Another challenge is related to the reactor materials: since the confinement of the plasma particles is not perfect, the reactor walls and structures are bombarded by particles, and material erosion and activation as well as plasma contamination are expected. In addition, the high-energy neutrons cause radiation damage in the materials, leading, for instance, to swelling and embrittlement. In this thesis, the behaviour of materials situated in a fusion reactor was studied using molecular dynamics simulations. Simulations of processes in the next-generation fusion reactor ITER include the reactor materials beryllium, carbon and tungsten as well as the plasma hydrogen isotopes. This means that interaction models, i.e. interatomic potentials, for this complicated quaternary system are needed. The task of finding such potentials is nonetheless nearly complete, since models for the beryllium-carbon-hydrogen interactions were constructed in this thesis and, as a continuation of that work, a beryllium-tungsten model is under development. These potentials are combinable with the earlier tungsten-carbon-hydrogen ones. The potentials were used to explain the chemical sputtering of beryllium under deuterium plasma exposure. In experiments, a large fraction of the sputtered beryllium atoms were observed to be released as BeD molecules, and the simulations identified the swift chemical sputtering mechanism, previously not believed to be important in metals, as the underlying mechanism. Radiation damage in the reactor structural materials vanadium, iron and iron-chromium, as well as in the wall material tungsten and the mixed alloy tungsten carbide, was also studied in this thesis. Interatomic potentials for vanadium, tungsten and iron were modified to be better suited for simulating the collision cascades that form during particle irradiation, and the potential features affecting the resulting primary damage were identified. Including the often neglected electronic effects in the simulations was also shown to have an impact on the damage; with proper tuning of the electron-phonon interaction strength, experimentally measured quantities related to ion-beam mixing in iron could be reproduced. The damage in tungsten carbide alloys showed elemental asymmetry, as the major part of the damage consisted of carbon defects. On the other hand, modelling the damage in the iron-chromium alloy, essentially representing steel, showed that small additions of chromium do not noticeably affect the primary damage in iron. Since a complete assessment of the response of a material in a future full-scale fusion reactor is not achievable using experimental techniques alone, molecular dynamics simulations are of vital help. This thesis has not only provided insight into complicated reactor processes and improved current methods, but also offered tools for further simulations. It is therefore an important step towards making fusion energy more than a future goal.
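The interatomic potentials discussed above are analytical bond-order models fitted to the Be-C-H and W-C-H systems; their functional forms are not reproduced in the abstract. Purely as an illustration of how any interatomic potential feeds forces into a cascade simulation, the sketch below evaluates pairwise Lennard-Jones forces (a placeholder potential, not the fitted ones) and advances the atoms with a velocity-Verlet step.

```python
import numpy as np


def pair_forces(positions, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces; a stand-in for the analytical
    bond-order potentials used in the actual cascade simulations.

    positions: (N, 3) array of coordinates. Returns an (N, 3) force array.
    """
    n = positions.shape[0]
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):
            rij = positions[i] - positions[j]
            r = np.linalg.norm(rij)
            sr6 = (sigma / r) ** 6
            # -dU/dr for U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)
            f_mag = 24.0 * epsilon * (2.0 * sr6 ** 2 - sr6) / r
            forces[i] += f_mag * rij / r
            forces[j] -= f_mag * rij / r
    return forces


def velocity_verlet_step(x, v, mass, dt):
    """Advance positions and velocities by one MD time step."""
    f = pair_forces(x)
    v_half = v + 0.5 * dt * f / mass
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * pair_forces(x_new) / mass
    return x_new, v_new


# A recoil event: one atom receives a high initial velocity, as in a cascade.
grid = np.arange(4, dtype=float)
x = np.array([[i, j, 0.0] for i in grid for j in grid]) * 1.5
v = np.zeros_like(x)
v[0] = np.array([5.0, 0.0, 0.0])  # primary knock-on atom
for _ in range(200):
    x, v = velocity_verlet_step(x, v, mass=1.0, dt=1e-3)
```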

Relevance: 20.00%

Publisher:

Abstract:

Atmospheric aerosol particles affect the global climate as well as human health. In this thesis, the formation of nanometre-sized atmospheric aerosol particles and their subsequent growth was observed to occur all around the world. Typical formation rates of 3 nm particles varied from 0.01 to 10 cm^-3 s^-1. Formation rates one order of magnitude higher were detected in urban environments, and the highest formation rates, up to 10^5 cm^-3 s^-1, were detected in coastal areas and in industrial pollution plumes. Subsequent growth rates varied from 0.01 to 20 nm h^-1. The smallest growth rates were observed in polar areas and the largest in polluted urban environments, probably due to the competition between growth by condensation and loss by coagulation. The observed growth rates were used to calculate a proxy condensable vapour concentration and its source rate in vastly different environments, from pristine Antarctica to polluted India. The estimated concentrations varied by only two orders of magnitude, but the source rates for the vapours varied by up to four orders of magnitude; the highest source rates were in New Delhi and the lowest in Antarctica. Indirect methods were applied to study the growth of freshly formed particles in the atmosphere. A newly developed Water Condensation Particle Counter, TSI 3785, was also found to be a potential candidate for detecting the water solubility, and thus indirectly the composition, of atmospheric ultrafine particles. Based on indirect methods, the relative roles of sulphuric acid, non-volatile material and coagulation were investigated in rural Melpitz, Germany. Condensation of non-volatile material explained 20-40% of the growth and sulphuric acid most of the remainder, up to the point when the nucleation mode reached 10 to 20 nm in diameter; coagulation typically contributed less than 5%. Furthermore, hygroscopicity measurements were applied to determine the contribution of water-soluble and insoluble components in Athens. During more polluted days, the water-soluble components contributed more to the growth, whereas during periods of less anthropogenic influence, non-soluble compounds explained a larger fraction of the growth. In addition, long-range transport in a relatively polluted air mass to a measurement station in Finland was found to affect the hygroscopicity of the particles. This aging could have implications for cloud formation far away from the pollution sources.
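The conversion from an observed growth rate to a proxy condensable vapour concentration is not spelled out in the abstract; a commonly used starting point is the free-molecular (kinetic) condensation relation below, quoted here only as background, where alpha is the mass accommodation coefficient, m_v the vapour molecular mass, c-bar its mean thermal speed and rho the particle density.

```latex
\mathrm{GR} \;=\; \frac{\mathrm{d}d_p}{\mathrm{d}t}
\;\approx\; \frac{\alpha\, m_v\, \bar{c}\, C_v}{2\rho}
\qquad\Longrightarrow\qquad
C_v \;\approx\; \frac{2\rho\, \mathrm{GR}}{\alpha\, m_v\, \bar{c}} .
```

For sulphuric-acid-like vapours this corresponds to roughly 10^7 molecules cm^-3 per nm h^-1 of growth, which is consistent with the estimated concentrations spanning a much narrower range than the source rates.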

Relevance: 20.00%

Publisher:

Abstract:

Sequence design problems are considered in this paper. The problem of sum power minimization in a spread-spectrum system can be reduced to the problem of sum capacity maximization, and vice versa: a solution to one of the problems yields a solution to the other. Subsequently, conceptually simple sequence design algorithms known to hold for the white-noise case are extended to the colored-noise case. The algorithms yield an upper bound of 2N - L on the number of sequences, where N is the processing gain and L the number of non-interfering subsets of users. If some users (at most N - 1) are allowed to signal along a limited number of multiple dimensions, then N orthogonal sequences suffice.
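For orientation, the sum capacity maximized in such problems is usually written in the following standard form for an N-chip CDMA channel with white noise; in the colored-noise case the term sigma^2 I_N would be replaced by the noise covariance matrix. This is standard background, not the paper's exact formulation.

```latex
C_{\mathrm{sum}} \;=\; \tfrac{1}{2}\,\log_2 \det\!\Bigl( I_N + \sigma^{-2}\, S\,\mathrm{diag}(p_1,\dots,p_K)\, S^{\mathsf T} \Bigr),
```

where S is the N x K matrix of unit-energy spreading sequences, p_k the received user powers and sigma^2 the noise variance.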

Relevance: 20.00%

Publisher:

Abstract:

A Linear Processing Complex Orthogonal Design (LPCOD) is a $p \times n$ matrix $\mathcal{E}$ ($p \ge n$) in $k$ complex indeterminates $x_1, x_2, \ldots, x_k$ such that (i) the entries of $\mathcal{E}$ are complex linear combinations of $0$, $\pm x_i$, $i = 1, \ldots, k$, and their conjugates, and (ii) $\mathcal{E}^{H}\mathcal{E} = D$, where $\mathcal{E}^{H}$ is the Hermitian (conjugate transpose) of $\mathcal{E}$ and $D$ is a diagonal matrix whose $(i,i)$-th diagonal element is of the form $l_1^{(i)}|x_1|^2 + l_2^{(i)}|x_2|^2 + \cdots + l_k^{(i)}|x_k|^2$, where the $l_j^{(i)}$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, k$, are strictly positive real numbers and the condition $l_1^{(i)} = l_2^{(i)} = \cdots = l_k^{(i)}$, called the equal-weights condition, holds for all values of $i$. For square designs it is known that whenever an LPCOD exists without the equal-weights condition satisfied, there exists another LPCOD with identical parameters with $l_1^{(i)} = l_2^{(i)} = \cdots = l_k^{(i)} = 1$. This implies that the maximum possible rate for square LPCODs without the equal-weights condition is the same as that of square LPCODs with the equal-weights condition. In this paper, this result is extended to a subclass of non-square LPCODs. A set of sufficient conditions is identified such that whenever a non-square ($p > n$) LPCOD satisfies these conditions and does not satisfy the equal-weights condition, there exists another LPCOD with the same parameters $n$, $k$ and $p$, in the same complex indeterminates, with $l_1^{(i)} = l_2^{(i)} = \cdots = l_k^{(i)} = 1$.
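As a concrete reference point (not taken from the paper itself), the familiar $2 \times 2$ Alamouti design is a square complex orthogonal design that satisfies the equal-weights condition with all weights equal to one:

```latex
\mathcal{E} \;=\; \begin{pmatrix} x_1 & x_2 \\ -x_2^{*} & x_1^{*} \end{pmatrix},
\qquad
\mathcal{E}^{H}\mathcal{E} \;=\; \bigl(|x_1|^{2} + |x_2|^{2}\bigr)\, I_2 ,
```

so every $l_j^{(i)}$ equals 1. The result stated above says that, under the identified sufficient conditions, relaxing the equal-weights condition gains nothing in rate even for the considered subclass of non-square designs.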

Relevance: 20.00%

Publisher:

Abstract:

This paper deals with the development of simplified semi-empirical relations for predicting the residual velocities of small-calibre projectiles impacting mild steel target plates, normally or at an angle, and the ballistic limits of such plates. It is shown, for several impact cases for which test results on perforation of mild steel plates are available, that most of the existing semi-empirical relations, which are applicable only to normal projectile impact, do not yield satisfactory estimates of residual velocity. Furthermore, it is difficult to quantify some of the empirical parameters present in these relations for a given problem. With an eye towards simplicity and ease of use, two new regression-based relations employing standard material parameters are discussed here for predicting residual velocity and ballistic limit for both normal and oblique impact. The two expressions differ in whether quasi-static or strain-rate-dependent average plate material strength is used. Residual velocities yielded by the present semi-empirical models compare well with the experimental results. Additionally, ballistic limits from these relations show close correlation with the corresponding finite-element-based predictions.
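The regression-based relations themselves are not reproduced in the abstract. For orientation, semi-empirical residual-velocity models for plate perforation are often written in the Lambert-Jonas (Recht-Ipson-type) form below, with $a$, $p$ and the ballistic limit $v_{bl}$ fitted to test data; this is given purely as an illustrative family, not as the relations proposed in the paper.

```latex
v_r \;=\;
\begin{cases}
0, & v_i \le v_{bl},\\[4pt]
a\,\bigl(v_i^{\,p} - v_{bl}^{\,p}\bigr)^{1/p}, & v_i > v_{bl},
\end{cases}
```

where $v_i$ is the impact velocity and $v_r$ the residual velocity after perforation.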

Relevance: 20.00%

Publisher:

Abstract:

Wood is an important material for the construction and pulping industries. In the first part of this thesis, the microfibril angle of Sitka spruce wood was studied using x-ray diffraction. Sitka spruce (Picea sitchensis [Bong.] Carr.) is native to the west coast of North America, but due to its fast growth rate it has also been imported to Europe. So far, its nanometre-scale properties have not been systematically characterised. In this thesis the microfibril angle of Sitka spruce was shown to depend significantly on the origin of the tree in the first annual rings near the pith. Wood can be further processed to separate lignin from cellulose and hemicelluloses. Solid cellulose can act as a reducer for metal ions and is also a porous support for nanoparticles. By chemically reducing nickel or copper in the solid cellulose support it is possible to obtain small nanoparticles on the surfaces of the cellulose fibres. Cellulose-supported metal nanoparticles can potentially be used as environmentally friendly catalysts in organic chemistry reactions. In this thesis the sizes of the nickel- and copper-containing nanoparticles were studied using anomalous small-angle x-ray scattering and wide-angle x-ray scattering. The anomalous small-angle x-ray scattering experiments showed that the crystallite size of the copper oxide nanoparticles was the same as the size of the nanoparticles, so the nanoparticles were single crystals. The nickel-containing nanoparticles were amorphous but crystallised upon heating. The size of the nanoparticles was observed to be smaller when the reduction of nickel was done in an aqueous ammonium hydrate medium compared to reduction in aqueous solution. Lignin is typically seen as a side-product of the wood industries. It is the second most abundant natural polymer on Earth, and it has the potential to be a useful material for many purposes in addition to being an energy source for pulp mills. In this thesis, the morphology of several lignins, produced by different separation methods from wood, was studied using small-angle and ultra-small-angle x-ray scattering. It was shown that the fractal model previously proposed for the lignin structure does not apply to most of the extracted lignin types; the only lignin to which the fractal model could be applied was kraft lignin. In aqueous solutions the average shape of the low-molar-mass kraft lignin particles was observed to be elongated and flat. The average shape does not necessarily correspond to the shape of the individual particles because of the polydispersity of the fraction and the self-association of the particles. Lignins, and especially lignosulfonate, have many uses as dispersants, binders and emulsion stabilisers. In this thesis work the self-association of low-molar-mass lignosulfonate macromolecules was observed using small-angle x-ray scattering. By taking into account the polydispersity of the studied lignosulfonate fraction, the shape of the lignosulfonate particles was determined to be flat by fitting an oblate ellipsoidal model to the scattering intensity.
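The crystallite sizes compared above with the particle sizes from anomalous small-angle scattering are conventionally obtained from the broadening of wide-angle diffraction peaks via the Scherrer equation, quoted here as standard background rather than from the thesis:

```latex
D \;=\; \frac{K\,\lambda}{\beta\,\cos\theta},
```

where $D$ is the mean crystallite size, $K \approx 0.9$ a shape factor, $\lambda$ the x-ray wavelength, $\beta$ the peak width (FWHM, in radians) and $\theta$ the Bragg angle. Agreement between $D$ and the particle size from small-angle scattering is what indicates single-crystalline particles.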

Relevance: 20.00%

Publisher:

Abstract:

Stress- and strain-controlled tests of heat-treated high-strength rail steel (Australian Standard AS1085.1) have been performed in order to improve the characterisation of the material's ratcheting and fatigue wear behaviour. The hardness of the rail head material has also been studied, and it has been found that hardness decreases considerably beyond four millimetres below the rail top surface. Historically, researchers have used test coupons with circular cross-sections to conduct cyclic load tests. Such test coupons, typically five millimetres in gauge diameter and ten millimetres in grip diameter, are usually taken from the rail head sample. When there is considerable variation of material properties over the cross-section, it becomes likely that localised properties of the rail material will be missed. In another case from the literature, disks 47 mm in diameter for a twin-disk rolling contact test machine were obtained directly from the rail sample and used to validate ratcheting and rolling contact fatigue wear models. The question arises: how accurate are such tests, especially when large material property gradients exist? In this research paper, the effects of rail sampling location on the ratcheting behaviour of AS1085.1 rail steel were investigated using rectangular specimens obtained at four different depths, in order to observe their respective cyclic plasticity behaviour. The microstructural features of the test coupons were also analysed, especially the pearlite interlamellar spacing, which showed strong correlation with both the hardness and the cyclic plasticity behaviour of the material. This work ultimately provides new data and a testing methodology to aid the selection of valid parameters for material constitutive models, to better understand rail surface ratcheting and wear.
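For readers outside the ratcheting literature, the quantity tracked in such stress-controlled cyclic tests is usually the ratcheting strain of cycle $N$, defined from the peak strains of that cycle; this is a standard definition and not specific to the present paper.

```latex
\varepsilon_r^{(N)} \;=\; \frac{\varepsilon_{\max}^{(N)} + \varepsilon_{\min}^{(N)}}{2},
\qquad
\Delta\varepsilon_r^{(N)} \;=\; \varepsilon_r^{(N+1)} - \varepsilon_r^{(N)},
```

with the increment $\Delta\varepsilon_r^{(N)}$ giving the accumulation of mean strain per cycle.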

Relevance: 20.00%

Publisher:

Abstract:

It is known that by employing space-time-frequency codes (STFCs) in frequency-selective MIMO-OFDM systems, all three forms of diversity, viz. spatial, temporal and multipath, can be exploited. There exist space-time-frequency block codes (STFBCs) designed using orthogonal designs with a constellation precoder to obtain full diversity (Z. Liu, Y. Xin and G. Giannakis, IEEE Trans. Signal Processing, Oct. 2002). Since rate-one orthogonal designs exist only for two transmit antennas, rate-one, full-diversity STFBCs for more than two transmit antennas cannot be constructed using orthogonal designs. This paper presents a rate-one STFBC scheme for four transmit antennas designed using quasi-orthogonal designs together with co-ordinate interleaved orthogonal designs (Zafar Ali Khan and B. Sundar Rajan, Proc. ISIT 2002). Conditions on the signal sets that give full diversity are identified. Simulation results are presented to show the superiority of our codes over the existing ones.
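The rate-one design of the paper, which combines quasi-orthogonal designs with co-ordinate interleaved orthogonal designs, is not reproduced in the abstract. For reference, the standard rate-one quasi-orthogonal design for four transmit antennas is built from two $2 \times 2$ Alamouti blocks $A(x_1,x_2)$ and $A(x_3,x_4)$ as follows, shown here only to illustrate the quasi-orthogonal structure:

```latex
\mathcal{Q} \;=\;
\begin{pmatrix}
A(x_1,x_2) & A(x_3,x_4)\\
-A^{*}(x_3,x_4) & A^{*}(x_1,x_2)
\end{pmatrix}
\;=\;
\begin{pmatrix}
x_1 & x_2 & x_3 & x_4\\
-x_2^{*} & x_1^{*} & -x_4^{*} & x_3^{*}\\
-x_3^{*} & -x_4^{*} & x_1^{*} & x_2^{*}\\
x_4 & -x_3 & -x_2 & x_1
\end{pmatrix}.
```

Because $\mathcal{Q}^{H}\mathcal{Q}$ is no longer diagonal, symbols are decoded in pairs rather than singly, and full diversity requires suitable constraints on the signal sets, which is where conditions of the kind identified in the paper enter.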

Relevance: 20.00%

Publisher:

Abstract:

This paper presents a systematic construction of high-rate, full-diversity space-frequency block codes for MIMO-OFDM systems. While all prior constructions offer a maximum rate of only one complex symbol per channel use, our construction yields a rate equal to the number of transmit antennas while simultaneously achieving full diversity. The proposed construction works for an arbitrary number of transmit antennas and an arbitrary channel power delay profile. A key step in this construction is the generalization of the stacked-matrix code design criteria given by Bolcskei et al. (IEEE WCNC 2000). Explicit equivalence of our generalized code design criteria with the Hadamard-product-based criteria of W. Su et al. (IEEE Trans. Sig. Proc., Nov. 2003) is established, and new high-rate codes are constructed using our criteria.

Relevance: 20.00%

Publisher:

Abstract:

The problem of constructing space-time (ST) block codes over a fixed, desired signal constellation is considered. In this situation, there is a tradeoff between the transmission rate, as measured in constellation symbols per channel use, and the transmit diversity gain achieved by the code. The transmit diversity is a measure of the rate of polynomial decay of the pairwise error probability of the code with increasing signal-to-noise ratio (SNR). In the setting of a quasi-static channel model, let $n_t$ denote the number of transmit antennas and $T$ the block interval. For any $n_t \le T$, a unified construction of $n_t \times T$ ST codes is provided here for a class of signal constellations that includes the familiar pulse-amplitude (PAM), quadrature-amplitude (QAM) and $2^K$-ary phase-shift-keying (PSK) modulations as special cases. The construction is optimal as measured by the rate-diversity tradeoff and can achieve any given integer point on the rate-diversity tradeoff curve. An estimate of the coding gain realized is given. Other results presented here include i) an extension of the optimal unified construction to the multiple-fading-block case, ii) a version of the optimal unified construction in which the underlying binary block codes are replaced by trellis codes, iii) a linear dispersion form for the underlying binary block codes, iv) a Gray-mapped version of the unified construction, and v) a generalization of the construction to the $S$-ary case, corresponding to constellations of size $S^K$. Items ii) and iii) are aimed at simplifying the decoding of this class of ST codes.
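The rate-diversity tradeoff referred to above is, in the fixed-constellation setting, a Singleton-type bound; the commonly quoted form is given below for orientation as standard background, not copied from the paper:

```latex
R \;\le\; n_t - d + 1,
```

where $R$ is the rate in constellation symbols per channel use and $d$ the transmit diversity gain; a construction achieving every integer point $R = n_t - d + 1$, $1 \le d \le n_t$, is therefore optimal with respect to this tradeoff.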

Relevance: 20.00%

Publisher:

Abstract:

There is a relative absence of sociological and cultural research on how people deal with the death of a family member in contemporary western societies. Research on this topic has been dominated by experts in psychology, psychiatry and therapy, who mention the social context only in passing, if at all. This gives the impression that white westerners' bereavement experience is a purely psychological phenomenon, an inner journey, which follows a natural, universal path. Yet, as Tony Walter (1999) states, ignoring the influence of culture not only impoverishes the understanding of those who work with bereaved people, but also impoverishes sociology and cultural studies by excluding from their domain a key social phenomenon. This study explores the cultural dimension of grief through narratives told by fifteen recently bereaved Finnish women. Focussing on one sex only, the study rests on the assumption of the gendered nature of the bereavement experience. However, the aim of the study is not to pinpoint gender differences in grief and mourning, but to shed light on women's ways of dealing with the loss of a loved one in a social context. Furthermore, the study focuses on a certain kind of loss: the death of an elderly parent. Due to the growth in life expectancy, this has presumably become the most typical type of bereavement in contemporary, ageing societies; most of the population will face the death of a parent as they reach the middle years of the life course. The data for this study were gathered in interviews, in which the interviewees were invited to tell a narrative of their bereavement. Narrative constitutes a central concept in this study. It refers to a particular form of talk, which is organised around consequential events, but there are also other, deeper layers that have been added to this concept. Several scholars see narratives as the most important way in which we make sense of experience. Personal narratives provide rich material for mapping the interconnections between individual and culture. As a form of thought, narrative marries singular circumstances with shared expectations and understandings that are learned through participation in a specific culture (Garro & Mattingly 2000). This study attempts to capture the cultural dimension of narrative with the concept of 'script', which originates in cognitive science (Schank & Abelson 1977) and has recently been adopted in narratology (Herman 2002). Script refers to a data structure that informs how events usually unfold in certain situations. Scripts are used in interpreting events and representing them verbally to others. They are based on dominant forms of knowledge that vary according to time and place. The questions posed in this study are the following: What kinds of experiences do bereaved daughters narrate? What kinds of cultural scripts do they employ as they attempt to make sense of these experiences? How are these scripts used in their narratives? It became apparent that for most of the daughters interviewed in this study, the single most important part of the bereavement narrative was forming an account of how and why the parent died. They produced lengthy and detailed descriptions of the last stage of the parent's life, in contrast with the rest of the interview. These stories took their start from a turn in the parent's physical condition, from which the dying process could in retrospect be seen to have started, and which often took place several years before the death. In addition, the daughters also talked about their grief reactions and how they have adjusted to life without the deceased parent. The ways in which the last stage of life was told reflect not only the characteristic features of late modernity but also processes of marginalisation and exclusion. The revivalist script and the medical script, identified by Clive Seale as the dominant, competing models for dying well in late modern societies, were not widely utilised in the narratives; they could only be applied in situations in which the parent had died from cancer and at a somewhat younger age than average. Death that took place in deep old age was told in a different way. The lack of positive models for narrating this kind of death was acknowledged in the study. This can be seen as a symptom of the societal devaluing of the deaths of older people, and it also affects the daughters' accounts of their grief. Several daughters told about situations in which their loss, although subjectively experienced, was nonetheless denied by other people.

Relevance: 20.00%

Publisher:

Abstract:

Sequence design and resource allocation for a symbol-asynchronous, chip-synchronous code division multiple access (CDMA) system are considered in this paper. A simple lower bound on the minimum sum power required for a non-oversized system, based on the best achievable for a non-spread system, and an analogous upper bound on the sum rate are first summarised. Subsequently, an algorithm of Sundaresan and Padakandla is shown to achieve the lower bound on minimum sum power (respectively, the upper bound on sum rate). Analogously to the synchronous case, by splitting oversized users in a system with processing gain N, a system with no oversized users is easily obtained, and the lower bound on sum power (respectively, the upper bound on sum rate) is shown to be achieved using N orthogonal sequences. The total number of splits is at most N - 1.
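The abstract does not specify which N orthogonal sequences are used to meet the bounds after splitting. A Walsh-Hadamard set (available whenever the processing gain N is a power of two) is one standard choice; the sketch below, given purely as an illustration, constructs such a set.

```python
import numpy as np


def walsh_hadamard_sequences(n):
    """Return an (n x n) matrix whose rows are mutually orthogonal,
    unit-energy +/-1 spreading sequences (Sylvester construction).
    n must be a power of two.
    """
    if n <= 0 or n & (n - 1) != 0:
        raise ValueError("n must be a positive power of two")
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h / np.sqrt(n)


S = walsh_hadamard_sequences(8)
assert np.allclose(S @ S.T, np.eye(8))  # rows are orthonormal
```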

Relevance: 20.00%

Publisher:

Abstract:

This research is connected with an education development project for the four-year officer education programme at the National Defence University. In this curriculum, physics is studied in two alternative course plans, namely scientific and general. Observations connected to the latter, e.g. student feedback and learning outcomes, indicated that action was needed to support the course. The reform work was focused on the production of aligned, course-related instructional material; the learning material project produced a customized textbook set for the students of the general basic physics course. The research adapts phases that are typical of Design-Based Research (DBR). It analyses the feature requirements for a physics textbook aimed at a specific sector and the frames supporting instructional material development, and summarizes the experiences gained in the learning material project when the selected frames were applied. The quality of instructional material is an essential part of qualified teaching. The goal of instructional material customization is to increase the product's customer-centric nature and to enhance its function as a support medium for the learning process. Textbooks are still one of the core elements in physics teaching; the idea of a textbook will remain, but its form and appearance may change according to the prevailing technology. The work deals with substance-connected frames (the demands on a physics textbook from the PER viewpoint, quality thinking in educational material development), frames of university pedagogy, and instructional material production processes. A wide knowledge and understanding of different frames is useful in development work, if the frames are utilized to aid inspiration without limiting new reasoning and new kinds of models. Applying customization even in the use of the frames supports creative and situation-aware design and diminishes the gap between theory and practice. Generally, physics teachers produce their own supplementary instructional material, and even though customization thinking is not unknown, the threshold to produce an entire textbook might be high. Although the observations here are from the general physics course at the NDU, the research also gives tools for development in other discipline-related educational contexts. This research is an example of instructional material development work, together with the questions it uncovers, and presents thoughts on when textbook customization is rewarding. At the same time, the research aims to further creative customization thinking in instruction and development. Key words: Physics textbook, PER (Physics Education Research), Instructional quality, Customization, Creativity