942 results for Sun: spicules
Abstract:
In this article, a novel pressureless solid-liquid reaction method is presented for the preparation of yttrium disilicate (γ-Y2Si2O7). Single-phase γ-Y2Si2O7 powder was synthesized by calcination of SiO2 and Y2O3 powders with the addition of LiYO2 at 1400 °C for 4 h. The addition of LiYO2 significantly decreased the synthesis temperature, shortened the calcination time, and enhanced the stability of γ-Y2Si2O7. The sintering of these powders in air and O2 was studied by means of a thermomechanical analyzer. It is shown that the γ-Y2Si2O7 sintered in oxygen had a faster densification rate and a higher density than that sintered in air. Furthermore, single-phase γ-Y2Si2O7 with a density of 4.0 g/cm3 (99% of the theoretical density) was obtained by pressureless sintering at 1400 °C for 2 h in oxygen. Microstructures of the sintered samples were studied by scanning electron microscopy.
Abstract:
The cyclic-oxidation behavior of Ti3SiC2-base material was studied at 1100°C in air. Scale spallation and weight loss were not observed in the present tests, and the weight gain simply continued as long as the experiments ran. The present results demonstrated that the scale growth on Ti3SiC2-base material obeyed a parabolic rate law up to 20 cycles, then changed to a linear rate law with further increasing cycles. The scales formed on the Ti3SiC2-base material were composed of an inward-growing, fine-grained mixture of TiO2 + SiO2 and an outward-growing, coarse-grained TiO2. Theoretical calculations show that the mismatch in thermal expansion coefficients between the inner scale and the Ti3SiC2-base matrix is small. The outer TiO2 layer was under very low compressive stress, while the inner TiO2 + SiO2 layer was under tensile stress during cooling. Scale spallation is, therefore, not expected, and the scale formed on Ti3SiC2-base material is adherent and resistant to cyclic oxidation.
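As an aside on how such kinetics are typically quantified, the following is a minimal Python sketch of fitting a parabolic rate constant to early-cycle weight-gain data and a linear rate constant to later cycles. The data values and the resulting rate constants are invented placeholders, not numbers from the study.

```python
import numpy as np

# Hypothetical cyclic-oxidation weight-gain data (mg/cm^2) per cycle;
# the values are illustrative only, not measurements from the study.
cycles = np.arange(1, 41)
weight_gain = np.where(
    cycles <= 20,
    np.sqrt(0.05 * cycles),                              # parabolic regime
    np.sqrt(0.05 * 20) + 0.01 * (cycles - 20),           # linear regime
)

# Parabolic fit: delta_w^2 = kp * t over the first 20 cycles
kp = np.polyfit(cycles[:20], weight_gain[:20] ** 2, 1)[0]

# Linear fit: delta_w = kl * t + c over the remaining cycles
kl, c = np.polyfit(cycles[20:], weight_gain[20:], 1)

print(f"parabolic rate constant kp ~ {kp:.3g} (mg/cm^2)^2 per cycle")
print(f"linear rate constant   kl ~ {kl:.3g} mg/cm^2 per cycle")
```

A change of slope in the fitted delta_w^2 versus cycle plot is the usual signature of the parabolic-to-linear transition described above.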
Abstract:
4-Bromomethylcoumarins (1) reacted with sodium azide in aqueous acetone to give 4-azidomethylcoumarins (2), which underwent 1,3-dipolar cycloaddition with acetylenic dipolarophiles to give triazoles (3). These triazoles (3) have been found to exhibit interesting variations in the chemical shifts of the C-3-H and C-4-methylene protons. Protonation studies indicate that the shielding of the C-3-H of the coumarin is due to the π-electrons of the triazole ring; this is further supported by diffraction and computational studies.
Abstract:
The cyclically varying magnetic field of the Sun is believed to be produced by the hydromagnetic dynamo process. We first summarize the relevant observational data pertaining to sunspots and the solar cycle. Then we review the basic principles of MHD needed to develop dynamo theory. This is followed by a discussion of how bipolar sunspots form due to the magnetic buoyancy of flux tubes formed at the base of the solar convection zone. Following this, we come to the heart of dynamo theory. After summarizing the basic ideas of a turbulent dynamo and the basic principles of its mean-field formulation, we present the famous dynamo wave solution, which was supposed to provide a model for the solar cycle. Finally, we point out how a flux transport dynamo can circumvent some of the difficulties associated with the older dynamo models.
Abstract:
Accurate characterization and reporting of organic photovoltaic (OPV) device performance remains one of the important challenges in the field. The large spread among the efficiencies of devices with the same structure reported by different groups is caused to a significant extent by the different procedures and equipment used during testing. The present article addresses this issue by offering a new method of device testing that combines a "suitcase sample" approach with outdoor testing, which limits the diversity of the equipment, and a strict measurement protocol. A round-robin outdoor characterization of roll-to-roll coated OPV cells and modules conducted among 46 laboratories worldwide is presented, in which the samples and the testing equipment were integrated in a compact suitcase that served both as a sample transportation tool and as a holder and test equipment during testing. In addition, internet-based coordination via plasticphotovoltaics.org allowed fast and efficient communication among participants and provided a controlled reporting format for the results that eased the analysis of the data. The reported deviations among the laboratories were limited to 5% when compared to the Si reference device integrated in the suitcase and were up to 8% when calculated using the local irradiance data. This method therefore offers a fast, cheap and efficient tool for sample sharing and testing that allows outdoor measurements of OPV devices to be conducted in a reproducible manner.
Abstract:
Carbon nanotubes (CNTs) have emerged as promising candidates for biomedical x-ray devices and other applications of field emission. CNTs grown/deposited in a thin film are used as cathodes for field emission. In spite of the good performance of such cathodes, the procedure to estimate the device current is not straightforward and the required insight towards design optimization is not well developed. In this paper, we report an analysis aided by a computational model and experiments by which the process of evolution and self-assembly (reorientation) of CNTs is characterized and the device current is estimated. The modeling approach involves two steps: (i) a phenomenological description of the degradation and fragmentation of CNTs and (ii) a mechanics based modeling of electromechanical interaction among CNTs during field emission. A computational scheme is developed by which the states of CNTs are updated in a time incremental manner. Finally, the device current is obtained by using the Fowler–Nordheim equation for field emission and by integrating the current density over computational cells. A detailed analysis of the results reveals the deflected shapes of the CNTs in an ensemble and the extent to which the initial state of geometry and orientation angles affect the device current. Experimental results confirm these effects.
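To make the final step described above concrete, here is a minimal Python sketch of evaluating the Fowler-Nordheim current density cell by cell and summing to a device current. The work function, applied field, field-enhancement factors and cell area are illustrative assumptions, not parameters from the reported model.

```python
import numpy as np

# Standard Fowler-Nordheim constants (simplified form, no image-charge correction)
A_FN = 1.541434e-6   # A eV V^-2
B_FN = 6.830890e9    # eV^(-3/2) V m^-1

def fowler_nordheim_j(E_local, phi=4.8):
    """Emission current density (A/m^2) for local field E_local (V/m)
    and work function phi (eV)."""
    return (A_FN / phi) * E_local**2 * np.exp(-B_FN * phi**1.5 / E_local)

# Hypothetical per-cell local fields: the applied macroscopic field times a
# field-enhancement factor that would depend on CNT tip geometry/orientation.
E_applied = 5e6                                   # V/m, placeholder
beta = np.random.uniform(300, 1200, size=100)     # illustrative enhancement factors
cell_area = 1e-10                                 # m^2 per computational cell, placeholder

# Device current = sum of current density times area over all computational cells
I_device = np.sum(fowler_nordheim_j(beta * E_applied) * cell_area)
print(f"estimated device current ~ {I_device:.3e} A")
```

In the actual model the enhancement factors would be updated at each time increment as the CNTs deflect and fragment, which is what links the mechanics step to the emitted current.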
Abstract:
Lateral collisions between heavy road vehicles and passenger trains at level crossings, and the associated derailments, are serious safety issues. This paper presents a detailed investigation of the dynamic responses and derailment mechanisms of trains under lateral impact using a multi-body dynamics simulation method. The formulation of a three-dimensional dynamic model of a passenger train running on a ballasted track subject to lateral impact from a road truck is presented. This model is shown, through numerical examples, to predict derailment due to wheel-climb and car-body overturning mechanisms. Sensitivities of the lateral stability and derailment of the train to the truck speed and mass, wheel/rail friction and the train suspension are reported. It is shown that improvements to the design of train suspensions, including secondary and inter-vehicle lateral dampers, have a higher potential to mitigate the severity of collision-induced derailments.
Abstract:
Free software is viewed as a revolutionary and subversive practice, and in particular has dealt a strong blow to the traditional conception of intellectual property law (although in its current form it could be considered a 'hack' of IP rights). However, other (capitalist) areas of law have been swift to embrace free software, or at least to incorporate it into their own tenets. One area in particular is that of competition (antitrust) law, which itself has long been in theoretical conflict with intellectual property, due to the restriction on competition inherent in the grant of 'monopoly' rights by copyrights, patents and trademarks. This contribution will examine how competition law has approached free software by examining instances in which courts have had to deal with such initiatives, for instance in the Oracle/Sun Microsystems merger, and the implications that these decisions have for free software initiatives. The presence or absence of corporate involvement in initiatives will be an important factor in this investigation, with it being posited that true instances of 'commons-based peer production' can still subvert the capitalist system, including perplexing its laws beyond intellectual property.
Abstract:
Recognized around the world as a powerful beacon for freedom, hope, and opportunity, the Statue of Liberty's light is not just metaphorical: her dramatic illumination is a perfect example of American ingenuity and engineering. Since the statue's installation in New York Harbor in 1886, lighting engineers and designers had struggled to illuminate the 150-foot copper-clad monument in a manner becoming an American icon. It took the thoughtful and creative approach of Howard Brandston-a legend in his own right-to solve this lighting challenge. In 1984, the designer was asked to give the statue a much-needed lighting makeover in preparation for its centennial. In order to avoid the shortcomings of previous attempts, he studied the monument from every angle and in all lighting conditions, discovering that it looked best in the light of dawn. Brandston determined that he would need 'one lamp to mimic the morning sun and one lamp to mimic the morning sky.' Learning that no existing lamps could simulate these conditions, Brandston partnered with General Electric to develop two new metal halide products. With only a short time for R&D, a team of engineers at GE's Nela Park laboratories assembled a 'top secret' testing room dedicated to the Statue of Liberty project. After nearly two years of work to perfect the new lamps, the 'dawn's early light' effect was finally achieved just days before the centennial celebrations were to take place in 1986. 'It was truly a labor of love,' he recalls.
Abstract:
Derailments due to lateral collisions between heavy road vehicles and passenger trains at level crossings (LCs) are serious safety issues. A variety of countermeasures in terms of traffic laws, communication technology and warning devices are used for minimising LC accidents; however, innovative civil infrastructure solutions are rare. This paper presents a study of the efficacy of a guard rail system (GRS) in minimising the derailment potential of trains laterally impacted by heavy road vehicles at LCs. For this purpose, a three-dimensional dynamic model of a passenger train running on a ballasted track fitted with guard rails, subject to lateral impact from a road truck, is formulated. This model is capable of predicting lateral collision-induced derailments with and without the GRS. Based on dynamic simulations, the derailment prevention mechanism of the GRS is illustrated. Sensitivities of the efficacy of the GRS to its key parameters, such as the flangeway width, the installation height and the contact friction, are reported. It is shown that guard rails can enhance derailment safety against lateral impacts at LCs.
Abstract:
Aerosol particles in the atmosphere are known to significantly influence ecosystems, to change air quality and to exert negative health effects. Atmospheric aerosols influence climate by cooling the atmosphere and the underlying surface through scattering of sunlight, by warming the atmosphere through absorption of sunlight and of thermal radiation emitted by the Earth's surface, and by acting as cloud condensation nuclei. Aerosols are emitted from both natural and anthropogenic sources. Depending on their size, they can be transported over significant distances, while undergoing considerable changes in their composition and physical properties. Their lifetime in the atmosphere varies from a few hours to a week. New particle formation is a result of gas-to-particle conversion. Once formed, atmospheric aerosol particles may grow by condensation or coagulation, or be removed by deposition processes. In this thesis we describe analyses of air masses, meteorological parameters and synoptic situations to reveal conditions favourable for new particle formation in the atmosphere. We studied the concentration of ultrafine particles in different types of air masses, and the role of atmospheric fronts and cloudiness in the formation of atmospheric aerosol particles. The dominant role of Arctic and Polar air masses in causing new particle formation was clearly observed at Hyytiälä, Southern Finland, during all seasons, as well as at other measurement stations in Scandinavia. In all seasons and on a multi-year average, the Arctic and North Atlantic areas were the sources of nucleation mode particles. In contrast, concentrations of accumulation mode particles and condensation sink values in Hyytiälä were highest in continental air masses arriving at Hyytiälä from Eastern Europe and Central Russia. The most favourable situation for new particle formation during all seasons was cold-air advection after cold-front passages. Such a period could last a few days until the next front reached Hyytiälä. The frequency of aerosol particle formation relates to the frequency of days with low cloud amount in Hyytiälä. Cloudiness of less than 5 octas is one of the factors favouring new particle formation. Cloudiness above 4 octas appears to be an important factor preventing particle growth, due to the reduction of solar radiation, which is one of the important meteorological parameters in atmospheric particle formation and growth. Keywords: Atmospheric aerosols, particle formation, air mass, atmospheric front, cloudiness
Abstract:
An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task which has lacked a robust method until now. The methods are based on the solid foundation of statistical orbital inversion, properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a log-linear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys, which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. Therefore, the aim of identification is to find a set of orbital elements that reproduces the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up. Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible effects due to non-gravitational forces in the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods developed are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages, typically spanning several apparitions, have so far been found among designated observation sets each spanning less than 48 hours.
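A toy sketch of why such linking can stay near O(n log n): if each observation set is reduced to a low-dimensional comparison point, candidate linkages can be found with a tree query instead of comparing all pairs. The three-dimensional "addresses" and the search radius below are arbitrary placeholders, not the dimensionality reduction actually used in the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical: each observation set reduced to a low-dimensional "address"
# (e.g. a few comparison coordinates derived from its orbital inversion).
# The coordinates here are random placeholders, purely for illustration.
rng = np.random.default_rng(0)
n_sets = 10_000
addresses = rng.uniform(0.0, 1.0, size=(n_sets, 3))

# Building a k-d tree and querying neighbours within a tolerance costs roughly
# O(n log n) in typical cases, instead of the O(n^2) cost of testing every pair.
tree = cKDTree(addresses)
candidate_pairs = tree.query_pairs(r=0.01)   # candidate linkages to verify with full orbital inversion

print(f"{len(candidate_pairs)} candidate linkages out of "
      f"{n_sets * (n_sets - 1) // 2} possible pairs")
```

Each surviving candidate pair would still need to be confirmed by checking that a single orbit reproduces both observation sets within the observational uncertainty.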
Abstract:
Fusion energy is a clean and safe solution to the intricate question of how to produce non-polluting and sustainable energy for a constantly growing population. The fusion process does not result in any harmful waste or greenhouse gases, since a small amount of helium is the only by-product produced when using the hydrogen isotopes deuterium and tritium as fuel. Moreover, deuterium is abundant in seawater and tritium can be bred from lithium, a common metal in the Earth's crust, rendering the fuel reservoirs practically bottomless. Due to its enormous mass, the Sun has been able to use fusion as its main energy source ever since it was born, but here on Earth we must find other means to achieve the same. Inertial fusion involving powerful lasers and thermonuclear fusion employing extreme temperatures are examples of successful methods; however, these have yet to produce more energy than they consume. In thermonuclear fusion, the fuel is held inside a tokamak, which is a doughnut-shaped chamber with strong magnets wrapped around it. Once the fuel is heated up, it is controlled with the help of these magnets, since the required temperatures (over 100 million degrees C) separate the electrons from the nuclei, forming a plasma. Once fusion reactions occur, excess binding energy is released as energetic neutrons, which are absorbed in water in order to produce steam that runs turbines. Keeping the power losses from the plasma low, thus allowing a high number of reactions, is a challenge. Another challenge is related to the reactor materials, since the confinement of the plasma particles is not perfect, resulting in particle bombardment of the reactor walls and structures. Material erosion and activation as well as plasma contamination are expected. In addition, the high-energy neutrons will cause radiation damage in the materials, causing, for instance, swelling and embrittlement. In this thesis, the behaviour of materials situated in a fusion reactor was studied using molecular dynamics simulations. Simulations of processes in the next-generation fusion reactor ITER include the reactor materials beryllium, carbon and tungsten as well as the hydrogen isotopes of the plasma. This means that interaction models, i.e. interatomic potentials, for this complicated quaternary system are needed. The task of finding such potentials is nonetheless nearly complete, since models for the beryllium-carbon-hydrogen interactions were constructed in this thesis and, as a continuation of that work, a beryllium-tungsten model is under development. These potentials are combinable with the earlier tungsten-carbon-hydrogen ones. The potentials were used to explain the chemical sputtering of beryllium under deuterium plasma exposure. During experiments, a large fraction of the sputtered beryllium atoms were observed to be released as BeD molecules, and the simulations identified the swift chemical sputtering mechanism, previously not believed to be important in metals, as the underlying mechanism. Radiation damage in the reactor structural materials vanadium, iron and iron chromium, as well as in the wall material tungsten and in the mixed alloy tungsten carbide, was also studied in this thesis. Interatomic potentials for vanadium, tungsten and iron were modified to be better suited for simulating the collision cascades that form during particle irradiation, and the potential features affecting the resulting primary damage were identified.
Including the often neglected electronic effects in the simulations was also shown to have an impact on the damage. With proper tuning of the electron-phonon interaction strength, experimentally measured quantities related to ion-beam mixing in iron could be reproduced. The damage in tungsten carbide alloys showed elemental asymmetry, as the major part of the damage consisted of carbon defects. On the other hand, modelling the damage in the iron chromium alloy, essentially representing steel, showed that small additions of chromium do not noticeably affect the primary damage in iron. Since a complete assessment of the response of a material in a future full-scale fusion reactor is not achievable using only experimental techniques, molecular dynamics simulations are of vital help. This thesis has not only provided insight into complicated reactor processes and improved current methods, but also offered tools for further simulations. It is therefore an important step towards making fusion energy more than a future goal.
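For readers unfamiliar with the method, below is a minimal, generic molecular-dynamics sketch in Python: velocity-Verlet integration with a Lennard-Jones pair potential. It only illustrates the basic integration loop; the simulations in the thesis use far more sophisticated many-body potentials (such as the beryllium-carbon-hydrogen models mentioned above), electronic effects, and cascade-specific setups.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces; a generic stand-in for the many-body
    interatomic potentials actually used in the thesis."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = np.dot(r, r)
            inv6 = (sigma**2 / d2) ** 3
            # Force along the separation vector from dU/dr of 4*eps*(x^12 - x^6)
            fij = 24 * eps * (2 * inv6**2 - inv6) / d2 * r
            f[i] += fij
            f[j] -= fij
    return f

# Small cubic lattice of atoms (reduced units, illustrative values only)
grid = np.arange(4) * 1.5
pos = np.array([[x, y, z] for x in grid for y in grid for z in grid])
vel = np.zeros_like(pos)
dt, mass = 1e-3, 1.0

# Velocity-Verlet time integration
forces = lj_forces(pos)
for step in range(100):
    pos += vel * dt + 0.5 * forces / mass * dt**2
    new_forces = lj_forces(pos)
    vel += 0.5 * (forces + new_forces) / mass * dt
    forces = new_forces
```

A cascade simulation would start from such a loop but give one atom (the primary knock-on) a large initial velocity and track the resulting defects.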
Abstract:
New stars form in dense interstellar clouds of gas and dust called molecular clouds. The actual sites where the process of star formation takes place are the dense clumps and cores deeply embedded in molecular clouds. The details of the star formation process are complex and not completely understood. Thus, determining the physical and chemical properties of molecular cloud cores is necessary for a better understanding of how stars are formed. Some of the main features of the origin of low-mass stars, like the Sun, are already relatively well-known, though many details of the process are still under debate. The mechanism through which high-mass stars form, on the other hand, is poorly understood. Although it is likely that the formation of high-mass stars shares many properties similar to those of low-mass stars, the very first steps of the evolutionary sequence are unclear. Observational studies of star formation are carried out particularly at infrared, submillimetre, millimetre, and radio wavelengths. Much of our knowledge about the early stages of star formation in our Milky Way galaxy is obtained through molecular spectral line and dust continuum observations. The continuum emission of cold dust is one of the best tracers of the column density of molecular hydrogen, the main constituent of molecular clouds. Consequently, dust continuum observations provide a powerful tool to map large portions across molecular clouds, and to identify the dense star-forming sites within them. Molecular line observations, on the other hand, provide information on the gas kinematics and temperature. Together, these two observational tools provide an efficient way to study the dense interstellar gas and the associated dust that form new stars. The properties of highly obscured young stars can be further examined through radio continuum observations at centimetre wavelengths. For example, radio continuum emission carries useful information on conditions in the protostar+disk interaction region where protostellar jets are launched. In this PhD thesis, we study the physical and chemical properties of dense clumps and cores in both low- and high-mass star-forming regions. The sources are mainly studied in a statistical sense, but also in more detail. In this way, we are able to examine the general characteristics of the early stages of star formation, cloud properties on large scales (such as fragmentation), and some of the initial conditions of the collapse process that leads to the formation of a star. The studies presented in this thesis are mainly based on molecular line and dust continuum observations. These are combined with archival observations at infrared wavelengths in order to study the protostellar content of the cloud cores. In addition, centimetre radio continuum emission from young stellar objects (YSOs; i.e., protostars and pre-main sequence stars) is studied in this thesis to determine their evolutionary stages. 
The main results of this thesis are as follows: i) filamentary and sheet-like molecular cloud structures, such as infrared dark clouds (IRDCs), are likely to be caused by supersonic turbulence, but their fragmentation at the scale of cores could be due to gravo-thermal instability; ii) core evolution in the Orion B9 star-forming region appears to be dynamic, and the role played by slow ambipolar diffusion in the formation and collapse of the cores may not be significant; iii) the study of the R CrA star-forming region suggests that the centimetre radio emission properties of a YSO are likely to change with its evolutionary stage; iv) the IRDC G304.74+01.32 contains candidate high-mass starless cores which may represent the very first steps of high-mass star and star cluster formation; v) SiO outflow signatures are seen in several high-mass star-forming regions, which suggests that high-mass stars form in a similar way to their low-mass counterparts, i.e., via disk accretion. The results presented in this thesis provide constraints on the initial conditions and early stages of both low- and high-mass star formation. In particular, this thesis presents several observational results on the early stages of clustered star formation, which is the dominant mode of star formation in our Galaxy.
Local numerical modelling of magnetoconvection and turbulence - implications for mean-field theories
Abstract:
During the last decades, mean-field models, in which large-scale magnetic fields and differential rotation arise due to the interaction of rotation and small-scale turbulence, have been enormously successful in reproducing many of the observed features of the Sun. In the meantime, new observational techniques, most prominently helioseismology, have yielded invaluable information about the interior of the Sun. This new information, however, imposes strict conditions on mean-field models. Moreover, most of the present mean-field models depend on knowledge of the small-scale turbulent effects that give rise to the large-scale phenomena. In many mean-field models these effects are prescribed in an ad hoc fashion due to the lack of this knowledge. With large enough computers it would be possible to solve the MHD equations numerically under stellar conditions; however, the problem is larger by several orders of magnitude than what present-day and any foreseeable computers can handle. In our view, a combination of mean-field modelling and local 3D calculations is a more fruitful approach. The large-scale structures are well described by global mean-field models, provided that the small-scale turbulent effects are adequately parameterized. The latter can be achieved by performing local calculations, which allow a much higher spatial resolution than can be achieved in direct global calculations. In the present dissertation, three aspects of mean-field theories and models of stars are studied. Firstly, the basic assumptions of different mean-field theories are tested with calculations of isotropic turbulence and of hydrodynamic, as well as magnetohydrodynamic, convection. Secondly, even if mean-field theory is unable to give the required transport coefficients from first principles, it is in some cases possible to compute these coefficients from 3D numerical models in a parameter range that can be considered to describe the main physical effects in an adequately realistic manner. In the present study, the Reynolds stresses and turbulent heat transport, responsible for the generation of differential rotation, were determined along the mixing-length relations describing convection in stellar structure models. Furthermore, the alpha-effect and magnetic pumping due to turbulent convection in the rapid-rotation regime were studied. The third aim of the present study is to apply the local results in mean-field models, a task we begin by applying the results concerning the alpha-effect and turbulent pumping in mean-field models describing the solar dynamo.
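For orientation, the standard (textbook) mean-field induction equation in which these turbulent transport coefficients appear is, in its simplest isotropic form,

$$
\frac{\partial \overline{\mathbf{B}}}{\partial t}
  = \nabla \times \Big( \overline{\mathbf{U}} \times \overline{\mathbf{B}}
  + \alpha \overline{\mathbf{B}}
  + \boldsymbol{\gamma} \times \overline{\mathbf{B}}
  - (\eta + \eta_{\mathrm{t}})\, \nabla \times \overline{\mathbf{B}} \Big),
$$

where $\overline{\mathbf{U}}$ and $\overline{\mathbf{B}}$ are the mean velocity and magnetic fields, $\alpha$ is the alpha-effect, $\boldsymbol{\gamma}$ the turbulent pumping velocity, and $\eta$ and $\eta_{\mathrm{t}}$ the molecular and turbulent diffusivities. This is general background rather than a result of the dissertation; it simply makes explicit where the coefficients computed from the local 3D calculations ($\alpha$, $\boldsymbol{\gamma}$, $\eta_{\mathrm{t}}$) feed into the global mean-field models.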