932 results for inertial transformations
Abstract:
This paper focuses on a vibrating system forced by a rotating unbalance and coupled to a tuned mass damper (TMD). The analysis of the dynamic response of the entire system is used to define the parameters of the device that achieve optimal damping. The inertial forcing due to the rotating unbalance depends quadratically on the forcing frequency and leads to optimal tuning parameters that differ from the classical values obtained for pure harmonic forcing. Analytical results demonstrate that the frequency and damping ratios, as functions of the mass parameter, should be higher than the classical optimal parameters. The analytical study is carried out for the undamped primary system and extended numerically to the damped primary system. We show that, for practical applications, proper TMD tuning achieves a reduction in the steady-state response of about 20% with respect to the response of a classically tuned damper. Copyright © 2015 by ASME.
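For reference, the classical Den Hartog optima for pure harmonic force excitation of an undamped primary system, relative to which the abstract reports higher unbalance-forced values, follow directly from the mass ratio. A minimal sketch (the paper's own unbalance-forcing formulas are not given in the abstract, so only the classical baseline is shown):

```python
import math

def classical_tmd_tuning(mu):
    """Den Hartog's classical optimal TMD parameters for harmonic force
    excitation of an undamped primary system.

    mu -- mass ratio m_tmd / m_primary
    Returns (frequency ratio, damping ratio)."""
    f_opt = 1.0 / (1.0 + mu)                                   # optimal tuning frequency ratio
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))   # optimal TMD damping ratio
    return f_opt, zeta_opt

# Example: a TMD carrying 5% of the primary mass.
f, z = classical_tmd_tuning(0.05)
```

Per the abstract, the corresponding optima for rotating-unbalance (frequency-squared) forcing lie above both of these classical values.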
Abstract:
Fungi have a fundamental role in carbon and nutrient transformations in the acidic soils of boreal regions, such as peatlands, where large amounts of carbon (C) and nutrients are stored in peat, the pH is relatively low and the nutrient uptake of trees is highly dependent on mycorrhizae. The aim of this thesis was to examine nitrogen (N) transformations and the availability of dissolved N compounds in forestry-drained peatlands, to compare the fungal community biomass and structure at various peat N levels, to investigate the growth of ectomycorrhizal (ECM) fungi under variable P and K availability and to assess how ECM fungi affect N transformations. Both field and laboratory experiments were carried out. The peat N concentration did not affect the soil fungal community structure within a site. Phosphorus (P) and potassium (K) deficiency of the trees, as well as the degree of decomposition and the dissolved organic nitrogen (DON) concentration of the peat, were shown to affect the fungal community structure and ECM biomass, highlighting the complexity of the belowground system in drained peatlands. The biomass of extramatrical mycorrhizal mycelia (EMM) was enhanced by P and/or K deficiency of the trees, and ECM biomass in the roots was increased by P deficiency. Thus, PK deficiency in drained peatlands may increase the allocation of C by the tree to ECM fungi. It was also observed that fungi can alter N mineralization processes in the rhizosphere, depending on the fungal species and the fertility level of the peat. Gross N mineralization did not vary, but the net N mineralization rate significantly increased along the N gradient in both field and laboratory experiments. Gross N immobilization also significantly increased when the peat N concentration increased. Nitrification was hardly detectable in either field or laboratory experiments. During the growing season, dissolved inorganic N (DIN) fluctuated much more than the relatively stable DON.
Special methodological challenges associated with sampling and analysis in microbial studies on peatlands are discussed.
Abstract:
This chapter surveys the landscape of mobile dating and hookup apps, understood as media technologies, as businesses, and as sites of social practice. It situates the discussion within the broader contexts of technologically mediated dating and digital sexual cultures. By outlining a number of methodological approaches and data sources that can be used in the study of dating and hookup apps, it equips the reader with tools and approaches for investigating hookup app culture in ways that go beyond "media panics", the familiar combination of moral panics and media effects that is so prevalent in discussions of sexuality in digital media.
Abstract:
The NUVIEW software package allows skeletal models of any double-helical nucleic acid molecule to be displayed on a graphics monitor, and various rotation, translation and scaling transformations to be applied interactively through the keyboard. The skeletal model is generated by connecting any pair of representative points, one from each of the bases in the base pair. In addition to these manipulations, base residues can be identified using a locator, and the distance between any pair of residues can be obtained. A sequence-based colour-coded display allows easy identification of sequence repeats, such as runs of adenines. Real-time interactive manipulation of such skeletal models for large DNA/RNA double helices can be used to trace the path of the nucleic acid chain in three dimensions and hence gives a better idea of its topology, the location of linear or curved regions, distances between far-apart regions in the sequence, etc. A physical picture of these features will assist in understanding the relationship between base sequence, structure and biological function in nucleic acids.
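The interactive rotations, translations and scalings described above are, in a typical graphics pipeline, 4x4 homogeneous matrix transforms applied to the model's representative points. A minimal sketch of that idea (the function names are illustrative, not NUVIEW's actual API):

```python
import math

def rotation_z(theta):
    """4x4 homogeneous rotation about the z axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def scaling(k):
    """Uniform scaling about the origin."""
    return [[k, 0, 0, 0], [0, k, 0, 0], [0, 0, k, 0], [0, 0, 0, 1]]

def translation(dx, dy, dz):
    """Translation by (dx, dy, dz)."""
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

def apply(m, point):
    """Apply a 4x4 homogeneous transform to a 3D point (implicit w = 1)."""
    x, y, z = point
    v = (x, y, z, 1.0)
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))

# Rotate a representative point a quarter turn about the helix (z) axis.
p = apply(rotation_z(math.pi / 2), (1.0, 0.0, 5.0))
```

Chaining such matrices (rotation, then scaling, then translation) over every representative point is what makes the keyboard-driven manipulation of a large skeletal model cheap enough to run in real time.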
Abstract:
Dispersing a data object into a set of data shares is a fundamental step in distributed communication and storage systems. Compared with data replication, data dispersal with redundancy saves space and bandwidth. Moreover, dispersing a data object across distinct communication links or storage sites limits adversarial access to the whole data and tolerates the loss of some of the shares. Existing data dispersal schemes have mostly been based on mathematical transformations of the data, which incur high computation overhead. This paper presents a novel data dispersal scheme in which each part of a data object is replicated, without encoding, into a subset of data shares according to combinatorial design theory. In particular, data parts are mapped to points and data shares to lines of a projective plane. Data parts are then distributed to data shares using the point-line incidence relations of the plane, so that certain subsets of data shares collectively possess all data parts. The presented scheme combines combinatorial design theory with an inseparability transformation to achieve secure data dispersal at reduced computation, communication and storage costs. Rigorous formal analysis and an experimental study demonstrate significant cost benefits of the presented scheme in comparison to existing methods.
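As a toy illustration of the point-line incidence idea, consider the smallest projective plane, the order-2 Fano plane (the paper's actual plane order and security transformation are not specified in this abstract). Each share stores unencoded copies of the data parts on one line; the three shares whose lines pass through a common point (a pencil) jointly hold all seven parts:

```python
# Fano plane: 7 points (data parts) and 7 lines (data shares);
# every line has 3 points and every point lies on 3 lines.
FANO_LINES = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]

def make_shares(parts):
    """Share i holds (unencoded) copies of the parts on line i."""
    return [{p: parts[p] for p in line} for line in FANO_LINES]

def recover(shares, indices):
    """Pool the chosen shares and return the union of their parts."""
    pooled = {}
    for i in indices:
        pooled.update(shares[i])
    return pooled

parts = [f"part-{i}" for i in range(7)]
shares = make_shares(parts)

# The three lines through point 0 form a pencil covering all 7 points,
# so those three shares together recover the whole object.
pencil = [i for i, line in enumerate(FANO_LINES) if 0 in line]
recovered = recover(shares, pencil)
```

No single share (3 of 7 parts) reveals the whole object, yet a small, structurally chosen subset of shares does, which is the space/bandwidth trade-off the abstract contrasts with full replication.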
Abstract:
The formation of an ω-Al7Cu2Fe phase during laser cladding of the quasicrystal-forming Al65Cu23.3Fe11.7 alloy on a pure aluminium substrate is reported. This phase is found to nucleate at the periphery of primary icosahedral-phase particles. A large number of ω-phase particles form an envelope around the icosahedral phase. On the outer side, they form an interface with an α-Al solid solution. Detailed transmission electron microscopic observations show that the ω phase exhibits an orientation relationship with the icosahedral phase. Analysis of the experimental results suggests that the ω phase forms by heterogeneous nucleation on the icosahedral phase and grows into the aluminium-rich melt until the supersaturation is exhausted. The microstructural observations are explained in terms of available models of phase transformations.
Abstract:
This study investigates the implications of the introduction of electric lighting systems, building technologies, and theories of worker efficiency for the deep spatial and environmental transformations that occurred within the corporate workplace during the twentieth century. Examining the shift from daylighting strategies to largely artificially lit workplace environments, this paper argues that electric lighting significantly contributed to the architectural rationalization of both office work and the modern office environment. Contesting the historical and critical marginalization of lighting within the discourse of the modern built environment, this study calls for a reassessment of the role of artificial lighting in the development of the modern corporate workplace.
Keywords: daylighting, fluorescent lighting, rationalization, workplace design
Abstract:
The abrasion and slurry erosion behaviour of chromium-manganese iron samples, with chromium (Cr) in the range of about 16-19% and manganese (Mn) at 5 and 10% levels, has been characterized by hardness measurements followed by microstructural examination using optical and scanning electron microscopy. Positron lifetime studies were conducted to understand the influence of defects/microporosity on the microstructure. The samples were heat treated and characterized to understand the structural transformations in the matrix. The data reveal that hardness decreased with an increase in Mn content from 5 to 10% in one case, and with an increase in section size in the other, irrespective of the sample condition. The abrasion and slurry erosion losses increase with increasing section size as well as with increasing Mn content. The positron results show that as hardness increases from the as-cast to the heat-treated sample, the positron trapping rate, and hence the defect concentration, shows the opposite trend, as expected. A good correlation between defect concentration and hardness has thus been observed. These findings also corroborate well with the microstructural features obtained from optical and scanning electron microscopy. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km down to 10 km. Recently, the aim has been to reach scales of 1-4 km operationally. This requires fewer approximations in the model equations, a more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, the schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes.
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions were tested. For longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is crucial, especially if the inner meso-scale model domain is small.
Abstract:
Fusion energy is a clean and safe solution to the intricate question of how to produce non-polluting and sustainable energy for the constantly growing population. The fusion process does not result in any harmful waste or greenhouse gases, since small amounts of helium are the only by-product produced when the hydrogen isotopes deuterium and tritium are used as fuel. Moreover, deuterium is abundant in seawater and tritium can be bred from lithium, a common metal in the Earth's crust, rendering the fuel reservoirs practically bottomless. Due to its enormous mass, the Sun has been able to use fusion as its main energy source ever since it was born, but here on Earth we must find other means to achieve the same. Inertial fusion involving powerful lasers and thermonuclear fusion employing extreme temperatures are examples of successful methods, although these have yet to produce more energy than they consume. In thermonuclear fusion, the fuel is held inside a tokamak, a doughnut-shaped chamber with strong magnets wrapped around it. Once the fuel is heated up, it is controlled with the help of these magnets, since the required temperatures (over 100 million degrees C) separate the electrons from the nuclei, forming a plasma. When fusion reactions occur, excess binding energy is released as energetic neutrons, which are absorbed in water in order to produce steam that runs turbines. Keeping the power losses from the plasma low, thus allowing a high number of reactions, is a challenge. Another challenge is related to the reactor materials, since the confinement of the plasma particles is not perfect, resulting in particle bombardment of the reactor walls and structures. Material erosion and activation, as well as plasma contamination, are expected. In addition, the high-energy neutrons will cause radiation damage in the materials, causing, for instance, swelling and embrittlement.
In this thesis, the behaviour of materials in a fusion reactor was studied using molecular dynamics simulations. Simulations of processes in the next-generation fusion reactor ITER include the reactor materials beryllium, carbon and tungsten, as well as the plasma hydrogen isotopes. This means that interaction models, i.e. interatomic potentials, for this complicated quaternary system are needed. The task of finding such potentials is nonetheless nearly complete, since models for the beryllium-carbon-hydrogen interactions were constructed in this thesis and, as a continuation of that work, a beryllium-tungsten model is under development. These potentials are combinable with the earlier tungsten-carbon-hydrogen ones. The potentials were used to explain the chemical sputtering of beryllium under deuterium plasma exposure. During experiments, a large fraction of the sputtered beryllium atoms were observed to be released as BeD molecules, and the simulations identified the swift chemical sputtering mechanism, previously not believed to be important in metals, as the underlying mechanism. Radiation damage in the reactor structural materials vanadium, iron and iron chromium, as well as in the wall material tungsten and the mixed alloy tungsten carbide, was also studied in this thesis. Interatomic potentials for vanadium, tungsten and iron were modified to be better suited for simulating the collision cascades that form during particle irradiation, and the potential features affecting the resulting primary damage were identified. Including the often neglected electronic effects in the simulations was also shown to have an impact on the damage. With proper tuning of the electron-phonon interaction strength, experimentally measured quantities related to ion-beam mixing in iron could be reproduced. The damage in tungsten carbide alloys showed elemental asymmetry, as the major part of the damage consisted of carbon defects.
On the other hand, modelling the damage in the iron chromium alloy, essentially representing steel, showed that small additions of chromium do not noticeably affect the primary damage in iron. Since a complete assessment of the response of a material in a future full-scale fusion reactor is not achievable using only experimental techniques, molecular dynamics simulations are of vital help. This thesis has not only provided insight into complicated reactor processes and improved current methods, but also offered tools for further simulations. It is therefore an important step towards making fusion energy more than a future goal.
Abstract:
In an estuary, the mixing and dispersion resulting from turbulence and small-scale fluctuations have strong spatio-temporal variability, which cannot be resolved in conventional hydrodynamic models, while some models employ parameterizations suited to large water bodies. This paper presents small-scale diffusivity estimates from high-resolution drifters sampled at 10 Hz for periods of about 4 hours, resolving turbulence and shear diffusivity within a tidal shallow estuary (depth < 3 m). Taylor's diffusion theorem forms the basis of a first-order estimate for the diffusivity scale. Diffusivity varied between 0.001 and 0.02 m2/s during the flood-tide experiment and showed strong dependence (R2 > 0.9) on the horizontal mean velocity within the channel. Enhanced diffusivity caused by shear dispersion, resulting from the interaction of the large-scale flow with the boundary geometries, was observed. Turbulence within the shallow channel showed some similarities with boundary-layer flow, including consistency with the 5/3 slope predicted by Kolmogorov's similarity hypothesis within the inertial subrange. The diffusivities scale locally with a 4/3 power law, following Okubo's scaling, and the length scale follows a 3/2 power law of the time scale. The diffusivity scaling herein suggests that the modelling of small-scale mixing within tidal shallow estuaries can be approached from classical turbulence scaling upon identifying the pertinent parameters.
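The two power laws quoted in the abstract are mutually consistent: if the patch length scale grows as l ∝ t^(3/2), an apparent diffusivity K ∝ l²/t scales as t², which is exactly l^(4/3) (Okubo's law). A minimal numerical check of that consistency (the growth constant is an arbitrary placeholder, not a value from the paper):

```python
import math

C_L = 0.1  # arbitrary placeholder growth constant, l = C_L * t**1.5

def length_scale(t):
    """Patch length scale growing as the 3/2 power of time."""
    return C_L * t ** 1.5

def diffusivity(t):
    """Apparent diffusivity K ~ l^2 / t for a patch of size l(t)."""
    return length_scale(t) ** 2 / t

# Log-log slope of K against l between two times: should equal 4/3.
t1, t2 = 10.0, 1000.0
slope = (math.log(diffusivity(t2)) - math.log(diffusivity(t1))) / (
    math.log(length_scale(t2)) - math.log(length_scale(t1)))
```

The same slope computation applied to the drifter-derived (K, l) pairs is essentially how an observed scaling exponent is compared against Okubo's 4/3 prediction.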
Abstract:
With the level of digital disruption that is affecting businesses around the globe, you might expect high levels of Governance of Enterprise Information and Technology (GEIT) capability within boards. Boards and their senior executives know technology is important: more than 90% of boards and senior executives currently identify technology as essential to their current businesses and to their organization's future. But as few as 16% have sufficient GEIT capability. The Global Centre for Digital Business Transformation's recent research contains strong indicators of the need for change. Despite board awareness of both the likelihood and impact of digital disruption, things digital are still not viewed as a board-level matter in 45% of companies. And it's not just the board: the lack of board attention to technology can be mirrored at the senior executive level as well. When asked about their organization's attitude towards digital disruption, 43% of executives said their business either did not recognise it as a priority or was not responding appropriately. A further 32% were taking a "follower" approach, a potentially risky move, as we will explain. Given all the evidence that boards know information and technology (I&T) is vital, and that they understand the inevitability, impact and speed of digital change and disruption, why are so many boards dragging their heels? Ignoring I&T disruption and refusing to build capability at board level is nothing short of negligence. Too many boards risk flying blind without GEIT capability [2].
To help build decision quality and I&T governance capability, this research:
• Confirms a pressing need to build individual competency and cumulative, across-board capability in governing I&T
• Identifies six factors that have rapidly increased the need, risk and urgency
• Finds that boards may risk not meeting their duty-of-care responsibilities when it comes to I&T oversight
• Highlights barriers to building capability
• Details three GEIT competencies that boards and executives can use for evaluation, selection, recruitment and professional development.
Abstract:
A stereoselective synthesis of the styryl lactone (+)-7-epi-goniofufurone was achieved in high yield via simple transformations from tartaric acid. The key step involves the successive stereoselective reduction of ketones with borohydride and selectride.
Abstract:
The economic, political and social face of Europe has been changing rapidly in the past decades. These changes are unique in the history of Europe, but not without challenges for the nation states. Support for European integration varies among the countries. In order to understand why certain developments or changes are perceived as threatening or as desired by different member countries, we must consider the social representations of the European integration at the national level: how the EU is represented to its citizens in the media and in educational systems, particularly in curricula and textbooks. The current study is concerned with the social representations of the European integration in the curricula and school textbooks of five European countries: France, Britain, Germany, Finland and Sweden. In addition, the first volume of the common Franco-German history textbook was analyzed, since it has been seen as a model for a common European history textbook. As collective representations, values and identities are predominantly mediated and imposed through the media and educational systems, the national curricula and textbooks make an interesting starting point for the study of the European integration and of national and European identities. The social representations theory provides a comprehensive framework for the study of the European integration. By analyzing the curricula and the history and civics textbooks of major educational publishers, the study aimed to demonstrate what is written about the European integration and how it is portrayed: how the European integration is understood, made familiar and concretized in the educational context in the five European countries. To grasp the phenomenon of the European integration in the textbooks in its entirety, it was investigated from various perspectives.
Two methods of content analysis, an automatic analysis with ALCESTE and a more qualitative theory-driven content analysis, were carried out to give a more vivid and multifaceted picture of the object of research. The analysis of the text was complemented with an analysis of the visual material. Drawing on quantitative and qualitative methods, the contents, processes, visual images, transformations and structures of the social representations of the European integration, as well as the communicative styles of the textbooks, were examined. This study showed the divergent social representations of the European integration, anchored in the nation states, in the five member countries of the European Union. The social representations were constructed around different central core elements: French Europe in the French textbooks, Ambivalent Europe in the British textbooks, Influential and Unifying EU in the German textbooks, Enabling and Threatening EU in the Finnish textbooks, Sceptical EU in the Swedish textbooks and EU as a World Model in the Franco-German textbook. Some elements of the representations, such as peace and the economic aspects of European cooperation, were shared by all countries, whereas other elements, such as the ideological, threatening or social components of the phenomenon of European integration, were found more frequently in some countries than in others. The study also demonstrated the linkage between social representations of the EU and national and European identities. The findings of this study are applicable to the study of the European integration, to the study of education, as well as to the social representations theory.
Abstract:
The book presents a reconstruction, interpretation and critical evaluation of the Schumpeterian theoretical approach to socio-economic change. The analysis focuses on the problem of social evolution, on the interpretation of the innovation process and business cycles and, finally, on Schumpeter's optimistic neglect of ecological-environmental conditions as possible factors influencing social-economic change. The author investigates how the Schumpeterian approach describes the process of social and economic evolution, and how the logic of transformations is described, explained and understood in the Schumpeterian theory. The material of the study includes Schumpeter's works written after 1925, a related part of the commentary literature on these works, and a selected part of the related literature on the innovation process, technological transformations and the problem of long waves. Concerning the period after 1925, the Schumpeterian oeuvre is conceived and analysed as a more or less homogeneous corpus of texts. The book is divided into 9 chapters. Chapters 1-2 describe the research problems and methods. Chapter 3 provides a systematic reconstruction of Schumpeter's ideas concerning social and economic evolution. Chapters 4 and 5 focus on the innovation process. Chapters 6 and 7 examine Schumpeter's theory of business cycles. Chapter 8 evaluates Schumpeter's relative neglect of ecological-environmental conditions as possible factors influencing social-economic change. Finally, Chapter 9 draws the main conclusions.