748 results for Do-it-yourself work.


Relevance:

30.00%

Publisher:

Abstract:

The present work describes the tensile flow and work hardening behavior of a high strength 7010 aluminum alloy through constitutive relations. The alloy has been hot rolled by three different cross-rolling schedules. Room temperature tensile properties have been evaluated as a function of tensile axis orientation in the as-hot-rolled as well as peak aged conditions. It is found that both the Ludwigson and a generalized Voce-Bergstrom relation describe the tensile flow behavior of the present alloy in all conditions more adequately than the Hollomon relation. The variation in the Ludwigson fitting parameter correlates well with the microstructural features and the anisotropic contribution of strengthening precipitates in the as-rolled and peak aged conditions, respectively. The hardening rate and the saturation stress of the first Voce-Bergstrom parameter, on the other hand, depend mainly on the crystallographic texture of the specimens. It is further shown that for the peak aged specimens the uniform elongation ($\varepsilon_u$) derived from the Ludwigson relation matches the measured $\varepsilon_u$ well, irrespective of processing and loading directions. However, the Ludwigson fit overestimates $\varepsilon_u$ for the as-rolled specimens. The Hollomon fit, on the other hand, predicts the measured $\varepsilon_u$ of the as-rolled specimens well but severely underestimates $\varepsilon_u$ for the peak aged specimens. In contrast, both relations significantly overestimate the UTS of the as-rolled and the peak aged specimens. The Voce-Bergstrom parameters define the slope of the $\Theta$-$\sigma$ plots in the stage-III regime, where the specimens show a classical linear decrease in hardening rate. Further analysis of the work hardening behavior sheds some light on the effect of texture on dislocation storage and dynamic recovery.
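
For reference, the standard one-dimensional forms of the three flow relations compared above, as usually written in the literature (the paper's generalized Voce-Bergstrom form may carry additional terms), are

$\sigma = K\,\varepsilon^{n}$  (Hollomon)
$\sigma = K_1\,\varepsilon^{n_1} + \exp(K_2 + n_2\,\varepsilon)$  (Ludwigson)
$\sigma = \sigma_s - (\sigma_s - \sigma_0)\,e^{-\varepsilon/\varepsilon_c}$  (one-term Voce)

Since $n_2$ is negative, the Ludwigson exponential term is a low-strain correction that decays toward the Hollomon form at larger strains, consistent with the fitting behaviour described above.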

Relevance:

30.00%

Publisher:

Abstract:

Using idealized one-dimensional Eulerian hydrodynamic simulations, we contrast the behaviour of isolated supernovae with the superbubbles driven by multiple, collocated supernovae. Continuous energy injection via successive supernovae exploding within the hot/dilute bubble maintains a strong termination shock. This strong shock keeps the superbubble over-pressured and drives the outer shock well after it becomes radiative. Isolated supernovae, in contrast, with no further energy injection, become radiative quite early (≲ 0.1 Myr, tens of pc), and stall at scales ≲ 100 pc. We show that isolated supernovae lose almost all of their mechanical energy by 1 Myr, but superbubbles can retain up to ~40 per cent of the input energy in the form of mechanical energy over the lifetime of the star cluster (a few tens of Myr). These conclusions hold even in the presence of realistic magnetic fields and thermal conduction. We also compare various methods for implementing supernova feedback in numerical simulations. For various feedback prescriptions, we derive the spatial scale below which the energy needs to be deposited in order for it to couple to the interstellar medium. We show that a steady thermal wind within the superbubble appears only for a large number (≳ 10^4) of supernovae. For smaller clusters, we expect multiple internal shocks instead of a smooth, dense thermalized wind.
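
The energy bookkeeping above can be illustrated with the textbook scalings the abstract invokes (Sedov-Taylor for an isolated supernova, a Weaver-type wind bubble for continuous injection). The sketch below is an order-of-magnitude toy, not the paper's 1D Eulerian code; the ambient density, cluster lifetime, and supernova count are assumed values.

```python
import numpy as np

# Order-of-magnitude scalings, not the paper's simulation: Sedov-Taylor
# for an isolated SN, Weaver et al. (1977) for a cluster-driven bubble.
E_SN = 1e51            # erg per supernova
n_H  = 1.0             # ambient density, cm^-3 (assumed)
rho  = n_H * 1.67e-24  # g cm^-3
pc   = 3.086e18        # cm
Myr  = 3.156e13        # s

def sedov_radius(t, E=E_SN):
    """Sedov-Taylor blast radius R = 1.15 (E t^2 / rho)^(1/5)."""
    return 1.15 * (E * t**2 / rho) ** 0.2

def bubble_radius(t, N_SN, t_cluster=30 * Myr):
    """Weaver-type bubble R = 0.76 (L t^3 / rho)^(1/5) for constant
    mechanical luminosity L = N_SN * E_SN / t_cluster."""
    L = N_SN * E_SN / t_cluster
    return 0.76 * (L * t**3 / rho) ** 0.2

# Isolated SN around the time it turns radiative (~0.1 Myr, per abstract)
print(f"single SN at 0.1 Myr : {sedov_radius(0.1 * Myr) / pc:6.1f} pc")
# Superbubble from 100 SNe after 10 Myr of continuous driving
print(f"100 SNe at 10 Myr    : {bubble_radius(10 * Myr, 100) / pc:6.1f} pc")
```

With these assumed numbers the isolated remnant is at ~30 pc when it turns radiative, while the continuously driven bubble reaches several hundred pc, in line with the contrast the abstract draws.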

Relevance:

30.00%

Publisher:

Abstract:

Martensite-ferrite microstructures were produced in four microalloyed steels, A (Fe-0.44C-Cr-V), B (Fe-0.26C-Cr-V), C (Fe-0.34C-Cr-Ti-V), and D (Fe-0.23C-Cr-V), by intercritical annealing. SEM analysis reveals that steels A and C contained a higher martensite fraction and finer ferrite compared to steels B and D, which contained coarser ferrite grains and a lower martensite fraction. A network of martensite surrounding the ferrite grains was found in all the steels. Crystallographic texture was very weak in these steels, as indicated by EBSD analysis. The steels contained a negligible volume fraction of retained austenite (approximately 3-6%). TEM analysis revealed the presence of twinned and lath martensite along with ferrite. Precipitates (carbides and nitrides) of Ti and V of various shapes, a few nanometers in size, were found, particularly in the microstructure of steel B. The work hardening behavior of these steels at ambient temperature was evaluated through modified Jaoul-Crussard analysis and was characterized by two stages, owing to the presence of the martensite and ferrite phases in the microstructure. Steel A displayed the largest work hardening among the steel compositions. The work hardening behavior at a warm working temperature of 540 °C was characterized by a single stage, owing to the decomposition of martensite into ferrite and carbides at this temperature, as indicated by SEM images of the steels after warm deformation.
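
The modified Jaoul-Crussard construction mentioned above identifies work-hardening stages as straight segments in a log-log plot of hardening rate against stress. Below is a minimal sketch on synthetic data, assuming the common Swift-equation-based variant of the analysis; it is an illustration, not the paper's procedure.

```python
import numpy as np

# Modified Crussard-Jaoul analysis (Swift-based variant): plot
# ln(d sigma/d eps) against ln(sigma); hardening stages appear as
# straight segments with different slopes. Data below are synthetic.
eps = np.linspace(0.01, 0.15, 300)                  # true strain
sigma = 900 * eps**0.18 + 80 * np.exp(-eps / 0.02)  # true stress, MPa

dsde = np.gradient(sigma, eps)      # hardening rate d(sigma)/d(eps)
x, y = np.log(sigma), np.log(dsde)  # modified C-J coordinates

def two_stage_fit(x, y):
    """Brute-force two-segment linear fit; returns the best breakpoint."""
    best = None
    for i in range(10, len(x) - 10):
        r = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            c = np.polyfit(xs, ys, 1)
            r += np.sum((ys - np.polyval(c, xs)) ** 2)
        if best is None or r < best[1]:
            best = (i, r)
    return best[0]

i = two_stage_fit(x, y)
print(f"stage transition near sigma = {np.exp(x[i]):.0f} MPa")
```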

Relevance:

30.00%

Publisher:

Abstract:

Several operational aspects of thermal power plants in general are non-intuitive and involve simultaneous optimization of a number of operational parameters. In the case of solar-operated power plants, the task is even more difficult because of the varying heat source temperatures induced by variability in insolation levels. This paper introduces a quantitative methodology for load regulation of a CO2-based Brayton cycle power plant using the 'thermal efficiency and specific work output' coordinate system. The analysis shows that a transcritical CO2 cycle offers more flexibility under part-load operation than the supercritical cycle in the case of non-solar power plants. However, for concentrated solar power, where efficiency is important, the supercritical CO2 cycle fares better than the transcritical one. A number of empirical equations relating heat source temperature and high-side pressure to efficiency and specific work output are proposed, which could assist in generating control algorithms.
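
The 'thermal efficiency and specific work output' coordinates can be sketched for a simple (non-recuperated) CO2 Brayton cycle using real-fluid properties. CoolProp, the state points, and the component efficiencies below are illustrative assumptions, not the paper's plant model.

```python
# Minimal sketch of (efficiency, specific work) coordinates for a simple
# CO2 Brayton cycle; all state points and efficiencies are assumptions.
from CoolProp.CoolProp import PropsSI

def cycle(p_low, p_high, T_in_comp, T_in_turb, eta_c=0.85, eta_t=0.90):
    # Compression (1 -> 2) with isentropic efficiency eta_c
    h1 = PropsSI('H', 'P', p_low, 'T', T_in_comp, 'CO2')
    s1 = PropsSI('S', 'P', p_low, 'T', T_in_comp, 'CO2')
    h2s = PropsSI('H', 'P', p_high, 'S', s1, 'CO2')
    h2 = h1 + (h2s - h1) / eta_c
    # Heat addition (2 -> 3) at p_high, then expansion (3 -> 4)
    h3 = PropsSI('H', 'P', p_high, 'T', T_in_turb, 'CO2')
    s3 = PropsSI('S', 'P', p_high, 'T', T_in_turb, 'CO2')
    h4s = PropsSI('H', 'P', p_low, 'S', s3, 'CO2')
    h4 = h3 - eta_t * (h3 - h4s)
    w_net = (h3 - h4) - (h2 - h1)  # specific work, J/kg
    eta = w_net / (h3 - h2)        # thermal efficiency
    return eta, w_net

# Supercritical compression (inlet above the 7.38 MPa critical pressure)
eta, w = cycle(7.7e6, 20e6, 305.0, 823.0)
print(f"sCO2: eta = {eta:.3f}, w = {w / 1e3:.1f} kJ/kg")
```

Sweeping p_high or T_in_turb with this function traces part-load curves in the (specific work, efficiency) plane; a transcritical variant would set p_low below the 7.38 MPa critical pressure and add a condenser model.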

Relevance:

30.00%

Publisher:

Abstract:

Essential work of fracture (EWF) analysis is used to study the effect of the silica doping level on the fracture toughness of polyimide/silica (PI/SiO2) hybrid films. Measurements on double-edge-notched tension (DENT) specimens with different ligament lengths indicate that the introduction of the silica additive can improve the specific essential work of fracture (w_e) of PI thin films, but the specific non-essential work of fracture (βw_p) decreases significantly as the silica doping level increases from 1 to 5 wt.%, eventually falling below that of neat PI. The failure process is investigated with in situ scanning electron microscope (SEM) observation, and the non-essential work of fracture parameters, β and w_p, are calculated using the finite element (FE) method.
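
The EWF data reduction implied above is a straight-line extrapolation: the measured specific total work of fracture w_f is regressed against ligament length L, the intercept giving w_e and the slope giving βw_p. A minimal sketch with made-up numbers:

```python
import numpy as np

# EWF extrapolation: w_f = w_e + (beta*w_p) * L over ligament length L.
# Intercept -> specific essential work w_e; slope -> beta*w_p.
L  = np.array([5.0, 8.0, 11.0, 14.0, 17.0])    # ligament length, mm
wf = np.array([21.0, 27.5, 33.2, 39.8, 45.1])  # specific work, kJ/m^2

slope, intercept = np.polyfit(L, wf, 1)
print(f"w_e      = {intercept:.1f} kJ/m^2 (specific essential work)")
print(f"beta*w_p = {slope:.2f} kJ/m^2 per mm of ligament")
```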

Relevance:

30.00%

Publisher:

Abstract:

Three adhesion contact models, JKR (Johnson-Kendall-Roberts), DMT (Derjaguin-Muller-Toporov), and MD (Maugis-Dugdale), are compared with the Hertz model in dealing with nano-contact problems. It is shown that the dimensionless load parameter, $\bar{P}=P/(\pi\Delta\gamma R)$, and the transition parameter, $\Lambda$, have significant influences on the contact stiffness (contact area) at the micro/nano-scale and should not be ignored in shallow nanoindentation.
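
For context, the standard load-contact-radius relations these models reduce to (with $K = \tfrac{4}{3}E^*$ the reduced contact modulus and $a$ the contact radius) are

$a^3 = \frac{R}{K}\left(P + 3\pi\Delta\gamma R + \sqrt{6\pi\Delta\gamma R P + (3\pi\Delta\gamma R)^2}\right)$  (JKR)
$a^3 = \frac{R}{K}\left(P + 2\pi\Delta\gamma R\right)$  (DMT)

while MD interpolates between the two via the transition parameter $\Lambda$ (large $\Lambda$: compliant, strongly adhering contacts, the JKR limit; small $\Lambda$: stiff, weakly adhering contacts, the DMT limit). The Hertz result is recovered by setting $\Delta\gamma = 0$.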

Relevance:

30.00%

Publisher:

Abstract:

The relationship between hardness (H), reduced modulus (E_r), unloading work (W_u), and total work (W_t) of indentation is examined in detail, both experimentally and theoretically. The experimental study verifies the approximate linear relationship, and theoretical analysis confirms it. Furthermore, solutions for conical indentation in an elastic-perfectly plastic solid, including the elastic work (W_e), H, W_t, and W_u, are obtained using Johnson's expanding cavity model and the Lamé solution. It is found that W_e should be distinguished from W_u, rather than the two being treated as equivalent as suggested in ISO 14577, and that (H/E_r)/(W_u/W_t) depends mainly on the conical angle; both results are also verified with numerical simulations.
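
In the notation above, with $W_t$ the work done over the full loading ramp and $W_u$ the work recovered on unloading, the approximate linear relationship the abstract refers to can be written as

$W_t = \int_0^{h_{\max}} P\,dh, \qquad \frac{H}{E_r} \approx \kappa(\theta)\,\frac{W_u}{W_t},$

where the proportionality factor $\kappa$ depends mainly on the conical (half-)angle $\theta$, per the result stated above; the elastic work $W_e$ enters the cavity-model analysis separately and is not, in general, equal to $W_u$.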

Relevance:

30.00%

Publisher:

Abstract:

Background: Screen-viewing has been associated with increased body mass, increased risk of metabolic syndrome, and lower psychological well-being among children and adolescents. There is a shortage of information about the nature of contemporary screen-viewing amongst children, especially given the rapid advances in screen-viewing equipment technology and its widespread availability. Anecdotal evidence suggests that large numbers of children embrace the multi-functionality of current devices to engage in multiple forms of screen-viewing at the same time. In this paper we used qualitative methods to assess the nature and extent of multiple forms of screen-viewing in UK children. Methods: Focus groups were conducted with 10-11-year-old children (n = 63) recruited from five primary schools in Bristol, UK. Topics included the types of screen-viewing in which the participants engaged; whether the participants ever engaged in more than one form of screen-viewing at a time and, if so, the nature of this multiple viewing; reasons for engaging in multi-screen-viewing; the room within the house where multi-screen-viewing took place; and the reasons for selecting that room. All focus groups were transcribed verbatim, anonymised and thematically analysed. Results: Multi-screen viewing was a common behaviour. Although multi-screen viewing often involved watching TV, TV viewing was often the background behaviour, with attention focussed on a laptop, handheld device or smart-phone. There were three main reasons for engaging in multi-screen viewing: 1) tempering the impatience associated with a programme loading; 2) filtering out unwanted content such as advertisements; and 3) the perception that multi-screen viewing is enjoyable. Multi-screen viewing occurred either in the child's bedroom or in the main living area of the home. There was considerable variability in the level and timing of viewing, and this appeared to be a function of whether the participants attended after-school clubs. Conclusions: UK children regularly engage in two or more forms of screen-viewing at the same time. There are currently no means of assessing multi-screen viewing and no interventions that specifically focus on reducing it. To reduce children's overall screen-viewing we need to understand and then develop approaches to reduce multi-screen viewing among children.

Relevance:

30.00%

Publisher:

Abstract:

This guidebook attempts to provide a quick overview of the Work in Fishing Convention, 2007, which was adopted in Geneva, Switzerland, in June 2007 at the 96th International Labour Conference (ILC) of the International Labour Organization (ILO). It does not purport to interpret any provisions of the Convention and should not in any way be treated as a substitute for the actual provisions it contains. The guidebook is intended mainly to help those unfamiliar with the Convention and the workings of the ILO and the ILC gain some understanding of the relevant issues. In particular, it is hoped that the guidebook will help fish workers and their organizations understand the possible benefits and implications of the Convention for artisanal and small-scale fisheries in developing countries.

Relevance:

30.00%

Publisher:

Abstract:

Last year, Jisc began work with EDUCAUSE - the US organisation for IT professionals in higher education - to find out what the skillset of the CIO of the future will look like. One of the findings of our project was that many aspiring technology leaders find it difficult to make the step up. Louisa Dale, director of Jisc group sector intelligence, talks through the learnings and opens a call for IT professionals to get involved in the next phase of the work.

Relevance:

30.00%

Publisher:

Abstract:

This Q&A provides an overview of copyright law and how it applies in a variety of scenarios relevant to work-based learning (WBL) providers.

Relevance:

30.00%

Publisher:

Abstract:

The effective stress principle has been applied effectively to saturated soils in soil mechanics and geotechnical engineering practice; however, its applicability to unsaturated soils is still under debate. The appropriate selection of stress state variables is essential for the construction of constitutive models for unsaturated soils. Owing to the complexity of unsaturated soils, it is difficult to determine their deformation and strength behaviors uniquely, in all situations, with the earlier single-effective-stress-variable or two-effective-stress-variable theories. In this paper, based on porous media theory, a specific expression for the work is proposed, and the effective stress of unsaturated soils conjugate to the displacement of the soil skeleton is derived from it. The derived work and energy balance equations take the energy dissipation in unsaturated soils into account, and according to them all three generalized stresses and their conjugate strains affect the deformation of unsaturated soils. To capture these effects, a principle of generalized effective stress for describing the behaviors of unsaturated soils is proposed. The proposed principle reduces to the earlier single-stress-variable or two-stress-variable effective stress theories under certain conditions, and it provides a helpful reference for the development of constitutive models for unsaturated soils.
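
For orientation, the classical stress measures the abstract refers to are, in the usual notation ($\sigma$ total stress, $u_a$ pore-air pressure, $u_w$ pore-water pressure):

$\sigma' = \sigma - u_w$  (Terzaghi, saturated soils)
$\sigma' = (\sigma - u_a) + \chi(u_a - u_w)$  (Bishop's single effective stress, $0 \le \chi \le 1$)
$(\sigma - u_a)$ and $s = u_a - u_w$  (net stress and matric suction, the two-stress-variable description)

The generalized principle proposed above recovers these as limiting cases under certain conditions.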

Relevance:

30.00%

Publisher:

Abstract:

The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, thereby limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern and southern sourced deep waters was markedly greater than in the modern ocean. High density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to that we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?

We investigate the cause and feasibility of a highly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information we can glean about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean-sea ice-ice shelf cavity model, we test whether the deep ocean density structure at the LGM can be explained by ice-ocean interactions over the Antarctic continental shelves, and show that a large part of the LGM salinity stratification can be explained by lower ocean temperatures. To extract the maximum information from pore fluid profiles of oxygen isotopes and chloride, we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov Chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM but through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, against traditional squeezing methods and show that, despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
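
As a toy illustration of the inverse problem described above, the sketch below recovers a single bottom-water parameter from a noisy synthetic pore-fluid profile with a random-walk Metropolis sampler. The forward model is the textbook semi-infinite diffusion solution for a step change in the bottom-water value at deglaciation; the diffusivity, timescale, and noise level are assumptions, not the thesis's model.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
D, t = 3e-10, 15e3 * 3.156e7  # diffusivity m^2/s, 15 kyr in s (assumed)
z = np.linspace(0, 60, 25)    # depth below seafloor, m

def forward(c_lgm, c_mod=0.0):
    # Uniform c_lgm initially; boundary held at c_mod since deglaciation:
    # classic semi-infinite diffusion solution.
    return c_mod + (c_lgm - c_mod) * erf(z / (2 * np.sqrt(D * t)))

truth, noise = 1.0, 0.03  # synthetic 'data' with known answer
data = forward(truth) + noise * rng.standard_normal(z.size)

def log_like(c_lgm):
    return -0.5 * np.sum((data - forward(c_lgm)) ** 2) / noise**2

# Random-walk Metropolis over the single parameter c_lgm
chain, c = [], 0.5
ll = log_like(c)
for _ in range(20000):
    prop = c + 0.05 * rng.standard_normal()
    llp = log_like(prop)
    if np.log(rng.random()) < llp - ll:  # Metropolis accept/reject
        c, ll = prop, llp
    chain.append(c)

post = np.array(chain[5000:])  # discard burn-in
print(f"posterior c_lgm = {post.mean():.3f} +/- {post.std():.3f}")
```

The thesis's problem is of course higher-dimensional (a full bottom-water history rather than one value), but the same accept/reject machinery applies parameter by parameter.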

Relevance:

30.00%

Publisher:

Abstract:

In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. In practice, however, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.

Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor $h/h_{\rm SQL} \sim \sqrt{W^{\rm SQL}_{\rm circ}/W_{\rm circ}}$. Here $W_{\rm circ}$ is the light power circulating in the interferometer arms and $W^{\rm SQL}_{\rm circ} \simeq 800$ kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power-squeeze factor $e^{-2R}$) is injected into the interferometer's output port, the SQL can be beaten with a much reduced laser power: $h/h_{\rm SQL} \sim \sqrt{W^{\rm SQL}_{\rm circ}/(W_{\rm circ}\,e^{2R})}$. For realistic parameters ($e^{2R} \simeq 10$ and $W_{\rm circ} \simeq 800$ to 2000 kW), the SQL can be beaten by a factor of ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrowband; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
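
As a quick consistency check of the quoted numbers: with $e^{2R} \simeq 10$ and $W_{\rm circ} \simeq W^{\rm SQL}_{\rm circ} \simeq 800$ kW, the formula gives $h/h_{\rm SQL} \sim \sqrt{1/10} \approx 0.32$, i.e. the SQL is beaten by a factor of roughly 3, the low end of the quoted 3-to-4 range; pushing $W_{\rm circ}$ toward 2000 kW improves the factor further, subject to the band-narrowing caveat noted above.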

Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed-meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.

Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.

Relevance:

30.00%

Publisher:

Abstract:

Energy and sustainability have become two of the most critical issues of our generation. While the abundant potential of renewable energy such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable efficient integration of renewable energy into complex distributed systems with limited information.

The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into them. IT is one of the fastest growing sectors in energy usage and greenhouse gas pollution. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but these do not necessarily lead to a reduction in energy consumption because more servers are demanded. Further, little effort has been put into making IT more sustainable, and most of the improvements have come from improved "engineering" rather than improved "algorithms". In contrast, my work focuses on developing algorithms, with rigorous theoretical analysis, that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to respond adaptively to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is, however, very challenging because of limited information, non-smooth objective functions, and the need for distributed control. Novel distributed algorithms with theoretically provable guarantees are developed to enable "follow the renewables" routing, as sketched below. Moving from theory to practice, I helped HP design and implement the industry's first Net-zero Energy Data Center.
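
A toy version of the "follow the renewables" idea: allocate a batch of requests across geographically diverse data centers so that renewable supply is consumed before any site draws grid power. This greedy sketch ignores network latency, cooling efficiency, and the distributed-control machinery discussed above; it is an illustration, not the thesis's algorithm.

```python
# Greedy 'follow the renewables' allocation: fill renewable headroom
# at the greenest sites first, then spill the remainder onto leftover
# capacity (served from the grid). All inputs are illustrative.
def route(load, capacity, renewable):
    """load: total requests; capacity[i], renewable[i]: per-site limits."""
    n = len(capacity)
    alloc = [0.0] * n
    # Pass 1: consume renewable supply, greenest sites first
    for i in sorted(range(n), key=lambda i: -renewable[i]):
        take = min(load, capacity[i], renewable[i])
        alloc[i] += take
        load -= take
    # Pass 2: spill the remainder onto leftover capacity (grid power)
    for i in range(n):
        take = min(load, capacity[i] - alloc[i])
        alloc[i] += take
        load -= take
    return alloc, load  # residual load > 0 means the batch was infeasible

alloc, rest = route(100.0, capacity=[60, 60, 60], renewable=[45, 10, 25])
print(alloc, "unserved:", rest)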

The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges in integrating more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially solve these issues, they are very costly. One promising alternative is to make cloud data centers demand responsive. The potential of such an approach is huge.

To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work progresses in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance that let data center operators deal with uncertainties under popular demand response programs; a toy version appears below. Based on customers' local control rules, I have further designed new pricing schemes for demand response that align the interests of customers, utility companies, and society, improving social welfare.
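
As a flavour of the online decisions involved, the sketch below defers a pool of delay-tolerant work and runs it only in cheap-price hours, with a deadline forcing any backlog through. The prices, rates, and threshold rule are illustrative assumptions, not the thesis's algorithms.

```python
# Toy online demand-response rule: serve deferrable work when the hourly
# price is at or below a threshold; otherwise run only the minimum needed
# to still meet the deadline at the given service rate.
def schedule(prices, work, rate, deadline, threshold):
    """prices: per-hour price; work: total units due by hour `deadline`."""
    done, plan = 0.0, []
    for t, p in enumerate(prices):
        hours_left = max(deadline - t, 0)
        # Minimum run this hour so the remaining hours can still finish
        must_run = max(work - done - (hours_left - 1) * rate, 0) if hours_left else 0
        run = rate if p <= threshold else min(must_run, rate)
        run = min(run, work - done)  # never run more than remains
        done += run
        plan.append(run)
    return plan

prices = [42, 35, 28, 55, 60, 31, 90, 25]
print(schedule(prices, work=12, rate=3, deadline=8, threshold=32))
```

On this input the rule idles through the expensive opening hours, runs in the cheap hours (28, 31, 25), and is forced to run once at hour 6 to keep the deadline feasible.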