937 results for Time scales
Abstract:
There has been growing interest in smart grids and in the possibility of significantly enhanced performance from remote measurements and intelligent controls. For transmission, the use of PMU signals from remote sites and direct load-shed controls can give a significant enhancement for large system disturbances, rather than relying on local measurements and linear controls. This lecture will emphasize what can be learned from remote measurements and the mechanisms for achieving a smarter response to major disturbances. For distribution systems, there is a significant history of automation for distribution reconfiguration. This lecture will emphasize the incorporation of Distributed Generation into distribution networks and its impact on voltage/frequency control and protection. Overall, the performance of both transmission and distribution will be affected by demand-side management and the capabilities built into the system. In particular, we consider different time scales of load communication and response and examine the benefits for the system, energy use, and lines.
Abstract:
Recent observations of particle size distributions and particle concentrations near a busy road cannot be explained by the conventional mechanisms for the particle evolution of combustion aerosols. Specifically, these mechanisms appear inadequate to explain the experimental observations of particle transformation and the evolution of the total number concentration. This led to the development of a new mechanism, based on thermal fragmentation, for the evolution of combustion aerosol nano-particles. A complex and comprehensive pattern of evolution of combustion aerosols, involving particle fragmentation, was then proposed and justified. In that model it was suggested that thermal fragmentation occurs in aggregates of primary particles, each of which contains a solid graphite/carbon core surrounded by volatile molecules bonded to the core by strong covalent bonds. Because of these strong covalent bonds between the core and the volatile (frill) molecules, such primary composite particles can be regarded as solid, despite the presence of a significant (possibly dominant) volatile component. Fragmentation occurs when the weak van der Waals forces between such primary particles are overcome by their thermal (Brownian) motion. In this work, the accepted concept of thermal fragmentation is advanced to determine whether fragmentation is likely in liquid composite nano-particles. It has been demonstrated that, at least at some stages of evolution, combustion aerosols contain a large number of composite liquid particles containing presumably several components such as water, oil, volatile compounds, and minerals. It is possible that such composite liquid particles may also experience thermal fragmentation and thus contribute to, for example, the evolution of the total number concentration as a function of distance from the source. Therefore, the aim of this project is to examine theoretically the possibility of thermal fragmentation of composite liquid nano-particles consisting of immiscible liquid components. The specific focus is on ternary systems comprising two immiscible liquid droplets surrounded by another medium (e.g., air). The analysis shows that three different structures are possible: complete encapsulation of one liquid by the other, partial encapsulation of the two liquids in a composite particle, and the two droplets separated from each other. The probability of thermal fragmentation of two coagulated liquid droplets is examined for different volumes of the immiscible fluids in a composite liquid particle and for their surface and interfacial tensions, through determination of the Gibbs free energy difference between the coagulated and fragmented states and comparison of this energy difference with the typical thermal energy kT. The analysis reveals that fragmentation is much more likely for a partially encapsulated particle than for a completely encapsulated particle. In particular, thermal fragmentation is much more likely when the volumes of the two liquid droplets that constitute the composite particle are very different. Conversely, when the two liquid droplets are of similar volumes, the probability of thermal fragmentation is small. It is also demonstrated that the Gibbs free energy difference between the coagulated and fragmented states is not the only important factor determining the probability of thermal fragmentation of composite liquid particles. The second essential factor is the actual structure of the composite particle.
It is shown that the probability of thermal fragmentation also depends strongly on the distance that each of the liquid droplets must travel to reach the fragmented state. In particular, if this distance is larger than the mean free path for the considered droplets in air, the probability of thermal fragmentation should be negligible. It follows from this that fragmentation of a composite particle in the state of complete encapsulation is highly unlikely, because of the larger distance that the two droplets must travel in order to separate. The analysis of composite liquid particles with the interfacial parameters expected in combustion aerosols demonstrates that thermal fragmentation of these particles may occur, and this mechanism may play a role in the evolution of combustion aerosols. Conditions for thermal fragmentation to play a significant role (for aerosol particles other than those from motor vehicle exhaust) are determined and examined theoretically. Conditions for spontaneous transformation between the states of composite particles with complete and partial encapsulation are also examined, demonstrating the possibility of such transformation in combustion aerosols. Indeed, it was shown that, for some typical components found in aerosols, transformation could take place on time scales of less than 20 s. The analysis showed that factors influencing surface and interfacial tension play an important role in this transformation process. It is suggested that such transformation may, for example, result in delayed evaporation of composite particles with a significant water component, leading to observable effects in the evolution of combustion aerosols (including possible local humidity maxima near a source, such as a busy road). The results obtained will be important for the further development and understanding of aerosol physics and technologies, including combustion aerosols and their evolution near a source.
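To make the Gibbs-energy comparison above concrete, the following minimal Python sketch computes the surface-energy difference between a fully encapsulated composite droplet and the two separated droplets, and compares it with kT. The idealised spherical geometry and all tension and size values are illustrative assumptions, not parameters from the thesis.

    import numpy as np

    kB = 1.380649e-23  # Boltzmann constant, J/K

    def area(V):
        # Surface area of a sphere of volume V.
        r = (3.0 * V / (4.0 * np.pi)) ** (1.0 / 3.0)
        return 4.0 * np.pi * r ** 2

    def dG_fragmentation(V1, V2, g1, g2, g12):
        """Gibbs surface-energy cost (J) of fragmenting a fully encapsulated
        particle (liquid 1 core inside a liquid 2 shell) into two free droplets.
        g1, g2: surface tensions against air; g12: interfacial tension (N/m)."""
        G_frag = g1 * area(V1) + g2 * area(V2)           # two separate droplets
        G_encap = g2 * area(V1 + V2) + g12 * area(V1)    # outer shell + core interface
        return G_frag - G_encap

    # Illustrative case: a 25 nm water core in an oil-like shell at 300 K.
    V = lambda r: 4.0 / 3.0 * np.pi * r ** 3
    dG = dG_fragmentation(V(25e-9), V(25e-9), 0.072, 0.030, 0.050)
    print("dG / kT =", dG / (kB * 300.0))  # >> 1 means fragmentation is unlikely

Run with similar droplet volumes, the ratio is far above unity, consistent with the conclusion above that fragmentation of similar-volume, fully encapsulated particles is improbable.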
Abstract:
This manuscript took a 'top down' approach to understanding survival of inhabitant cells in the ecosystem of bone, working from higher to lower length and time scales through the hierarchical ecosystem of bone. Our working hypothesis is that nature "engineered" the skeleton using a 'bottom up' approach, where the mechanical properties of cells emerge from their adaptation to their local mechanical milieu. Cell aggregation and formation of higher-order anisotropic structure result in emergent architectures through cell differentiation and extracellular matrix secretion. These emergent properties, including mechanical properties and architecture, result in mechanical adaptation at the length scales and longer time scales which are most relevant for the survival of the vertebrate organism [Knothe Tate and von Recum 2009]. We are currently using insights from this approach to harness nature's regeneration potential and to engineer novel mechanoactive materials [Knothe Tate et al. 2007, Knothe Tate et al. 2009]. In addition to potential applications of these exciting insights, these studies may provide important clues to the evolution and development of vertebrate animals. For instance, one might ask: why do mesenchymal stem cells condense at all? There is a putative advantage to self-assembly and cooperation, but this advantage is somewhat outweighed by the need for infrastructural complexity (e.g., circulatory systems comprised of specific differentiated cell types which in turn form conduits and pumps to overcome the limitations of mass transport via diffusion; diffusion is untenable for multicellular organisms larger than 250 microns in diameter). A better question might be: why do cells build skeletal tissue? Once cooperating cells in tissues begin to deplete local sources of food in their aquatic environment, those that have evolved a means to locomote likely have an evolutionary advantage. Once the environment becomes less aquatic and more terrestrial, self-assembled organisms with the ability to move on land might have conferred evolutionary advantages as well. So did the cytoskeleton evolve across several length scales, enabling the emergence of skeletal architecture for vertebrate animals? Did the evolutionary advantage of motility over noncompliant terrestrial substrates (walking on land) favor adaptations including the emergence of intracellular architecture (changes in the cytoskeleton and upregulation of structural protein manufacture), intercellular condensation, mineralization of tissues, and the emergence of higher-order architectures? How far does evolutionary Darwinism extend, and how can we exploit this knowledge to engineer smart materials and architectures on Earth and in new, exploratory environments? [Knothe Tate et al. 2008]. We are limited only by our ability to imagine. Ultimately, we aim to understand nature, mimic nature, guide nature and/or exploit nature's engineering paradigms without engineering ourselves out of existence.
Abstract:
Large igneous provinces (LIPs) are sites of the most frequently recurring, largest-volume basaltic and silicic eruptions in Earth history. These large-volume (>1000 km³ dense rock equivalent) and large-magnitude (>M8) eruptions produce areally extensive (10⁴–10⁵ km²) basaltic lava flow fields and silicic ignimbrites that are the main building blocks of LIPs. Available information on the largest eruptive units comes primarily from the Columbia River and Deccan provinces for the dimensions of flood basalt eruptions, and from the Paraná–Etendeka and Afro-Arabian provinces for the silicic ignimbrite eruptions. In addition, three large-volume (675–2000 km³) silicic lava flows have been mapped out in the Proterozoic Gawler Range province (Australia), an interpreted LIP remnant. Magma volumes of >1000 km³ have also been emplaced as high-level basaltic and rhyolitic sills in LIPs. The data sets indicate comparable eruption magnitudes between the basaltic and silicic eruptions, but because considerable volumes reside as co-ignimbrite ash deposits, the current volume constraints for the silicic ignimbrite eruptions may be considerably underestimated. Magma composition thus appears to be no barrier to the volume of magma emitted during an individual eruption. Despite this general similarity in magnitude, flood basaltic and silicic eruptions are very different in terms of eruption style, duration, intensity, vent configuration, and emplacement style. Flood basaltic eruptions are dominantly effusive and Hawaiian–Strombolian in style, with magma discharge rates of ~10⁶–10⁸ kg s⁻¹ and eruption durations estimated at years to tens of years, emplacing dominantly compound pahoehoe lava flow fields. Effusive and fissural eruptions have also emplaced some large-volume silicic lavas, but discharge rates are unknown and may be up to an order of magnitude greater than those of flood basalt lava eruptions for emplacement to occur on realistic time scales (<10 years). Most silicic eruptions, however, are moderately to highly explosive, producing co-current pyroclastic fountains (rarely Plinian) with discharge rates of 10⁹–10¹¹ kg s⁻¹ that emplace welded to rheomorphic ignimbrites. At present, durations for the large-magnitude silicic eruptions are unconstrained; at discharge rates of 10⁹ kg s⁻¹, equivalent to the peak of the 1991 Mt Pinatubo eruption, the largest silicic eruptions would take many months to evacuate >5000 km³ of magma. The generally simple deposit structure is more suggestive of short-duration (hours to days) and high-intensity (~10¹¹ kg s⁻¹) eruptions, perhaps with hiatuses in some cases. These extreme discharge rates would be facilitated by multiple-point, fissure, and/or ring-fracture venting of magma. Eruption frequencies are much elevated for large-magnitude eruptions of both magma types during LIP-forming episodes. However, in basalt-dominated provinces (continental and ocean basin flood basalt provinces, oceanic plateaus, volcanic rifted margins), large-magnitude (>M8) basaltic eruptions have much shorter recurrence intervals of 10³–10⁴ years, whereas similar-magnitude silicic eruptions may have recurrence intervals of up to 10⁵ years. The Paraná–Etendeka province was the site of at least nine >M8 silicic eruptions over an ~1 Myr period at ~132 Ma; a similar eruption frequency, although with fewer silicic eruptions, is also observed for the Afro-Arabian Province.
The huge volumes of basaltic and silicic magma erupted in quick succession during LIP events raise several unresolved issues in terms of the locus of magma generation, storage (if any) in the crust prior to eruption, and the paths and rates of ascent from magma reservoirs to the surface.
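As a quick plausibility check on the discharge-rate arithmetic quoted above, the short Python sketch below converts an erupted volume of 5000 km³ into a duration at the two quoted discharge rates; the magma density is an assumed round number, not a value from the abstract.

    # Duration to evacuate ~5000 km^3 of magma at the quoted discharge rates.
    volume_m3 = 5000.0 * 1e9          # 1 km^3 = 1e9 m^3
    density = 2500.0                  # kg/m^3, assumed dense-rock-equivalent density
    mass = volume_m3 * density        # ~1.25e16 kg
    for rate in (1e9, 1e11):          # kg/s, range quoted in the abstract
        days = mass / rate / 86400.0
        print(f"{rate:.0e} kg/s -> {days:,.1f} days")
    # ~145 days ("many months") at 1e9 kg/s; ~1.4 days ("hours to days") at 1e11 kg/s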
Abstract:
The naturally low stream salinity in the Nebine-Mungallala Catchment, the extent of vegetation retention, relatively low rainfall and high evaporation indicate that there is a relatively low risk of rising shallow groundwater tables in the catchment. Scalding caused by wind and water erosion exposing highly saline sub-soils is a more important regional issue, such as in the Homeboin area. Local salinisation associated with evaporation of bore water from free-flowing bore drains and bores is also an important land degradation issue, particularly in the lower Nebine, Wallam and Mungallala Creeks. The replacement of free-flowing artesian bores and bore drains with capped bores and piped water systems under the Great Artesian Basin bore rehabilitation program is addressing local salinisation and scalding in the vicinity of bore drains and preventing the discharge of saline bore water to streams. Three principles for the prevention and control of salinity in the Nebine-Mungallala catchment have been identified in this review:
• Avoid salinity through avoiding scalds, i.e. not exposing the near-surface salt in the landscape through land degradation;
• Riparian zone management: scalding often occurs within 200 m or so of watering lines. Natural drainage lines are most likely to be overstocked, and thus have potential for scalding. Scalding begins when vegetation is removed; without that binding cover, wind and water erosion exposes the subsoil; and
• Monitoring of exposed or grazed soil areas.
Based on the findings of the study, we make the following recommendations:
1. Undertake a geotechnical study of existing maps and other data to help identify and target the areas most at risk of rising water tables causing salinity. Selected monitoring should then be established using piezometers as an early-warning system.
2. SW NRM should financially support scald reclamation activity through its various funding programs. However, for this to have any validity in the overall management of salinity risk, it is critical that such funding require the landholder to undertake a salinity hazard/risk assessment of his/her holding.
3. A staged approach to funding may be appropriate. In the first instance, it would be reasonable to commence funding some pilot scald reclamation work with a view to further developing and piloting the farm hazard/risk assessment tools, and to exploring how subsequent grazing management strategies could be incorporated within other extension and management activities. Once the details of the necessary farm-level activities have been more clearly defined, and following the outcomes of the geotechnical review recommended above, a more comprehensive funding package could be rolled out to priority areas.
4. We recommend that the best-practice grazing management training currently on offer be enhanced with information about salinity risk in scald-prone areas and ways of minimising the likelihood of scald formation.
5. We recommend that course material be developed for local students in Years 6 and 7, and that arrangements be made with local schools to present this information. Given the constraints of existing syllabi, we envisage that negotiations may have to be undertaken with the Department of Education in order for this material to be permitted for use. We have contact with key people who could help with this if required.
6. We recommend that SW NRM continue to support existing extension activities such as Grazing Land Management and the Monitoring Made Easy tools. These aids should be easily expandable to incorporate techniques for monitoring, addressing and preventing salinity and scalding. At the time of writing, SW NRM staff were actively involved in this process. It is important that these activities are adequately resourced so that landholders come to treat salinity as an issue that needs to be addressed as part of everyday management.
7. We recommend that SW NRM consider investing in the development and deployment of a scenario-modelling learning support tool as part of its awareness-raising and education activities. Secondary salinity is a dynamic process that results from ongoing human activity which mobilises and/or exposes salt occurring naturally in the landscape. Time scales can be short to very long, and the benefits of management actions can similarly have immediate or very long time frames. One way to help explain the dynamics of these processes is through scenario modelling.
Abstract:
Recently the application of the quasi-steady-state approximation (QSSA) to the stochastic simulation algorithm (SSA) was suggested for the purpose of speeding up stochastic simulations of chemical systems that involve both relatively fast and slow chemical reactions [Rao and Arkin, J. Chem. Phys. 118, 4999 (2003)], and further work has led to the nested and slow-scale SSA. Improved numerical efficiency is obtained by respecting the vastly different time scales characterizing the system and then advancing only the slow reactions exactly, based on a suitable approximation to the fast reactions. We considerably extend these works by applying the QSSA to numerical methods for the direct solution of the chemical master equation (CME) and, in particular, to the finite state projection algorithm [Munsky and Khammash, J. Chem. Phys. 124, 044104 (2006)], in conjunction with Krylov methods. In addition, we point out some important connections to the literature on the (deterministic) total QSSA (tQSSA) and place the stochastic analogue of the QSSA within the more general framework of aggregation of Markov processes. We demonstrate the new methods on four examples: Michaelis–Menten enzyme kinetics, double phosphorylation, the Goldbeter–Koshland switch, and the mitogen-activated protein kinase cascade. Overall, we report dramatic improvements by applying the tQSSA to the CME solver.
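As a rough illustration of the stochastic QSSA idea described above (not the CME/Krylov machinery of the paper itself), the Python sketch below runs a Gillespie SSA in which the fast enzyme-substrate binding reactions have been eliminated, leaving a single slow conversion with a Michaelis-Menten propensity; all parameter values are arbitrary.

    import numpy as np

    def ssa_mm(S0, Vmax, Km, t_end, seed=0):
        """Gillespie SSA for the QSSA-reduced scheme S -> P, where the fast
        binding step is replaced by a Michaelis-Menten propensity."""
        rng = np.random.default_rng(seed)
        t, S = 0.0, S0
        times, counts = [t], [S]
        while S > 0:
            a = Vmax * S / (Km + S)        # reduced propensity of the slow reaction
            t += rng.exponential(1.0 / a)  # waiting time to the next slow event
            if t > t_end:
                break
            S -= 1                         # fire the slow reaction S -> P
            times.append(t); counts.append(S)
        return np.array(times), np.array(counts)

    times, S = ssa_mm(S0=300, Vmax=10.0, Km=50.0, t_end=200.0)
    print(f"{S[-1]} substrate molecules left at t = {times[-1]:.1f}")

Because the fast binding events are never simulated individually, each step of this reduced SSA advances the system by a slow-reaction waiting time, which is the source of the speed-up the abstract describes.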
Abstract:
The opening phrase of the title is from Charles Darwin's notebooks (Schweber 1977). It is a double reminder: firstly, that mainstream evolutionary theory is not just about describing nature but is particularly looking for mechanisms or 'causes'; and secondly, that there will usually be several causes affecting any particular outcome. The second part of the title reflects our concern at the almost universal rejection of the idea that biological mechanisms are sufficient for macroevolutionary changes, thus rejecting a cornerstone of Darwinian evolutionary theory. Our primary aim here is to consider ways of making it easier to develop and to test hypotheses about evolution. Formalizing hypotheses can help generate tests. In an absolute sense, some of the discussion by scientists about evolution is little better than the lack of reasoning used by those advocating intelligent design. Our discussion here is in a Popperian framework, where science is defined as that area of study in which it is possible, in principle, to find evidence against hypotheses – they are in principle falsifiable. However, with time, the boundaries of science keep expanding. In the past, some aspects of evolution were outside the then-current boundaries of falsifiable science, but new techniques and ideas are increasingly expanding those boundaries, and it is appropriate to re-examine some topics. It often appears that over the last few decades there has been an increasingly strong assumption to look first (and only) for a physical cause. This decision is virtually never formally discussed; an assumption is simply made that some physical factor 'drives' evolution. It is necessary to examine our assumptions much more carefully: what is meant by physical factors 'driving' evolution, or by an 'explosive radiation'? Our discussion focuses on two of the six mass extinctions, the fifth being events in the Late Cretaceous, and the sixth starting at least 50,000 years ago (and ongoing). Cretaceous/Tertiary boundary: the rise of birds and mammals. We have had a long-term interest (Cooper and Penny 1997) in designing tests to help evaluate whether the processes of microevolution are sufficient to explain macroevolution. The real challenge is to formulate hypotheses in a testable way. For example, the number of lineages of birds and mammals that survive from the Cretaceous to the present is one test. Our first estimate was 22 for birds, and current work is tending to increase this value. This still does not consider lineages that survived into the Tertiary and then went extinct later. Our initial suggestion was probably too narrow in that it lumped four models from Penny and Phillips (2004) into one model. This reduction is too simplistic, in that we need to know about survival and about ecological and morphological divergences during the Late Cretaceous, and whether crown groups of avian or mammalian orders may have existed back into the Cretaceous. More recently (Penny and Phillips 2004) we have formalized hypotheses about dinosaurs and pterosaurs, with the prediction that interactions between mammals (and ground-feeding birds) and dinosaurs would be most likely to affect the smallest dinosaurs, and similarly that interactions between birds and pterosaurs would particularly affect the smaller pterosaurs. There is now evidence for both classes of interactions, with the smallest dinosaurs and pterosaurs declining first, as predicted. Thus, testable models are now possible. Mass extinction number six: human impacts.
On a broad scale, there is a good correlation between the time of human arrival and increased extinctions (Hurles et al. 2003; Martin 2005; Figure 1). However, it is necessary to distinguish different time scales (Penny 2005), and on a finer scale there are still large numbers of possibilities. In Hurles et al. (2003) we mentioned habitat modification (including the use of fire) and introduced plants and animals (including kiore), in addition to direct predation (the 'overkill' hypothesis). We need also to consider the prey switching that occurs in early human societies, as evidenced by the results of Wragg (1995) on the middens of different ages on Henderson Island in the Pitcairn group. In addition, the presence of human-wary or human-adapted animals will affect the distribution in the subfossil record. A better understanding of human impacts world-wide, in conjunction with pre-scientific knowledge, will make it easier to discuss the issues by removing 'blame'. While spontaneous generation was still universally accepted, there was the expectation that animals would continually reappear. New Zealand is one of the very best locations in the world to study many of these issues: apart from the marine fossil record, some human impact events are extremely recent and the remains are less disrupted by time.
Abstract:
A new scaling analysis has been performed for the unsteady natural convection boundary layer under a downward-facing inclined plate with uniform heat flux. The development of the thermal or viscous boundary layers may be classified into three distinct stages, comprising an early stage, a transitional stage and a steady stage, which can be clearly identified in both the analytical and the numerical results. Earlier work shows that the existing scaling laws for the boundary-layer thickness, velocity and steady-state time scales of the natural convection flow on a heated plate of uniform heat flux provide a very poor prediction of the Prandtl-number dependency, although those scalings perform very well with respect to the Rayleigh-number and aspect-ratio dependencies. In this study, a modified Prandtl-number scaling has been developed using a triple-layer integral approach for Pr > 1. In comparison with direct numerical simulations, the new scaling performs considerably better than the previous scaling.
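For readers unfamiliar with this style of argument, the early, conduction-dominated stage and the steady-state time scale are classically obtained by balancing terms in the energy equation; a minimal sketch of the standard (unmodified) reasoning, with κ the thermal diffusivity, is

    \delta_T(t) \sim \sqrt{\kappa t}, \qquad t_s \sim \delta_s^{2}/\kappa,

where δ_T is the growing thermal boundary-layer thickness, δ_s its steady value, and t_s the steady-state time scale. The modified Pr > 1 scaling developed in the study refines the Prandtl-number dependence of such estimates via the triple-layer integral approach.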
Abstract:
Over the past two decades, flat-plate particle collections have revealed the presence of a remarkable variety of both terrestrial and extraterrestrial material in the stratosphere [1-6]. The ratio of terrestrial to extraterrestrial material and the nature of the material collected may vary over observable time scales. Variations in particle number density can be important, since the earth's atmospheric radiation balance, and therefore the earth's climate, can be influenced by particulate absorption and scattering of radiation from the sun and earth [7-9]. In order to assess the number density of solid particles in the stratosphere, we have examined a representative fraction of the solid particles from two flat-plate collection surfaces whose collection dates are separated in time by 5 years.
Abstract:
Optimal Asset Maintenance decisions are imperative for efficient asset management. Decision Support Systems are often used to help asset managers make maintenance decisions, but high-quality decision support must be based on sound decision-making principles. For long-lived assets, a successful Asset Maintenance decision-making process must effectively handle multiple time scales. For example, high-level strategic plans are normally made for periods of years, while daily operational decisions may need to be made within a space of mere minutes. When making strategic decisions, one usually has the luxury of time to explore alternatives, whereas routine operational decisions must often be made with no time for contemplation. In this paper, we present an innovative, flexible decision-making process model which distinguishes meta-level decision making, i.e., deciding how to make decisions, from the information gathering and analysis steps required to make the decisions themselves. The new model can accommodate various decision types. Three industrial case studies are given to demonstrate its applicability.
Abstract:
Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource-demanding compared to streamflow data monitoring, for which a greater quantity of data is generally available. Consequently, available water quality data sets span only relatively short time scales, unlike water quantity data. This constrains the ability to take due consideration of the variability associated with pollutant processes and natural phenomena, which in turn gives rise to uncertainty in the modelling outcomes, as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. The assessment of model uncertainty is therefore an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches, namely ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression, to the estimation of uncertainty associated with pollutant build-up prediction using limited data sets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates; the stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the Bayesian approach, combined with the Monte Carlo simulation technique, provides a powerful tool which makes the best use of the available knowledge in the prediction and thereby presents a practical solution to counteract the limitations otherwise imposed on water quality modelling.
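To make the contrast between the regression approaches concrete, here is a minimal, self-contained Python sketch of ordinary versus weighted least squares on synthetic build-up-like data, followed by a simple Monte Carlo resampling of the observational noise in the spirit of the Bayesian-plus-Monte-Carlo approach; the model form, weights and noise levels are illustrative assumptions only, not those of the study.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(1.0, 10.0, 15)                          # e.g. antecedent dry days
    y = 1.5 + 2.0 * np.log(x) + rng.normal(0, 0.3, x.size)  # synthetic build-up data

    # Design matrix for a linearised build-up model y = a + b ln(x).
    X = np.column_stack([np.ones_like(x), np.log(x)])

    # Ordinary least squares: all observations weighted equally.
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Weighted least squares: down-weight observations assumed noisier,
    # by scaling rows with the square roots of hypothetical precision weights.
    sw = np.sqrt(1.0 / (0.1 + 0.05 * x))
    beta_wls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)

    # Monte Carlo propagation of observational uncertainty: refit under
    # resampled noise and report an uncertainty band for the slope.
    slopes = [np.linalg.lstsq(X, y + rng.normal(0, 0.3, y.size), rcond=None)[0][1]
              for _ in range(2000)]
    print(beta_ols, beta_wls, np.percentile(slopes, [2.5, 97.5]))

The point of the resampling step is the one made in the abstract: with fixed inputs, OLS returns a single estimate, whereas treating the variables as stochastic yields a distribution of plausible build-up parameters.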
Abstract:
Bundle adjustment is one of the essential components of the computer vision toolbox. This paper revisits the resection-intersection approach, which has previously been shown to have inferior convergence properties. Modifications are proposed that greatly improve the performance of this method, resulting in a fast and accurate approach. Firstly, a linear triangulation step is added to the intersection stage, yielding higher accuracy and improved convergence rate. Secondly, the effect of parameter updates is tracked in order to reduce wasteful computation; only variables coupled to significantly changing variables are updated. This leads to significant improvements in computation time, at the cost of a small, controllable increase in error. Loop closures are handled effectively without the need for additional network modelling. The proposed approach is shown experimentally to yield comparable accuracy to a full sparse bundle adjustment (20% error increase) while computation time scales much better with the number of variables. Experiments on a progressive reconstruction system show the proposed method to be more efficient by a factor of 65 to 177, and 4.5 times more accurate (increasing over time) than a localised sparse bundle adjustment approach.
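A minimal sketch of the kind of linear (DLT) triangulation step added to the intersection stage is shown below; the two cameras and the 3D point are synthetic, and this is a generic textbook formulation rather than the authors' exact implementation.

    import numpy as np

    def triangulate_dlt(Ps, xs):
        """Linear (DLT) triangulation of one 3D point from its 2D observations
        xs in cameras with 3x4 projection matrices Ps."""
        A = []
        for P, (u, v) in zip(Ps, xs):
            A.append(u * P[2] - P[0])   # each view contributes two linear equations
            A.append(v * P[2] - P[1])
        _, _, Vt = np.linalg.svd(np.asarray(A))
        X = Vt[-1]                      # null vector = homogeneous 3D point
        return X[:3] / X[3]             # dehomogenise

    # Self-check: two synthetic cameras observing a known point.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.3, -0.2, 5.0, 1.0])
    xs = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
    print(triangulate_dlt([P1, P2], xs))  # ~ [0.3, -0.2, 5.0]

In a resection-intersection scheme, a step like this re-estimates every point with all cameras held fixed, alternating with a resection step that refines each camera against the fixed points.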
Abstract:
Measuring Earth material behaviour on time scales of millions of years transcends our current capability in the laboratory. We review an alternative path that considers multiscale and multiphysics approaches with quantitative structure-property relationships. This approach provides a sound basis for incorporating physical principles such as chemistry, thermodynamics, diffusion and geometry-energy relations into simulations and data assimilation across the vast range of length and time scales encountered in the Earth. We identify key length scales for Earth systems processes and find a substantial scale separation between chemical, hydrous and thermal diffusion. We propose that this allows a simplified two-scale analysis in which the outputs from the micro-scale model are used as inputs for meso-scale simulations, which in turn become the micro-model for the next scale up. We present two fundamental theoretical approaches to link the scales: asymptotic homogenisation from a macroscopic thermodynamic view, and percolation renormalisation from a microscopic, statistical mechanics view.
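The scale separation noted above can be illustrated with the standard diffusion length L ~ √(Dt); the short Python sketch below uses order-of-magnitude diffusivities that are illustrative assumptions, not values from the review.

    import math

    t = 1e6 * 3.156e7                     # one million years in seconds
    diffusivities = {                     # m^2/s, order-of-magnitude assumptions
        "chemical (solid-state)": 1e-18,
        "hydraulic": 1e-8,
        "thermal": 1e-6,
    }
    for name, D in diffusivities.items():
        print(f"{name}: L ~ {math.sqrt(D * t):.2g} m")
    # spans millimetres to kilometres, hence the proposed two-scale analysis

With diffusivities this widely separated, the diffusion lengths differ by orders of magnitude, which is what makes it defensible to treat each scale's output as an effective input to the next scale up.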
Abstract:
Geoscientists are confronted with the challenge of assessing nonlinear phenomena that result from multiphysics coupling across multiple scales, from the quantum level to the scale of the Earth and from femtoseconds to the 4.5 Ga history of our planet. We neglect in this review electromagnetic modelling of the processes in the Earth's core, and focus on four types of couplings that underpin fundamental instabilities in the Earth. These are thermal (T), hydraulic (H), mechanical (M) and chemical (C) processes, which are driven and controlled by the transfer of heat to the Earth's surface. Instabilities appear as faults, folds, compaction bands, shear/fault zones, plate boundaries and convective patterns. Convective patterns emerge from buoyancy overcoming viscous drag at a critical Rayleigh number. All other processes emerge from non-conservative thermodynamic forces with a critical dissipative source term, which can be characterised by the modified Gruntfest number Gr. These dissipative processes reach a quasi-steady state when, at maximum dissipation, THMC diffusion (Fourier, Darcy, Biot, Fick) balances the source term. The emerging steady-state dissipative patterns are defined by the respective diffusion length scales. These length scales provide a fundamental thermodynamic yardstick for measuring instabilities in the Earth. The implementation of a fully coupled THMC multiscale theoretical framework into an applied workflow is still in its early stages. This is largely owing to the four fundamentally different lengths of the THMC diffusion yardsticks, spanning micrometres to tens of kilometres, compounded by the additional necessity of considering microstructure information in the formulation of enriched continua for THMC feedback simulations (i.e., a microstructure-enriched continuum formulation). Another challenge is to consider the important factor of time, which implies that the geomaterial is often very far from initial yield and is flowing on a time scale that cannot be accessed in the laboratory. This leads to the requirement of adopting a thermodynamic framework in conjunction with flow theories of plasticity. Unlike consistency plasticity, this framework allows the description of both solid mechanical and fluid dynamic instabilities. In the applications we show the similarity of THMC feedback patterns across scales, such as brittle and ductile folds and faults. A particularly interesting case is discussed in detail, in which ductile compaction bands appear out of the fluid dynamic solution; these are akin to, and can be confused with, their brittle siblings. The main difference is that they require the factor of time and much lower driving forces to emerge. These low-stress solutions cannot be obtained on short laboratory time scales, and they are therefore much more likely to appear in nature than in the laboratory. We finish with a multiscale description of a seminal structure in the Swiss Alps, the Glarus thrust, which puzzled geologists for more than 100 years. Along the Glarus thrust, a km-scale package of rocks (nappe) has been pushed 40 km over its footwall as a solid rock body. The thrust itself is a m-wide ductile shear zone, while the centre of the thrust shows a mm-cm wide central slip zone experiencing periodic extreme deformation akin to a stick-slip event. The m-wide creeping zone is consistent with the THM feedback length scale of solid mechanics, while the ultra-localised central slip zone is most likely a fluid dynamic instability.
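For reference, the convective threshold mentioned above is the classical Rayleigh criterion; in standard textbook notation (a generic definition, not one specific to this review),

    \mathrm{Ra} = \frac{g\,\alpha\,\Delta T\,L^{3}}{\nu\,\kappa}, \qquad \mathrm{Ra} > \mathrm{Ra}_c = O(10^{3}),

where g is gravity, α the thermal expansivity, ΔT the temperature contrast across a layer of thickness L, ν the kinematic viscosity and κ the thermal diffusivity; the other, dissipative instabilities are characterised analogously through the modified Gruntfest number Gr, as described above.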
Abstract:
Plasma plumes with exotically segmented channel structure and plasma bullet propagation are produced in atmospheric plasma jets. This is achieved by tailoring interruptions of a continuous DC power supply over time scales comparable to the lifetimes of residual electrons produced by the preceding discharge phase. These phenomena are explained by studying the plasma dynamics using nanosecond-precision imaging. One of the plumes is produced using 2-10 μs interruptions in the 8 kV DC voltage and features a still-bright channel from which a propagating bullet detaches. A shorter interruption of 900 ns produces a plume with an additional long conducting dark channel between the jet nozzle and the bright area. The bullet size, formation dynamics, and propagation speed and distance can be effectively controlled. This may lead to micrometer- and nanosecond-precision delivery of quantized plasma bits, warranted for next-generation health, materials, and device technologies.