27 results for GEOLOGICAL TIME-SCALES

in Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Publisher:

Abstract:

The concept of local accumulation time (LAT) was introduced by Berezhkovskii and coworkers in 2010–2011 to give a finite measure of the time required for the transient solution of a reaction–diffusion equation to approach the steady-state solution (Biophys J. 99, L59 (2010); Phys Rev E. 83, 051906 (2011)). Such a measure is referred to as a critical time. Here, we show that LAT is, in fact, identical to the concept of mean action time (MAT) that was first introduced by McNabb in 1991 (IMA J Appl Math. 47, 193 (1991)). Although McNabb’s initial argument was motivated by considering the mean particle lifetime (MPLT) for a linear death process, he applied the ideas to study diffusion. We extend the work of these authors by deriving expressions for the MAT for a general one-dimensional linear advection–diffusion–reaction problem. Using a combination of continuum and discrete approaches, we show that MAT and MPLT are equivalent for certain uniform-to-uniform transitions; these results provide a practical interpretation for MAT, by directly linking the stochastic microscopic processes to a meaningful macroscopic timescale. We find that for more general transitions, the equivalence between MAT and MPLT does not hold. Unlike other critical time definitions, we show that it is possible to evaluate the MAT without solving the underlying partial differential equation (PDE). This makes MAT a simple and attractive quantity for practical situations. Finally, our work explores the accuracy of certain approximations derived using the MAT, showing that useful approximations for nonlinear kinetic processes can be obtained, again without treating the governing PDE directly.
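
For readers unfamiliar with MAT, the block below sketches the standard construction in commonly used notation; the symbols are illustrative assumptions, not quoted from the paper itself.

```latex
% Sketch of the usual MAT construction, in assumed notation.
% C(x,t) evolves from the initial state C_0(x) to the steady state C_inf(x).
F(t;x) = 1 - \frac{C(x,t) - C_\infty(x)}{C_0(x) - C_\infty(x)},
\qquad f(t;x) = \frac{\partial F(t;x)}{\partial t}.
% F(t;x) increases from 0 to 1, so f acts as a probability density in t.
% The mean action time is its first moment; integrating by parts gives
T(x) = \int_0^\infty t\, f(t;x)\,\mathrm{d}t
     = \int_0^\infty \frac{C(x,t) - C_\infty(x)}{C_0(x) - C_\infty(x)}\,\mathrm{d}t.
% Integrating the governing PDE over t converts this into a boundary value
% problem for T(x), which is why no transient solution is required.
```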

Relevance:

100.00%

Publisher:

Abstract:

A major challenge in studying coupled groundwater and surface-water interactions arises from the considerable difference in the response time scales of groundwater and surface-water systems affected by external forcings. Although coupled models representing the interaction of groundwater and surface-water systems have been studied for over a century, most have focused on groundwater quantity or quality issues rather than response time. In this study, we present an analytical framework, based on the concept of mean action time (MAT), to estimate the time scale required for groundwater systems to respond to changes in surface-water conditions. MAT can be used to estimate the transient response time scale by analyzing the governing mathematical model. This framework does not require any form of transient solution (either numerical or analytical) to the governing equation, yet it provides a closed-form mathematical relationship for the response time as a function of the aquifer geometry, boundary conditions and flow parameters. Our analysis indicates that aquifer systems have three fundamental time scales: (i) a time scale that depends on the intrinsic properties of the aquifer; (ii) a time scale that depends on the intrinsic properties of the boundary condition; and (iii) a time scale that depends on the properties of the entire system. We discuss two practical scenarios where MAT estimates provide useful insights, and we test the MAT predictions using new laboratory-scale experimental data sets.
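
For intuition about time scale (i), the intrinsic aquifer response, a back-of-envelope diffusive estimate t ~ L²S/T is often used; the snippet below is a minimal sketch with assumed parameter values, not the closed-form MAT expressions derived in the study.

```python
# Order-of-magnitude response time for an idealized 1D aquifer.
# t ~ L^2 / D with hydraulic diffusivity D = T / S; all values are
# illustrative assumptions, not the paper's MAT results.

def diffusive_response_time(L, transmissivity, storativity):
    """Characteristic response time (s) of an aquifer of length L (m)."""
    D = transmissivity / storativity   # hydraulic diffusivity, m^2/s
    return L**2 / D

t = diffusive_response_time(L=500.0, transmissivity=5e-3, storativity=0.2)
print(f"response time ~ {t / 86400:.0f} days")
```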

Relevance:

90.00%

Publisher:

Abstract:

When complex projects go wrong they can go horribly wrong, with severe financial consequences. We are undertaking research to develop leading performance indicators for complex projects: metrics to provide early warning of potential difficulties. The assessment of success of complex projects can be made by a range of stakeholders over different time scales, against different levels of project results: the project’s outputs at the end of the project; the project’s outcomes in the months following project completion; and the project’s impact in the years following completion. We aim to identify leading performance indicators, which may include both success criteria and success factors, and which can be measured by the project team during project delivery to forecast success as assessed by key stakeholders in the days, months and years following the project. The hope is that the leading performance indicators will act as alarm bells showing when a project is deviating from plan, so that early corrective action can be taken. It may be that different combinations of the leading performance indicators will be appropriate depending on the nature of project complexity. In this paper we develop a new model of project success, whereby success is assessed by different stakeholders over different time frames against different levels of project results. We then relate this to measurements that can be taken during project delivery. A methodology is described to evaluate the early parts of this model, and its implications and limitations are described. This paper describes work in progress.

Relevance:

90.00%

Publisher:

Abstract:

Long-term changes in the genetic composition of a population occur by the fixation of new mutations, a process known as substitution. The rate at which mutations arise in a population and the rate at which they are fixed are expected to be equal under neutral conditions (Kimura, 1968). Between the appearance of a new mutation and its eventual fate of fixation or loss, there will be a period in which it exists as a transient polymorphism in the population (Kimura and Ohta, 1971). If the majority of mutations are deleterious (and nonlethal), the fixation probabilities of these transient polymorphisms are reduced and the mutation rate will exceed the substitution rate (Kimura, 1983). Consequently, different apparent rates may be observed on different time scales of the molecular evolutionary process (Penny, 2005; Penny and Holmes, 2001). The substitution rate of the mitochondrial protein-coding genes of birds and mammals has been traditionally recognized to be about 0.01 substitutions/site/million years (Myr) (Brown et al., 1979; Ho, 2007; Irwin et al., 1991; Shields and Wilson, 1987), with the noncoding D-loop evolving several times more quickly (e.g., Pesole et al., 1992; Quinn, 1992). Over the past decade, there has been mounting evidence that instantaneous mutation rates substantially exceed substitution rates, in a range of organisms (e.g., Denver et al., 2000; Howell et al., 2003; Lambert et al., 2002; Mao et al., 2006; Mumm et al., 1997; Parsons et al., 1997; Santos et al., 2005). The immediate reaction to the first of these findings was that the polymorphisms generated by the elevated mutation rate are short-lived, perhaps extending back only a few hundred years (Gibbons, 1998; Macaulay et al., 1997). That is, purifying selection was thought to remove these polymorphisms very rapidly.
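
The neutral expectation referenced here follows from a simple balance: roughly 2Nμ new mutations enter a diploid population each generation and each fixes with probability 1/(2N), so the substitution rate equals μ. A minimal Wright–Fisher check of the fixation probability (population size and trial count are arbitrary illustrative choices):

```python
import random

# Wright-Fisher check of the neutral fixation probability 1/(2N), the
# identity behind "mutation rate = substitution rate" under neutrality.
# N and TRIALS are illustrative.

N, TRIALS = 50, 20000

def fixes(N):
    """Track one new neutral mutation (1 copy among 2N) to fixation or loss."""
    n, total = 1, 2 * N
    while 0 < n < total:
        p = n / total
        n = sum(random.random() < p for _ in range(total))  # binomial draw
    return n == total

fixed = sum(fixes(N) for _ in range(TRIALS))
print(f"observed fixation probability: {fixed / TRIALS:.5f}")
print(f"neutral expectation 1/(2N):    {1 / (2 * N):.5f}")
```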

Relevance:

90.00%

Publisher:

Abstract:

This paper discusses findings from a study of energy-use feedback in the home (eco-feedback) long after its novelty has worn off. Addressing four important knowledge gaps in the research, we explore eco-feedback over longer time scales, focusing on instances where the feedback was not of lasting benefit to users rather than on instances where it was. Drawing on 23 semi-structured interviews with Australian householders, we found that an initially high level of engagement gave way over time to disinterest, neglect and, in certain cases, technical malfunction. Additionally, preconceptions about the “purpose” of the feedback were found to affect use. We propose expanding the scope of enquiry for eco-feedback in several ways, and describe how eco-feedback that better supports decision-making in the “maintenance phase”, i.e. once the initial novelty has worn off, may be key to longer-term engagement.

Relevance:

90.00%

Publisher:

Abstract:

Deep geothermal energy from the hot crystalline basement has remained an unsolved frontier for the geothermal industry for the past 30 years. This poses the challenge of developing a new unconventional geomechanics approach to stimulate such reservoirs. While a number of new unconventional brittle techniques are still available to improve stimulation on short time scales, the astonishing richness of failure modes on longer time scales in hot rocks has so far been overlooked. These failure modes represent a series of microscopic processes: brittle microfracturing prevails at low temperatures and fairly high deviatoric stresses, while with increasing temperature, decreasing applied stress or longer time scales, the failure modes switch to transgranular and intergranular creep fractures. Accordingly, fluids play an active role and create their own pathways by facilitating shear localization through a process of time-dependent dissolution and precipitation creep, rather than being a passive constituent that simply follows brittle fractures generated inside a shear zone by other localization mechanisms. We lay out a new theoretical approach for the design of strategies to utilize, enhance and maintain the natural permeability in the deeper and hotter domain of geothermal reservoirs. The advantage of the approach is that, rather than engineering an entirely new EGS reservoir, we acknowledge a suite of creep-assisted geological processes that are driven by the current tectonic stress field. Such processes are particularly supported by higher temperatures, potentially allowing future projects to target commercially viable combinations of temperatures and flow rates.

Relevance:

90.00%

Publisher:

Abstract:

The technique of photo-CELIV (charge extraction by linearly increasing voltage) is one of the more straightforward and popular approaches to measuring the faster carrier mobility in measurement geometries that are relevant for operational solar cells and other optoelectronic devices. It has been used to demonstrate a time-dependent photocarrier mobility in pristine polymers, attributed to energetic relaxation within the density of states. Conversely, in solar cell blends, the presence or absence of such energetic relaxation on transport time scales remains under debate. We developed a complete numerical model and performed photo-CELIV experiments on the model high-efficiency organic solar cell blend poly[3,6-dithiophene-2-yl-2,5-di(2-octyldodecyl)-pyrrolo[3,4-c]pyrrole-1,4-dione-alt-naphthalene] (PDPP-TNT):[6,6]-phenyl-C71-butyric-acid-methyl-ester (PC70BM). In the studied solar cells, a constant, time-independent mobility was observed on the time scales relevant to charge extraction, with thermalisation of photocarriers occurring on time scales much shorter than the transit time. Therefore, photocarrier relaxation effects are insignificant for charge transport in these efficient photovoltaic devices.
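
For context, the mobility in a photo-CELIV experiment is commonly estimated from the time t_max at which the extraction current transient peaks, using the widely cited expression of Juška and co-workers; the sketch below applies that standard formula with illustrative numbers, and is not the authors' full numerical model.

```python
# Standard photo-CELIV mobility estimate (Juska et al.); values illustrative.

def celiv_mobility(d, ramp_rate, t_max, dj_over_j0):
    """Mobility (m^2/Vs) from the extraction-current peak time t_max (s).

    d           -- active layer thickness (m)
    ramp_rate   -- voltage ramp rate A (V/s)
    dj_over_j0  -- peak extraction current over displacement current step
    """
    return 2 * d**2 / (3 * ramp_rate * t_max**2 * (1 + 0.36 * dj_over_j0))

# Hypothetical organic solar cell: 100 nm film, 1e5 V/s ramp, 2 us peak.
mu = celiv_mobility(d=100e-9, ramp_rate=1e5, t_max=2e-6, dj_over_j0=1.0)
print(f"mobility ~ {mu * 1e4:.2e} cm^2/Vs")   # ~1e-4 cm^2/Vs, a typical value
```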

Relevance:

90.00%

Publisher:

Abstract:

In estuaries and natural water channels, the estimation of velocity and dispersion coefficients is critical to the knowledge of scalar transport and mixing. Such estimates are rarely available experimentally at sub-tidal time scales in shallow water channels, where high-frequency sampling is required to capture their spatio-temporal variation. This study estimates Lagrangian integral scales and autocorrelation curves, which are key parameters for obtaining velocity fluctuations and dispersion coefficients, and their spatio-temporal variability, from deployments of Lagrangian drifters sampled at 10 Hz over a 4-hour period. The power spectral densities of the velocities between 0.0001 and 0.8 Hz were well fitted by the −5/3 slope predicted by Kolmogorov’s similarity hypothesis within the inertial subrange, and were similar to the Eulerian power spectral densities previously observed within the estuary. The results showed that large velocity fluctuations determine the magnitude of the integral time scale, TL. Overlapping of short segments improved the stability of the estimate of TL by taking advantage of the redundant data included in the autocorrelation function. The integral time scales were about 20 s and varied by up to a factor of 8. These results are essential inputs for spatial binning of velocities, Lagrangian stochastic modelling and single-particle analysis of the tidal estuary.
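
For reference, the integral time scale reported here is the time integral of the Lagrangian velocity autocorrelation function; the sketch below estimates it from a synthetic 10 Hz velocity record (an AR(1) surrogate standing in for real drifter data, with all parameter values illustrative):

```python
import numpy as np

# Estimate the Lagrangian integral time scale T_L = integral of the
# velocity autocorrelation R(tau), truncated at its first zero crossing.
# The AR(1) series below stands in for a 10 Hz drifter velocity record.

def integral_time_scale(u, dt, max_lag):
    u = u - u.mean()                     # velocity fluctuations
    n = len(u)
    R = np.array([np.mean(u[:n - k] * u[k:]) for k in range(max_lag)])
    R /= R[0]                            # normalize so R(0) = 1
    crossing = np.argmax(R <= 0)         # first zero crossing (0 if none)
    stop = crossing if crossing > 0 else max_lag
    return np.trapz(R[:stop], dx=dt)

rng = np.random.default_rng(1)
dt, T_true = 0.1, 20.0                   # 10 Hz sampling, ~20 s target scale
a = np.exp(-dt / T_true)                 # AR(1) coefficient with ~20 s memory
u = np.empty(144000)                     # 4 hours of samples
u[0] = 0.0
for i in range(1, len(u)):
    u[i] = a * u[i - 1] + rng.standard_normal()
print(f"T_L ~ {integral_time_scale(u, dt, max_lag=2000):.1f} s")
```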

Relevance:

80.00%

Publisher:

Abstract:

There has been growing interest in smart grids and in the possibility of significantly enhanced performance from remote measurements and intelligent controls. For transmission, the use of PMU signals from remote sites and direct load-shedding controls can give significant enhancement for large system disturbances, compared with relying on local measurements and linear controls. This lecture will emphasize what can be learned from remote measurements and the mechanisms for achieving a smarter response to major disturbances. For distribution systems, there is a significant history in the area of distribution reconfiguration automation. This lecture will emphasize the incorporation of Distributed Generation into distribution networks and its impact on voltage/frequency control and protection. Overall, the performance of both transmission and distribution will be affected by demand-side management and the capabilities built into the system. In particular, we consider different time scales of load communication and response and examine the benefits for the system, energy use and lines.

Relevance:

80.00%

Publisher:

Abstract:

Recent measurements of particle size distributions and particle concentrations near a busy road cannot be explained by the conventional mechanisms for the evolution of combustion aerosols. Specifically, these mechanisms appear inadequate to explain the experimental observations of particle transformation and of the evolution of the total number concentration. This led to the development of a new mechanism for the evolution of combustion aerosol nano-particles, based on their thermal fragmentation. A complex and comprehensive pattern of evolution of combustion aerosols, involving particle fragmentation, was then proposed and justified. In that model it was suggested that thermal fragmentation occurs in aggregates of primary particles, each of which contains a solid graphite/carbon core surrounded by volatile molecules bonded to the core by strong covalent bonds. Due to the presence of strong covalent bonds between the core and the volatile (frill) molecules, such primary composite particles can be regarded as solid, despite the presence of a significant (possibly dominant) volatile component. Fragmentation occurs when the weak van der Waals forces between such primary particles are overcome by their thermal (Brownian) motion. In this work, the accepted concept of thermal fragmentation is advanced to determine whether fragmentation is likely in liquid composite nano-particles. It has been demonstrated that, at least at some stages of evolution, combustion aerosols contain a large number of composite liquid particles, presumably containing several components such as water, oil, volatile compounds and minerals. It is possible that such composite liquid particles may also experience thermal fragmentation and thus contribute to, for example, the evolution of the total number concentration as a function of distance from the source. Therefore, the aim of this project is to examine theoretically the possibility of thermal fragmentation of composite liquid nano-particles consisting of immiscible liquid components. The specific focus is on ternary systems which include two immiscible liquid droplets surrounded by another medium (e.g., air). The analysis shows that three different structures are possible: complete encapsulation of one liquid by the other, partial encapsulation of the two liquids in a composite particle, and the two droplets separated from each other. The probability of thermal fragmentation of two coagulated liquid droplets is examined for different volumes of the immiscible fluids in a composite liquid particle, and for different surface and interfacial tensions, by determining the Gibbs free energy difference between the coagulated and fragmented states and comparing this energy difference with the typical thermal energy kT. The analysis reveals that fragmentation is much more likely for a partially encapsulated particle than for a completely encapsulated particle. In particular, thermal fragmentation is much more likely when the volumes of the two liquid droplets that constitute the composite particle are very different. Conversely, when the two liquid droplets are of similar volumes, the probability of thermal fragmentation is small. It is also demonstrated that the Gibbs free energy difference between the coagulated and fragmented states is not the only important factor determining the probability of thermal fragmentation of composite liquid particles. The second essential factor is the actual structure of the composite particle. It is shown that the probability of thermal fragmentation also depends strongly on the distance that each of the liquid droplets must travel to reach the fragmented state. In particular, if this distance is larger than the mean free path for the considered droplets in the air, the probability of thermal fragmentation should be negligible. It follows that fragmentation of a composite particle in the state with complete encapsulation is highly unlikely, because of the larger distance that the two droplets must travel in order to separate. The analysis of composite liquid particles with the interfacial parameters expected in combustion aerosols demonstrates that thermal fragmentation of these particles may occur, and this mechanism may play a role in the evolution of combustion aerosols. Conditions for thermal fragmentation to play a significant role (for aerosol particles other than those from motor vehicle exhaust) are determined and examined theoretically. Conditions for spontaneous transformation between the states of composite particles with complete and partial encapsulation are also examined, demonstrating the possibility of such transformation in combustion aerosols. Indeed, it is shown that for some typical components found in aerosols, this transformation could take place on time scales of less than 20 s. The analysis shows that factors influencing surface and interfacial tension play an important role in this transformation process. It is suggested that such transformation may, for example, result in a delayed evaporation of composite particles with a significant water component, leading to observable effects in the evolution of combustion aerosols (including possible local humidity maxima near a source, such as a busy road). The obtained results will be important for further development and understanding of aerosol physics and technologies, including combustion aerosols and their evolution near a source.
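
To make the ΔG-versus-kT comparison concrete, the toy calculation below compares the surface energy of a fully encapsulated composite droplet with that of two separate droplets; the geometry, volumes and tensions are illustrative assumptions, far simpler than the full ternary analysis described above.

```python
import math

# Toy surface-energy comparison for a two-droplet composite particle:
# fully encapsulated state (liquid 1 inside liquid 2) versus the
# fragmented state (two free droplets). All values are illustrative.

kB = 1.380649e-23                                  # J/K

def radius(V):
    return (3 * V / (4 * math.pi)) ** (1 / 3)

def area(r):
    return 4 * math.pi * r ** 2

def delta_G(V1, V2, s1, s2, s12):
    """Gibbs free energy of fragmentation (J) in the toy geometry."""
    r1, r2, R = radius(V1), radius(V2), radius(V1 + V2)
    G_coag = s12 * area(r1) + s2 * area(R)         # inner interface + shell
    G_frag = s1 * area(r1) + s2 * area(r2)         # two separate spheres
    return G_frag - G_coag

V = (4 / 3) * math.pi * (5e-9) ** 3                # two 5 nm-radius droplets
dG = delta_G(V, V, s1=0.072, s2=0.025, s12=0.050)  # tensions in N/m, assumed
print(f"dG = {dG / (kB * 300.0):.0f} kT at 300 K")
```

With these assumed water/oil-like tensions, ΔG comes out at several hundred kT, consistent with the finding above that fragmentation from the completely encapsulated state is very unlikely; the partially encapsulated geometry involves a smaller rearrangement and is where fragmentation becomes thermally accessible.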

Relevance:

80.00%

Publisher:

Abstract:

This manuscript took a 'top down' approach to understanding survival of inhabitant cells in the ecosystem bone, working from higher to lower length and time scales through the hierarchical ecosystem of bone. Our working hypothesis is that nature “engineered” the skeleton using a 'bottom up' approach, where mechanical properties of cells emerge from their adaptation to their local mechanical milieu. Cell aggregation and formation of higher order anisotropic structure results in emergent architectures through cell differentiation and extracellular matrix secretion. These emergent properties, including mechanical properties and architecture, result in mechanical adaptation at length scales and longer time scales which are most relevant for the survival of the vertebrate organism [Knothe Tate and von Recum 2009]. We are currently using insights from this approach to harness nature’s regeneration potential and to engineer novel mechanoactive materials [Knothe Tate et al. 2007, Knothe Tate et al. 2009]. In addition to potential applications of these exciting insights, these studies may provide important clues to the evolution and development of vertebrate animals. For instance, one might ask why mesenchymal stem cells condense at all. There is a putative advantage to self-assembly and cooperation, but this advantage is somewhat outweighed by the need for infrastructural complexity (e.g., circulatory systems comprised of specific differentiated cell types which in turn form conduits and pumps to overcome limitations of mass transport via diffusion; diffusion is untenable for multicellular organisms larger than 250 microns in diameter). A better question might be: why do cells build skeletal tissue? Once cooperating cells in tissues begin to deplete local sources of food in their aquatic environment, those that have evolved a means to locomote likely have an evolutionary advantage. Once the environment becomes less aquarian and more terrestrial, self-assembled organisms with the ability to move on land might have conferred evolutionary advantages as well. So did the cytoskeleton evolve several length scales, enabling the emergence of skeletal architecture for vertebrate animals? Did the evolutionary advantage of motility over noncompliant terrestrial substrates (walking on land) favor adaptations including the emergence of intracellular architecture (changes in the cytoskeleton and upregulation of structural protein manufacture), intercellular condensation, mineralization of tissues, and the emergence of higher order architectures? How far does evolutionary Darwinism extend, and how can we exploit this knowledge to engineer smart materials and architectures on Earth and in new, exploratory environments? [Knothe Tate et al. 2008]. We are limited only by our ability to imagine. Ultimately, we aim to understand nature, mimic nature, guide nature and/or exploit nature’s engineering paradigms without engineering ourselves out of existence.

Relevance:

80.00%

Publisher:

Abstract:

Large igneous provinces (LIPs) are sites of the most frequently recurring, largest volume basaltic and silicic eruptions in Earth history. These large-volume (>1000 km³ dense rock equivalent) and large-magnitude (>M8) eruptions produce areally extensive (10⁴–10⁵ km²) basaltic lava flow fields and silicic ignimbrites that are the main building blocks of LIPs. Available information on the largest eruptive units comes primarily from the Columbia River and Deccan provinces for the dimensions of flood basalt eruptions, and the Paraná–Etendeka and Afro-Arabian provinces for the silicic ignimbrite eruptions. In addition, three large-volume (675–2000 km³) silicic lava flows have also been mapped out in the Proterozoic Gawler Range province (Australia), an interpreted LIP remnant. Magma volumes of >1000 km³ have also been emplaced as high-level basaltic and rhyolitic sills in LIPs. The data sets indicate comparable eruption magnitudes between the basaltic and silicic eruptions, but because considerable volumes reside as co-ignimbrite ash deposits, the current volume constraints for the silicic ignimbrite eruptions may be considerably underestimated. Magma composition thus appears to be no barrier to the volume of magma emitted during an individual eruption. Despite this general similarity in magnitude, flood basaltic and silicic eruptions are very different in terms of eruption style, duration, intensity, vent configuration, and emplacement style. Flood basaltic eruptions are dominantly effusive and Hawaiian–Strombolian in style, with magma discharge rates of ~10⁶–10⁸ kg s⁻¹ and eruption durations estimated at years to tens of years, and they emplace dominantly compound pahoehoe lava flow fields. Effusive and fissural eruptions have also emplaced some large-volume silicic lavas, but discharge rates are unknown, and may be up to an order of magnitude greater than those of flood basalt lava eruptions for emplacement to be on realistic time scales (<10 years). Most silicic eruptions, however, are moderately to highly explosive, producing co-current pyroclastic fountains (rarely Plinian) with discharge rates of 10⁹–10¹¹ kg s⁻¹ that emplace welded to rheomorphic ignimbrites. At present, durations for the large-magnitude silicic eruptions are unconstrained; at discharge rates of 10⁹ kg s⁻¹, equivalent to the peak of the 1991 Mt Pinatubo eruption, the largest silicic eruptions would take many months to evacuate >5000 km³ of magma. The generally simple deposit structure is more suggestive of short-duration (hours to days) and high-intensity (~10¹¹ kg s⁻¹) eruptions, perhaps with hiatuses in some cases. These extreme discharge rates would be facilitated by multiple point, fissure and/or ring fracture venting of magma. Eruption frequencies are much elevated for large-magnitude eruptions of both magma types during LIP-forming episodes. However, in basalt-dominated provinces (continental and ocean basin flood basalt provinces, oceanic plateaus, volcanic rifted margins), large-magnitude (>M8) basaltic eruptions have much shorter recurrence intervals of 10³–10⁴ years, whereas similar magnitude silicic eruptions may have recurrence intervals of up to 10⁵ years. The Paraná–Etendeka province was the site of at least nine >M8 silicic eruptions over an ~1 Myr period at ~132 Ma; a similar eruption frequency, although with fewer silicic eruptions, is also observed for the Afro-Arabian Province.
The huge volumes of basaltic and silicic magma erupted in quick succession during LIP events raise several unresolved issues in terms of the locus of magma generation and storage (if any) in the crust prior to eruption, and the paths and rates of ascent from magma reservoirs to the surface.
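
A back-of-envelope check of the duration claims above: converting >5000 km³ of magma to mass at an assumed dense-rock density and dividing by the quoted discharge rates reproduces the "many months" versus "hours to days" contrast.

```python
# Back-of-envelope eruption durations: mass erupted / discharge rate.
# The dense-rock density is an assumed round figure.

RHO = 2500.0                    # magma density, kg/m^3 (assumption)
volume_m3 = 5000 * 1e9          # 5000 km^3 expressed in m^3
mass = RHO * volume_m3          # ~1.25e16 kg

for rate in (1e9, 1e11):        # kg/s, the two rates quoted above
    days = mass / rate / 86400
    print(f"{rate:.0e} kg/s -> {days:.1f} days")
# 1e9 kg/s  -> ~145 days ("many months")
# 1e11 kg/s -> ~1.4 days ("hours to days")
```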

Relevance:

80.00%

Publisher:

Abstract:

The naturally low stream salinity in the Nebine-Mungallala Catchment, the extent of vegetation retention, the relatively low rainfall and the high evaporation indicate that there is a relatively low risk of rising shallow groundwater tables in the catchment. Scalding caused by wind and water erosion exposing highly saline sub-soils is a more important regional issue, such as in the Homeboin area. Local salinisation associated with evaporation of bore water from free-flowing bore drains and bores is also an important land degradation issue, particularly in the lower Nebine, Wallam and Mungallala Creeks. The replacement of free-flowing artesian bores and bore drains with capped bores and piped water systems under the Great Artesian Basin bore rehabilitation program is addressing local salinisation and scalding in the vicinity of bore drains and preventing the discharge of saline bore water to streams. Three principles for the prevention and control of salinity in the Nebine-Mungallala catchment have been identified in this review:
• Avoid salinity through avoiding scalds, i.e. not exposing the near-surface salt in the landscape through land degradation;
• Riparian zone management: scalding often occurs within 200 m or so of watering lines. Natural drainage lines are the most likely to be overstocked, and thus have potential for scalding. Scalding begins when vegetation is removed; without that binding cover, wind and water erosion exposes the subsoil; and
• Monitoring of exposed or grazed soil areas.
Based on the findings of the study, we make the following recommendations:
1. Undertake a geotechnical study of existing maps and other data to help identify and target areas most at risk of rising water tables causing salinity. Selected monitoring should then be established using piezometers as an early warning system.
2. SW NRM should financially support scald reclamation activity through its various funding programs. However, for this to have any validity in the overall management of salinity risk, it is critical that such funding require the landholder to undertake a salinity hazard/risk assessment of his/her holding.
3. A staged approach to funding may be appropriate. In the first instance, it would be reasonable to commence funding some pilot scald reclamation work with a view to further developing and piloting the farm hazard/risk assessment tools, and exploring how subsequent grazing management strategies could be incorporated within other extension and management activities. Once the details of the necessary farm-level activities have been more clearly defined, and following the outcomes of the geotechnical review recommended above, a more comprehensive funding package could be rolled out to priority areas.
4. We recommend that the best-practice grazing management training currently on offer be enhanced with information about salinity risk in scald-prone areas, and ways of minimising the likelihood of scald formation.
5. We recommend that course material be developed for local students in Years 6 and 7, and that arrangements be made with local schools to present this information. Given the constraints of existing syllabi, we envisage that negotiations may have to be undertaken with the Department of Education in order for this material to be permitted to be used. We have contact with key people who could help with this if required.
6. We recommend that SW NRM continue to support existing extension activities such as Grazing Land Management and the Monitoring Made Easy tools. These aids should be easy to expand to incorporate techniques for monitoring, addressing and preventing salinity and scalding. At the time of writing, staff of SW NRM were actively involved in this process. It is important that these activities are adequately resourced, so that landholders come to see salinity as an issue that needs to be addressed as part of everyday management.
7. We recommend that SW NRM consider investing in the development and deployment of a scenario-modelling learning support tool as part of the awareness-raising and education activities. Secondary salinity is a dynamic process that results from ongoing human activity which mobilises and/or exposes salt occurring naturally in the landscape. Time scales can be short to very long, and the benefits of management actions can similarly have immediate or very long time frames. One way to help explain the dynamics of these processes is through scenario modelling.

Relevance:

80.00%

Publisher:

Abstract:

Recently the application of the quasi-steady-state approximation (QSSA) to the stochastic simulation algorithm (SSA) was suggested for the purpose of speeding up stochastic simulations of chemical systems that involve both relatively fast and slow chemical reactions [Rao and Arkin, J. Chem. Phys. 118, 4999 (2003)] and further work has led to the nested and slow-scale SSA. Improved numerical efficiency is obtained by respecting the vastly different time scales characterizing the system and then by advancing only the slow reactions exactly, based on a suitable approximation to the fast reactions. We considerably extend these works by applying the QSSA to numerical methods for the direct solution of the chemical master equation (CME) and, in particular, to the finite state projection algorithm [Munsky and Khammash, J. Chem. Phys. 124, 044104 (2006)], in conjunction with Krylov methods. In addition, we point out some important connections to the literature on the (deterministic) total QSSA (tQSSA) and place the stochastic analogue of the QSSA within the more general framework of aggregation of Markov processes. We demonstrate the new methods on four examples: Michaelis–Menten enzyme kinetics, double phosphorylation, the Goldbeter–Koshland switch, and the mitogen activated protein kinase cascade. Overall, we report dramatic improvements by applying the tQSSA to the CME solver.
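
As a concrete illustration of the QSSA idea the abstract builds on (a sketch only, not the paper's CME/Krylov machinery): in Michaelis–Menten kinetics the fast enzyme–substrate binding can be collapsed into an effective propensity, so the SSA simulates only the slow conversion events.

```python
import math, random

# Gillespie SSA for Michaelis-Menten kinetics with the QSSA applied: the
# fast E + S <-> ES equilibration is folded into the effective propensity
# a(S) = k2 * E_total * S / (K_M + S), so only slow S -> P conversion
# events are simulated. Rate constants and copy numbers are illustrative.

k2, E_total, K_M = 0.1, 20, 300.0   # s^-1, enzyme copies, K_M in molecules
S, P, t = 1000, 0, 0.0

while S > 0:
    a = k2 * E_total * S / (K_M + S)            # QSSA-reduced propensity
    t += -math.log(1.0 - random.random()) / a   # exponential waiting time
    S, P = S - 1, P + 1                         # one conversion event fires
print(f"all {P} substrate molecules converted by t = {t:.1f} s")
```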

Relevance:

80.00%

Publisher:

Abstract:

The opening phrase of the title is from Charles Darwin’s notebooks (Schweber 1977). It is a double reminder: firstly, that mainstream evolutionary theory is not just about describing nature but is particularly looking for mechanisms or ‘causes’; and secondly, that there will usually be several causes affecting any particular outcome. The second part of the title reflects our concern at the almost universal rejection of the idea that biological mechanisms are sufficient for macroevolutionary changes, thus rejecting a cornerstone of Darwinian evolutionary theory. Our primary aim here is to consider ways of making it easier to develop and to test hypotheses about evolution. Formalizing hypotheses can help generate tests. In an absolute sense, some of the discussion by scientists about evolution is little better than the lack of reasoning used by those advocating intelligent design. Our discussion here is in a Popperian framework, where science is defined as that area of study where it is possible, in principle, to find evidence against hypotheses – they are in principle falsifiable. However, with time, the boundaries of science keep expanding. In the past, some aspects of evolution were outside the current boundaries of falsifiable science, but increasingly new techniques and ideas are expanding the boundaries of science and it is appropriate to re-examine some topics. It often appears that over the last few decades there has been an increasingly strong assumption to look first (and only) for a physical cause. This decision is virtually never formally discussed; an assumption is simply made that some physical factor ‘drives’ evolution. It is necessary to examine our assumptions much more carefully: what is meant by physical factors ‘driving’ evolution, or by an ‘explosive radiation’? Our discussion focuses on two of the six mass extinctions, the fifth being the events in the Late Cretaceous, and the sixth starting at least 50,000 years ago (and ongoing).
Cretaceous/Tertiary boundary: the rise of birds and mammals. We have had a long-term interest (Cooper and Penny 1997) in designing tests to help evaluate whether the processes of microevolution are sufficient to explain macroevolution. The real challenge is to formulate hypotheses in a testable way. For example, the number of lineages of birds and mammals that survived from the Cretaceous to the present is one test. Our first estimate was 22 for birds, and current work is tending to increase this value. This still does not consider lineages that survived into the Tertiary and then went extinct later. Our initial suggestion was probably too narrow in that it lumped four models from Penny and Phillips (2004) into one model. This reduction is too simplistic in that we need to know about survival and ecological and morphological divergences during the Late Cretaceous, and whether crown groups of avian or mammalian orders may have existed back into the Cretaceous. More recently (Penny and Phillips 2004) we have formalized hypotheses about dinosaurs and pterosaurs, with the prediction that interactions between mammals (and ground-feeding birds) and dinosaurs would be most likely to affect the smallest dinosaurs, and similarly that interactions between birds and pterosaurs would particularly affect the smaller pterosaurs. There is now evidence for both classes of interactions, with the smallest dinosaurs and pterosaurs declining first, as predicted. Thus, testable models are now possible.
Mass extinction number six: human impacts. On a broad scale, there is a good correlation between the time of human arrival and increased extinctions (Hurles et al. 2003; Martin 2005; Figure 1). However, it is necessary to distinguish different time scales (Penny 2005), and on a finer scale there are still large numbers of possibilities. In Hurles et al. (2003) we mentioned habitat modification (including the use of fire) and introduced plants and animals (including kiore), in addition to direct predation (the ‘overkill’ hypothesis). We need also to consider the prey switching that occurs in early human societies, as evidenced by the results of Wragg (1995) on the middens of different ages on Henderson Island in the Pitcairn group. In addition, the presence of human-wary or human-adapted animals will affect the distribution in the subfossil record. A better understanding of human impacts world-wide, in conjunction with pre-scientific knowledge, will make it easier to discuss the issues by removing ‘blame’. While continued spontaneous generation was accepted universally, there was the expectation that animals continued to reappear. New Zealand is one of the very best locations in the world to study many of these issues: apart from the marine fossil record, some human impact events are extremely recent and the remains are less disrupted by time.