27 results for Time Scale
at Queensland University of Technology - ePrints Archive
Abstract:
Background: Malaria is a major public health burden in the tropics, with the potential to increase significantly in response to climate change. Analyses of data from the recent past can elucidate how short-term variations in weather factors affect malaria transmission. This study explored the impact of climate variability on the transmission of malaria in the tropical rain forest area of Mengla County, south-west China. Methods: Ecological time-series analysis was performed on data collected between 1971 and 1999. Auto-regressive integrated moving average (ARIMA) models were used to evaluate the relationship between weather factors and malaria incidence. Results: At the monthly time scale, the predictors of malaria incidence included minimum temperature, maximum temperature, and fog day frequency. The effect of minimum temperature on malaria incidence was greater in the cool months than in the hot months. Fog day frequency in October had a positive effect on malaria incidence in May of the following year. At the annual time scale, annual fog day frequency was the only weather predictor of the annual incidence of malaria. Conclusion: Fog day frequency was found, for the first time, to be a predictor of malaria incidence in a rain forest area. The one-year delayed effect of fog on malaria transmission may reflect the provision of water input that maintains aquatic breeding sites for mosquitoes at vulnerable times, when there is little rainfall during the six-month dry season. These findings should be considered in the prediction of future patterns of malaria in similar tropical rain forest areas worldwide.
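The modelling approach described in this abstract (ARIMA with weather covariates) can be sketched in a few lines; the file name, column names, lag and model order below are illustrative assumptions rather than the study's actual specification.

```python
# Minimal sketch of an ARIMA model with exogenous weather regressors (ARIMAX),
# in the spirit of the analysis described above. File name, column names,
# lag choice and model order are illustrative assumptions only.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly data: malaria incidence plus weather predictors.
df = pd.read_csv("mengla_monthly.csv", parse_dates=["month"], index_col="month")

# Candidate predictors; the abstract highlights minimum/maximum temperature
# and fog day frequency (the latter acting with a delay of several months).
exog = pd.DataFrame({
    "t_min": df["t_min"],
    "t_max": df["t_max"],
    "fog_days_lag7": df["fog_days"].shift(7),  # e.g. October fog -> May incidence
}).dropna()
endog = df.loc[exog.index, "incidence"]

# ARIMA(1,0,1) with exogenous regressors; in practice the order would be
# chosen via ACF/PACF inspection and information criteria.
model = ARIMA(endog, exog=exog, order=(1, 0, 1))
result = model.fit()
print(result.summary())
```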
Abstract:
Time series regression models were used to examine the influence of environmental factors (soil water content and soil temperature) on the emissions of nitrous oxide (N2O) from subtropical soils, by taking into account temporally lagged environmental factors, autoregressive processes, and seasonality for three horticultural crops in a subtropical region of Australia. Fluxes of N2O, soil water content, and soil temperature were determined simultaneously on a weekly basis over a 12-month period in South East Queensland. Annual N2O emissions for soils under mango, pineapple, and custard apple were 1590, 1156, and 2038 g N2O-N/ha, respectively, with most emissions attributed to nitrification. The N2O-N emitted from the pineapple and custard apple crops was equivalent to 0.26 and 2.22%, respectively, of the applied mineral N. The change in soil water content was the key variable for describing N2O emissions at the weekly time-scale, with soil temperature at a lag of 1 month having a significant influence on average N2O emissions at the monthly time-scale across the three crops. After accounting for soil temperature and soil water content, the weekly and monthly time series regression models exhibited significant autocorrelation at lags of 1-2 weeks and 1-2 months, respectively, and significant seasonality for weekly N2O emissions for the mango crop and for monthly N2O emissions for the mango and custard apple crops in this location over this time-frame. Time series regression models can explain a higher percentage of the temporal variation in N2O emissions than simple regression models using soil temperature and soil water content as drivers. Taking into account seasonal variability and temporal persistence in N2O emissions associated with soil water content and soil temperature may reduce the uncertainty surrounding estimates of N2O emissions based on limited sampling effort.
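A rough sketch of a time series regression of the kind described above, with a lagged temperature term, harmonic seasonality and autoregressive errors, is given below; the data file, variable names, lag lengths and AR order are illustrative assumptions only.

```python
# Minimal sketch of a time series regression for weekly N2O fluxes with a
# lagged soil temperature term, harmonic seasonality, and autoregressive
# errors. Variable names, lags and the data file are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("n2o_weekly.csv", parse_dates=["week"], index_col="week")

week_of_year = df.index.isocalendar().week.astype(float)
X = pd.DataFrame({
    "d_soil_water": df["soil_water"].diff(),       # change in soil water content
    "soil_temp_lag4": df["soil_temp"].shift(4),    # roughly a 1-month lag at weekly sampling
    "sin_season": np.sin(2 * np.pi * week_of_year / 52.0),
    "cos_season": np.cos(2 * np.pi * week_of_year / 52.0),
}, index=df.index).dropna()
y = df.loc[X.index, "n2o_flux"]

# GLSAR fits the regression with autoregressive errors (AR(2) here, echoing
# the 1-2 week autocorrelation reported in the abstract).
model = sm.GLSAR(y, sm.add_constant(X), rho=2)
result = model.iterative_fit(maxiter=10)
print(result.summary())
```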
Abstract:
A major challenge in studying coupled groundwater and surface-water interactions arises from the considerable difference in the response time scales of groundwater and surface-water systems affected by external forcings. Although coupled models representing the interaction of groundwater and surface-water systems have been studied for over a century, most have focused on groundwater quantity or quality issues rather than response time. In this study, we present an analytical framework, based on the concept of mean action time (MAT), to estimate the time scale required for groundwater systems to respond to changes in surface-water conditions. MAT can be used to estimate the transient response time scale by analyzing the governing mathematical model. This framework does not require any form of transient solution (either numerical or analytical) to the governing equation, yet it provides a closed-form mathematical relationship for the response time as a function of the aquifer geometry, boundary conditions, and flow parameters. Our analysis indicates that aquifer systems have three fundamental time scales: (i) a time scale that depends on the intrinsic properties of the aquifer; (ii) a time scale that depends on the intrinsic properties of the boundary condition; and (iii) a time scale that depends on the properties of the entire system. We discuss two practical scenarios where MAT estimates provide useful insights, and we test the MAT predictions using new laboratory-scale experimental data sets.
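For orientation, the MAT construction referred to above has a standard general form, reproduced here as a sketch (the notation, with h for the transient head-like field, is illustrative; the aquifer-specific closed-form time scales derived in the study are not reproduced):

```latex
% Standard mean action time (MAT) construction (notation illustrative):
% h(x,t) is the transient field and h_s(x) = \lim_{t\to\infty} h(x,t) its steady state.
F(t;x) \;=\; 1 \;-\; \frac{h(x,t) - h_s(x)}{h(x,0) - h_s(x)}, \qquad
f(t;x) \;=\; \frac{\partial F(t;x)}{\partial t},
\qquad
T(x) \;=\; \int_0^{\infty} t\, f(t;x)\, \mathrm{d}t
       \;=\; \int_0^{\infty} \bigl[\,1 - F(t;x)\,\bigr]\, \mathrm{d}t .
```

Here F(t;x) increases from 0 to 1, so f(t;x) acts as a probability density for the local response time and T(x) is its mean, which can be evaluated directly from the governing equation without a transient solution.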
Abstract:
The multifractal properties of daily rainfall time series at stations in the Pearl River basin of China over periods of up to 45 years are examined using the universal multifractal approach based on the multiplicative cascade model and the multifractal detrended fluctuation analysis (MF-DFA). The results from these two kinds of multifractal analyses show that the daily rainfall time series in this basin have multifractal behavior in two different time scale ranges. It is found that the empirical multifractal moment function K(q) of the daily rainfall time series can be fitted very well by the universal multifractal model (UMM). The estimated values of the conservation parameter H from the UMM for these daily rainfall data are close to zero, indicating that they correspond to conserved fields. After removing the seasonal trend in the rainfall data, the estimated values of the exponent h(2) from MF-DFA indicate that the daily rainfall time series in the Pearl River basin exhibit no long-term correlations. It is also found that K(2) and the elevation series are negatively correlated, suggesting a relationship between topography and rainfall variability.
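For reference, the moment scaling function K(q) of the universal multifractal model mentioned above has the standard two-parameter form (alpha is the multifractality index and C1 the codimension of the mean; these are the textbook expressions, not values fitted to the Pearl River data):

```latex
% Universal multifractal model (UMM) moment scaling function:
K(q) \;=\;
\begin{cases}
\dfrac{C_1}{\alpha - 1}\,\bigl(q^{\alpha} - q\bigr), & \alpha \neq 1,\\[1ex]
C_1\, q \ln q, & \alpha = 1,
\end{cases}
```

with H = 0 corresponding to a conserved field, consistent with the near-zero H values reported for the rainfall data.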
Abstract:
Design as seen from the designer's perspective is a series of amazing imaginative jumps or creative leaps. But design as seen by the design historian is a smooth progression or evolution of ideas, such that they seem self-evident and inevitable after the event. Yet the next step is anything but obvious for the artist/creator/inventor/designer stuck at that point just before the creative leap. They know where they have come from and have a general sense of where they are going, but often do not have a precise target or goal. This is why it is misleading to talk of design as a problem-solving activity - it is better defined as a problem-finding activity. This has been very frustrating for those trying to assist the design process with computer-based, problem-solving techniques. By the time the problem has been defined, it has been solved. Indeed the solution is often the very definition of the problem. Design must be creative, or it is mere imitation. But since this crucial creative leap seems inevitable after the event, the question must arise: can we find some way of searching the space ahead? Of course there are serious problems of knowing what we are looking for and of the vastness of the search space. It may be better to discard altogether the term "searching" in the context of the design process: conceptual analogies such as search, search spaces and fitness landscapes aim to elucidate the design process. However, the vastness of the multidimensional spaces involved makes these analogies misguided, and they thereby further confound the issue. The term search becomes a misnomer since it has connotations that imply that it is possible to find what you are looking for. In such vast spaces the term search must be discarded. Thus, any attempt at searching for the highest peak in the fitness landscape as an optimal solution is also meaningless. Furthermore, even the very existence of a fitness landscape is fallacious. Although alternatives in the same region of the vast space can be compared to one another, distant alternatives will stem from radically different roots and will therefore not be comparable in any straightforward manner (Janssen 2000). Nevertheless, we are still left with a tantalizing possibility: if a creative idea seems inevitable after the event, might the process somehow be reversed? This may be as improbable as attempting to reverse time. A more helpful analogy is from nature, where it is generally assumed that the process of evolution is not long-term goal directed or teleological. Dennett points out a common misunderstanding of Darwinism: the idea that evolution by natural selection is a procedure for producing human beings. Evolution can have produced humankind by an algorithmic process, without its being true that evolution is an algorithm for producing us. If we were to wind the tape of life back and run this algorithm again, the likelihood of "us" being created again is infinitesimally small (Gould 1989; Dennett 1995). Nevertheless, Mother Nature has proved a remarkably successful, resourceful, and imaginative inventor, generating a constant flow of incredible new design ideas to fire our imagination. Hence the current interest in the potential of the evolutionary paradigm in design. These evolutionary methods are frequently based on techniques such as the application of evolutionary algorithms, which are usually thought of as search algorithms.
It is necessary to abandon such connections with searching and see the evolutionary algorithm as a direct analogy with the evolutionary processes of nature. The process of natural selection can generate a wealth of alternative experiments, and the better ones survive. There is no one solution, there is no optimal solution, but there is continuous experiment. Nature is profligate with her prototyping and ruthless in her elimination of less successful experiments. Most importantly, nature has all the time in the world. As designers we cannot afford such profligate prototyping and ruthless experimentation, nor can we operate on the time scale of the natural design process. Instead we can use the computer to compress space and time and to perform virtual prototyping and evaluation before committing ourselves to actual prototypes. This is the hypothesis underlying the evolutionary paradigm in design (1992, 1995).
Abstract:
In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work presented includes a literature review of current models followed by a series of five chapters of original research. This thesis has been submitted in partial fulfilment of the requirements for the degree of doctor of philosophy by publication, and therefore each of the five chapters consists of a peer-reviewed journal article. The thesis is then concluded with a discussion of what has been achieved during the PhD candidature, the potential applications for this research, and ways in which the research could be extended in the future. In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to standing oscillating drag forces (as would occur in a standing sound wave), and finally on lattice surfaces in the presence of high heat gradients. We have described in this thesis a number of new models for the description of multicompartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes that best describe different physical situations, and these processes have been analysed using a number of mathematical methods. The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community pertaining to the proposed mechanism of thermal fragmentation, we have presented compelling evidence in this thesis supporting the currently proposed mechanism and shown that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models. Furthermore, in this thesis a method of manipulation using acoustic standing waves is investigated. In our investigation we analysed the effect of frequency and particle size on the ability of a particle to be manipulated by means of a standing acoustic wave. In our results, we report the existence of a critical frequency for a particular particle size. This frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that for large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation. This is due to the decreasing size of the boundary layer between acoustic nodes. Our model utilises a multiple time scale approach to calculating the long-term effects of the standing acoustic field on the particles that are interacting with the sound. These effects are then combined with the effects of Brownian motion in order to obtain a complete mathematical description of the particle dynamics in such acoustic fields.
Finally, in this thesis, we develop a numerical routine for the description of "thermal tweezers". Currently, the technique of thermal tweezers is predominantly theoretical; however, there have been a handful of successful experiments which demonstrate the effect in practice. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface. Typically, the theoretical simulations of the effect can be rather time consuming, with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for the simulation of particle distributions pertaining to the thermal tweezers effect uses the Fokker-Planck equation to derive a quick numerical method for the calculation of the effective diffusion constant resulting from the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method outlined in this thesis can produce a larger quantity of accurate results on a household PC in a matter of hours, which is much better than was previously achievable.
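The last step described above (solving the diffusion equation for the particle distribution with a precomputed, position-dependent effective diffusion constant, using a finite volume method) might look roughly like the following; the effective diffusivity function, grid and time step are placeholders rather than the thesis's actual routine.

```python
# Rough sketch: evolve a particle probability density with an explicit,
# conservative finite volume scheme and a position-dependent effective
# diffusivity D(x). D(x), grid sizes and the time step are placeholders.
import numpy as np

def effective_diffusivity(x):
    # Placeholder for the value that would come from the Fokker-Planck /
    # lattice-temperature calculation described above.
    return 1e-2 * (1.0 + 0.5 * np.cos(2 * np.pi * x))

def evolve_density(p, x, dt, n_steps):
    dx = x[1] - x[0]
    x_faces = 0.5 * (x[:-1] + x[1:])            # cell faces
    D_face = effective_diffusivity(x_faces)     # diffusivity evaluated at faces
    for _ in range(n_steps):
        flux = -D_face * (p[1:] - p[:-1]) / dx  # Fickian flux at interior faces
        dpdt = np.zeros_like(p)
        dpdt[1:-1] = -(flux[1:] - flux[:-1]) / dx   # conservative update
        dpdt[0] = -flux[0] / dx                     # zero-flux boundaries
        dpdt[-1] = flux[-1] / dx
        p = p + dt * dpdt
    return p

x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
p0 = np.exp(-((x - 0.5) ** 2) / (2 * 0.01))     # initial particle distribution
p0 /= p0.sum() * dx                             # normalise to unit mass
p_final = evolve_density(p0, x, dt=1e-4, n_steps=5000)
print("mass conserved to:", p_final.sum() * dx)
```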
Abstract:
Principal Topic: Nascent entrepreneurship has drawn the attention of scholars in the last few years (Davidsson, 2006; Wagner, 2004). However, most studies have asked why firms are created, focussing on questions such as what are the characteristics (Delmar & Davidsson, 2000) and motivations (Carter, Gartner, Shaver & Reynolds, 2004) of nascent entrepreneurs, or what are the success factors in venture creation (Davidsson & Honig, 2003; Delmar & Shane, 2004). In contrast, the question of how companies emerge is still in its infancy. On the theoretical side, effectuation, developed by Sarasvathy (2001), offers one view of the strategies that may be at work during the venture creation process. Causation, the theorized inverse of effectuation, may be described as a rational reasoning method for creating a company: after a comprehensive market analysis to discover opportunities, the entrepreneur will select the alternative with the highest expected return and implement it through the use of a business plan. In contrast, effectuation suggests that the future entrepreneur will develop her new venture in a more iterative way, selecting possibilities through flexibility and interaction with the market, affordable loss of resources and time invested, and the development of pre-commitments and alliances with stakeholders. Another contrasting point is that causation is ''goal driven'' while an effectual approach is ''means driven'' (Sarasvathy, 2001). One of the predictions of effectuation theory is that effectuation is more likely to be used by entrepreneurs early in the venture creation process (Sarasvathy, 2001). However, this temporal aspect and the impact of the effectuation strategy on venture outcomes have so far not been systematically and empirically tested on large samples. The reason behind this research gap is twofold. First, few studies collect longitudinal data on emerging ventures at an early enough stage of development to avoid severe survivor bias. Second, the studies that do collect such data have not included validated measures of effectuation. The research we are conducting attempts to partially fill this gap by combining an empirical investigation of a large sample of nascent and young firms with the effectuation/causation continuum as a basis (Sarasvathy, 2001). The objectives are to understand the strategies used by firms during the creation process and to measure their impact on firm outcomes. Methodology/Key Propositions: This study draws its data from the first wave of the CAUSEE project, in which 28,383 Australian households were randomly contacted by phone using a specific methodology to capture emerging firms (Davidsson, Steffens, Gordon & Reynolds, 2008). This screening led to the identification of 594 nascent ventures (i.e., firms that are not yet operating) and 514 young firms (i.e., firms that started operating in 2004 or later) that were willing to participate in the study. Comprehensive phone interviews were conducted with these 1108 ventures. In a likewise comprehensive follow-up 12 months later, 80% of the eligible cases completed the interview. The questionnaire contains specific sections designed to distinguish effectual and causal processes, innovation, gestation activities, business idea changes and venture outcomes. The effectuation questions are based on the components of effectuation strategy as described by Sarasvathy (2001), namely: flexibility, affordable loss and pre-commitment from stakeholders.
Results from two rounds of pre-testing informed the design of the instrument included in the main survey. The first two waves of data will be used to test and compare the use of effectuation in the venture creation process. To increase the robustness of the results, the temporal use of effectuation will be tested both directly and indirectly: 1. By comparing the use of effectuation in nascent and young firms from wave 1 to wave 2, we will be able to find out how effectuation is affected by time over a 12-month period and whether the stage of venture development has an impact on its use. 2. By comparing nascent ventures early in the creation process with nascent ventures late in the creation process; early versus late can be determined with the help of the time-stamped gestation activity questions included in the survey. This will help us to determine change on a small time scale during the creation phase of the venture. 3. By comparing nascent firms to young (already operational) firms. 4. By comparing young firms becoming operational in 2006 with those first becoming operational in 2004. Results and Implications: Data collection for waves 1 and 2 has been completed, and wave 2 is currently being checked and 'cleaned'. Analysis work will commence in September 2009. This paper is expected to contribute to the body of knowledge on effectuation by quantitatively measuring its use and its impact on the activities of nascent and young firms at different stages of their development. In addition, this study will also increase the understanding of the venture creation process by comparing nascent and young firms over time in a large sample of randomly selected ventures. We acknowledge that the results from this study will be preliminary and will have to be interpreted with caution, as the changes identified may be due to several factors and may not be attributable solely to the use or non-use of effectuation. Meanwhile, we believe that this study is important to the field of entrepreneurship as it provides some much-needed insight into the processes used by nascent and young firms during their creation and early operating stages.
Abstract:
This paper investigates the control of an HVDC link, fed from an AC source through a controlled rectifier and feeding an AC line through a controlled inverter. The overall objective is to maintain the maximum possible link voltage at the inverter while regulating the link current. In this paper the practical feedback design issues are investigated with a view to obtaining simple, robust designs that are easy to evaluate for safety and operability. The investigations are applicable to back-to-back links used for frequency decoupling and to long DC lines. The design issues discussed include: (i) a review of the overall system dynamics to establish the time scales of the different feedback loops and to highlight feedback design issues; (ii) the concept of using the inverter firing angle control to regulate link current when the rectifier firing angle controller saturates; and (iii) the design issues for the individual controllers, including robust design for varying line conditions and the trade-off between controller complexity and the reduction of nonlinearity and disturbance effects.
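Design issue (ii) can be illustrated with a deliberately simplified simulation of the current-margin idea: the rectifier normally regulates the link current, and an inverter current controller with a slightly lower reference takes over when the rectifier can no longer supply enough voltage. The first-order link model, PI gains, limits and the voltage-dip scenario below are toy assumptions, not the designs studied in the paper.

```python
# Toy illustration: a rectifier PI controller regulates link current, and an
# inverter PI controller with a small current margin takes over when a
# rectifier-side AC voltage dip prevents the rectifier from holding the
# current order. All parameter values are illustrative assumptions.
import numpy as np

R, L = 5.0, 0.5                  # DC link resistance (ohm) and inductance (H)
Vr0, Vi0 = 250e3, 240e3          # ideal no-load converter voltages (V)
I_ref, margin = 1000.0, 100.0    # rectifier current order (A) and current margin (A)
dt, T = 1e-3, 2.0

def clamp(x, lo, hi):
    return min(max(x, lo), hi)

def simulate():
    I, int_r, int_i = 0.0, 0.0, 0.0
    log = []
    for k in range(int(T / dt)):
        t = k * dt
        vr0 = Vr0 * (0.92 if 0.8 < t < 1.4 else 1.0)   # rectifier-side AC voltage dip
        # Rectifier PI: lower the firing angle (raise voltage) when current is low.
        e_r = I_ref - I
        int_r += e_r * dt
        alpha = clamp(np.radians(15.0) - 1e-4 * e_r - 1e-3 * int_r,
                      np.radians(5.0), np.radians(90.0))
        # Inverter PI: normally parked at minimum extinction angle (maximum link
        # voltage); it raises gamma only when I falls below I_ref - margin.
        e_i = (I_ref - margin) - I
        int_i = max(int_i + e_i * dt, 0.0)              # crude anti-windup at the limit
        gamma = clamp(np.radians(18.0) + 1e-4 * e_i + 1e-3 * int_i,
                      np.radians(18.0), np.radians(60.0))
        # First-order DC link current dynamics.
        dIdt = (vr0 * np.cos(alpha) - Vi0 * np.cos(gamma) - R * I) / L
        I = max(I + dIdt * dt, 0.0)                     # thyristor current cannot reverse
        log.append((t, I, np.degrees(alpha), np.degrees(gamma)))
    return np.array(log)

trace = simulate()
for t_probe in (0.7, 1.2, 1.9):   # before, during and after the dip
    row = trace[int(t_probe / dt)]
    print(f"t={row[0]:.1f}s  I={row[1]:7.1f} A  alpha={row[2]:5.1f} deg  gamma={row[3]:5.1f} deg")
```

In this toy setup the rectifier holds the current near I_ref before the dip, the inverter holds it near I_ref minus the margin during the dip, and control returns to the rectifier afterwards; real designs add the robustness and saturation-handling considerations the paper discusses.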
Abstract:
Drivers' ability to react to unpredictable events deteriorates when they are exposed to highly predictable and uneventful driving tasks. Highway design reduces the driving task mainly to a lane-keeping manoeuvre. Such a task is monotonous, providing little stimulation, and this contributes to crashes due to inattention. Research has shown that drivers' hypovigilance can be assessed with EEG measurements and that driving performance is impaired during prolonged monotonous driving tasks. This paper aims to show that two dimensions of monotony - namely road design and roadside variability - decrease vigilance and impair driving performance. This is the first study correlating hypovigilance and driver performance in varied monotonous conditions, particularly on a short time scale (a few seconds). We induced a vigilance decrement, as assessed with EEG, during a monotonous driving simulator experiment. Road monotony was varied through both road design and roadside variability. The driver's decrease in vigilance occurred due to both road design and road scenery monotony, and almost independently of the driver's sensation-seeking level. Such impairment was also correlated with observable measurements from the driver, the car and the environment. During periods of hypovigilance, the driving performance impairment affected lane positioning, time to lane crossing, blink frequency, heart rate variability and non-specific electrodermal response rates. This work lays the foundation for the development of an in-vehicle device preventing hypovigilance crashes on monotonous roads.
Abstract:
In this study, a discussion of the fluid dynamics in the attic space is reported, focusing on its transient response to sudden and linear changes of temperature along the two inclined walls. The transient behaviour of an attic space is relevant to our daily life. Both instantaneous and non-instantaneous (ramp) heating boundary conditions are applied to the sloping walls of the attic space. A theoretical understanding of the transient behaviour of the flow in the enclosure is developed through scaling analysis. A proper identification of the time scales, the velocity and the thickness relevant to the flow that develops inside the cavity makes it possible to predict theoretically the basic flow features that will survive once the thermal flow in the enclosure reaches a steady state. A time scale for the heating-up of the whole cavity, together with the heat transfer scales through the inclined walls, has also been obtained through scaling analysis. All scales are verified by the numerical simulations.
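As an indication of the kind of scales such an analysis yields, the classical results for a suddenly heated wall in a cavity of height h are sketched below; these are generic textbook forms shown for orientation only, not the attic-specific scales derived in this study.

```latex
% Classical start-up scales for a natural convection boundary layer on a
% suddenly heated wall (generic forms; kappa is the thermal diffusivity,
% h the cavity height and Ra the Rayleigh number):
\delta_T \sim \sqrt{\kappa t}, \qquad
t_s \sim \frac{h^{2}}{\kappa\,\mathrm{Ra}^{1/2}}, \qquad
\delta_{T,s} \sim h\,\mathrm{Ra}^{-1/4}, \qquad
u_s \sim \frac{\kappa}{h}\,\mathrm{Ra}^{1/2} .
```

The attic analysis derives the analogous boundary layer, velocity and heating-up scales for the inclined walls and for the ramp boundary condition.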
Abstract:
A fundamental study of the fluid dynamics inside an attic-shaped triangular enclosure with cold upper walls and an adiabatic horizontal bottom wall is reported in this study. The transient behaviour of the attic fluid, which is relevant to our daily life, is examined based on a scaling analysis. Two cooling conditions are applied to the upper sloped walls: instantaneous cooling, and ramp cooling in which the temperature decreases linearly up to a specified time (the ramp time) and is then held constant. It is shown that both inclined walls develop thermal boundary layers whose thicknesses increase towards steady-state or quasi-steady values. A proper identification of the time scales, the velocity and the thickness relevant to the flow that develops inside the cavity makes it possible to predict theoretically the basic flow features that will survive once the thermal flow in the enclosure reaches a steady state. A time scale for the cooling-down of the whole cavity, together with the heat transfer scales through the inclined walls, has also been obtained through scaling analysis. All scales are verified by the numerical simulations.
Abstract:
A new scaling analysis has been performed for the unsteady natural convection boundary layer under a downward-facing inclined plate with uniform heat flux. The development of the thermal or viscous boundary layers may be classified into three distinct stages: a start-up stage, a transitional stage and a steady stage, which can be clearly identified in the analytical as well as the numerical results. Earlier scaling analysis shows that the existing scaling laws of the boundary layer thickness, velocity and steady-state time scale for the natural convection flow on a heated plate of uniform heat flux provide a very poor prediction of the Prandtl number dependency of the flow. However, those scalings perform very well with respect to the Rayleigh number and aspect ratio dependencies. In this study, a new Prandtl number scaling has been developed using a triple-layer integral approach for Pr > 1. It is seen that, in comparison with direct numerical simulations, the new scaling performs considerably better than the previous scaling.
Abstract:
An improved scaling analysis and direct numerical simulations are performed for the unsteady natural convection boundary layer adjacent to a downward-facing inclined plate with uniform heat flux. The development of the thermal or viscous boundary layers may be classified into three distinct stages: a start-up stage, a transitional stage and a steady stage, which can be clearly identified in the analytical as well as the numerical results. Previous scaling analysis shows that the existing scaling laws of the boundary layer thickness, velocity and steady-state time scale for the natural convection flow on a heated plate of uniform heat flux provide a very poor prediction of the Prandtl number dependency of the flow. However, those scalings perform very well with respect to the Rayleigh number and aspect ratio dependencies. In this study, a modified Prandtl number scaling is developed using a triple-layer integral approach for Pr > 1. It is seen that, in comparison with direct numerical simulations, the modified scaling performs considerably better than the previous scaling.
Abstract:
Despite recent methodological advances in inferring the time-scale of biological evolution from molecular data, the fundamental question of whether our substitution models are sufficiently well specified to accurately estimate branch-lengths has received little attention. I examine this implicit assumption of all molecular dating methods, on a vertebrate mitochondrial protein-coding dataset. Comparison with analyses in which the data are RY-coded (AG → R; CT → Y) suggests that even rates-across-sites maximum likelihood greatly under-compensates for multiple substitutions among the standard (ACGT) NT-coded data, which has been subject to greater phylogenetic signal erosion. Accordingly, the fossil record indicates that branch-lengths inferred from the NT-coded data translate into divergence time overestimates when calibrated from deeper in the tree. Intriguingly, RY-coding led to the opposite result. The underlying NT and RY substitution model misspecifications likely relate respectively to “hidden” rate heterogeneity and changes in substitution processes across the tree, for which I provide simulated examples. Given the magnitude of the inferred molecular dating errors, branch-length estimation biases may partly explain current conflicts with some palaeontological dating estimates.
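The RY-recoding used for the comparison above is simple to reproduce; the following is a minimal sketch covering the sequence recoding only (alignment handling and the phylogenetic analysis itself are omitted).

```python
# Minimal sketch of RY-recoding as described above:
# purines A,G -> R; pyrimidines C,T -> Y. Other characters (gaps, ambiguity
# codes) are passed through unchanged.
RY_MAP = {"A": "R", "G": "R", "C": "Y", "T": "Y"}

def ry_code(seq: str) -> str:
    """Recode a nucleotide sequence to purine/pyrimidine (RY) characters."""
    return "".join(RY_MAP.get(base, base) for base in seq.upper())

print(ry_code("ATGCGTTACG"))  # -> RYRYRYYRYR
```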