806 results for Completion time minimization
Abstract:
The article discusses evidence that time prevented many students from showing what they could do in the 2010 Year 7 and 9 NAPLAN numeracy tests. In addition to analysing the available data, the article discusses some NAPLAN numeracy questions that contribute to this problem. It is suggested that schools should investigate whether time limitation is a problem for their own students. The article discusses the implications of these findings for teachers preparing students for NAPLAN tests and for the developers of the tests.
Abstract:
Precise identification of the time when a change in a hospital outcome has occurred enables clinical experts to search for a potential special cause more effectively. In this paper, we develop change point estimation methods, in a Bayesian framework, for the survival time of a clinical procedure in the presence of patient mix. We apply Bayesian hierarchical models to formulate the change point where there exists a step change in the mean survival time of patients who underwent cardiac surgery. The data are right censored since the monitoring is conducted over a limited follow-up period. We capture the effect of risk factors prior to the surgery using a Weibull accelerated failure time regression model. Markov chain Monte Carlo is used to obtain posterior distributions of the change point parameters, including the location and magnitude of changes, together with corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when it is used in conjunction with risk-adjusted survival time CUSUM control charts for different magnitude scenarios. The proposed estimator performs better when a longer follow-up period (censoring time) is applied. In comparison with the alternative built-in CUSUM estimator, more accurate and precise estimates are obtained by the Bayesian estimator. These advantages are reinforced when the probability quantification, flexibility and generalizability of the Bayesian change point detection model are also considered.
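As a much-simplified illustration of the estimation problem (not the paper's Bayesian hierarchical model), the sketch below finds a single step change in right-censored exponential survival times by a profile-likelihood grid search; the exponential is the Weibull special case with shape 1, and all sample sizes, rates and the censoring time are hypothetical choices of ours.

```python
import math
import random

def exp_loglik(times, events, rate):
    # right-censored exponential log-likelihood: an observed death contributes
    # log(rate) - rate*t, a censored observation contributes -rate*t
    return sum(e * math.log(rate) - rate * t for t, e in zip(times, events))

def fit_rate(times, events):
    # censored-exponential MLE: number of events / total time at risk
    d = sum(events)
    return d / sum(times) if d > 0 else 1e-9

def estimate_change_point(times, events, min_seg=5):
    # profile likelihood: refit the rate on each side of every candidate split
    n = len(times)
    best_tau, best_ll = None, -math.inf
    for tau in range(min_seg, n - min_seg):
        ll = (exp_loglik(times[:tau], events[:tau],
                         fit_rate(times[:tau], events[:tau]))
              + exp_loglik(times[tau:], events[tau:],
                           fit_rate(times[tau:], events[tau:])))
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

random.seed(1)
# mean survival jumps from 5 to 25 at patient 100; follow-up censored at t = 40
raw = ([random.expovariate(1 / 5) for _ in range(100)]
       + [random.expovariate(1 / 25) for _ in range(100)])
times = [min(t, 40.0) for t in raw]
events = [1 if t < 40.0 else 0 for t in raw]
tau_hat = estimate_change_point(times, events)
```

A longer censoring time leaves more uncensored observations on each side of the split, which is one intuition for why the paper's estimator improves with a longer follow-up period.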
Abstract:
The concept of local accumulation time (LAT) was introduced by Berezhkovskii and coworkers in 2010–2011 to give a finite measure of the time required for the transient solution of a reaction-diffusion equation to approach the steady-state solution (Biophys J. 99, L59 (2010); Phys Rev E. 83, 051906 (2011)). Such a measure is referred to as a critical time. Here, we show that LAT is, in fact, identical to the concept of mean action time (MAT) that was first introduced by McNabb in 1991 (IMA J Appl Math. 47, 193 (1991)). Although McNabb's initial argument was motivated by considering the mean particle lifetime (MPLT) for a linear death process, he applied the ideas to study diffusion. We extend the work of these authors by deriving expressions for the MAT for a general one-dimensional linear advection-diffusion-reaction problem. Using a combination of continuum and discrete approaches, we show that MAT and MPLT are equivalent for certain uniform-to-uniform transitions; these results provide a practical interpretation for MAT by directly linking the stochastic microscopic processes to a meaningful macroscopic timescale. We find that for more general transitions, the equivalence between MAT and MPLT does not hold. Unlike other critical time definitions, we show that it is possible to evaluate the MAT without solving the underlying partial differential equation (PDE). This makes MAT a simple and attractive quantity for practical situations. Finally, our work explores the accuracy of certain approximations derived using the MAT, showing that useful approximations for nonlinear kinetic processes can be obtained, again without treating the governing PDE directly.
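The claim that MAT can be evaluated without solving the PDE can be checked on the simplest case: diffusion on 0 < x < L with u(x,0) = 0, u(0,t) = 1, u(L,t) = 0. Integrating the PDE in time shows that w(x) = integral of (u - u_s) over all t satisfies D w'' = u_s - u_0 with w = 0 at both boundaries, which for this case gives the closed form T(x) = x(2L - x)/(6D). The sketch below (all discretization choices are ours, not the authors') compares that closed form against a brute-force time integration of the transient solution.

```python
# explicit finite differences for u_t = D u_xx on [0, L],
# u(x,0) = 0, u(0,t) = 1, u(L,t) = 0
D, L, N = 1.0, 1.0, 40
dx = L / N
dt = 0.4 * dx * dx / D          # within the explicit stability limit
x = [i * dx for i in range(N + 1)]
us = [1.0 - xi / L for xi in x]  # steady state
u = [0.0] * (N + 1)
u[0] = 1.0                       # boundary condition
w = [0.0] * (N + 1)              # accumulates integral of (u - u_s) dt
t, t_end = 0.0, 4.0              # transient has decayed by t ~ 4 L^2/D
while t < t_end:
    for i in range(1, N):
        w[i] += (u[i] - us[i]) * dt
    un = u[:]
    for i in range(1, N):
        un[i] = u[i] + D * dt / (dx * dx) * (u[i + 1] - 2.0 * u[i] + u[i - 1])
    u = un
    t += dt

def mat_closed(xi):
    # closed form for this problem, obtained without solving the PDE
    return xi * (2.0 * L - xi) / (6.0 * D)

def mat_numeric(i):
    # T(x) = w(x) / (u(x,0) - u_s(x)); here u(x,0) = 0
    return w[i] / (0.0 - us[i])
```

At x = L/2 both routes give T = 0.125 L^2/D, to discretization accuracy, which is the point of the definition: the expensive time integration on the left is reproduced by a formula that needs only the initial and steady states.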
Abstract:
The driver response (reaction) time (tr) of the second queuing vehicle is generally longer than that of the other vehicles at signalized intersections. Although this phenomenon was first reported in 1972, the factor is still ignored in conventional departure models. This paper highlights the need for quantitative measurement and analysis of queuing vehicle performance in the spontaneous discharge pattern, because it can improve microsimulation. Video recordings from major cities in Australia, plus twenty-two sets of vehicle trajectories extracted from the Next Generation Simulation (NGSIM) Peachtree Street Dataset, have been analyzed to better understand queuing vehicle performance in the discharge process. Findings from this research will help account for driver response time and can also be used for the calibration of microscopic traffic simulation models.
Abstract:
This paper presents the benefits and issues related to travel time prediction in urban networks. Travel time information quantifies congestion and is perhaps the most important network performance measure. Travel time prediction has been an active area of research for the last five decades. Activities related to ITS have increased researchers' attention to better and more accurate real-time prediction of travel time. The majority of the literature on travel time prediction is applicable to freeways where, under non-incident conditions, traffic flow is not affected by external factors such as traffic control signals and opposing traffic flows. In the urban environment the problem is more complicated, due to conflict areas (intersections), mid-link sources and sinks, etc., and needs to be addressed.
Abstract:
Raman spectroscopy, when used in spatially offset mode, has become a promising tool for the identification of explosives and other hazardous substances concealed in opaque containers. The molecular fingerprinting capability of Raman spectroscopy makes it an attractive tool for the unambiguous identification of hazardous substances in the field. Additionally, minimal sample preparation is required compared with other techniques. We report a field-portable time-resolved Raman sensor for the detection of concealed chemical hazards in opaque containers. The new sensor uses a pulsed nanosecond laser source in conjunction with an intensified CCD detector, and employs a combination of time- and space-resolved Raman spectroscopy to enhance its detection capability. It can identify concealed hazards in a single measurement, without any chemometric data treatment.
Abstract:
A new spatial logic encompassing redefined concepts of time and place, space and distance requires a comprehensive shift in the approach to designing workplace environments for today's adaptive, collaborative organizations operating in a dynamic business world. Together with substantial economic and cultural shifts and an increased emphasis on lifestyle considerations, advances in information technology have prompted a radical re-ordering of organizational relationships and the associated structures, processes, and places of doing business. Within the duality of space and an augmentation of the traditional notions of place, organizational and institutional structures pose new challenges for the design professions. The literature reveals that workplace design strategies have always had a mono-organizational focus; the burgeoning trend towards inter-organizational collaboration therefore enabled the identification of a gap in the knowledge relating to workplace design. The NetWorkPlace™ constitutes a multi-dimensional concept with the capacity to deal with the fluidity and ambiguity characteristic of the network context, both as a topic of research and as a way of going about it.
Abstract:
This year marks the completion of data collection for year three (Wave 3) of the CAUSEE study. This report uses data from the first three years and focuses on the process of learning and adaptation in the business creation process. Most start-ups need to change their business model, their product, their marketing plan, their market or something else about the business to be successful. PayPal changed its product at least five times, moving from handheld security, to enterprise apps, to consumer apps, to a digital wallet, to payments between handhelds, before finally stumbling on the email-based payments model that made it a multi-billion-dollar company. PayPal is not alone, and anecdotes abound of start-ups changing direction: Symantec started as an artificial intelligence company, Apple started by selling plans to build computers, and Microsoft tried to peddle compilers before licensing an operating system out of New Mexico. To what extent do Australian new ventures change and adapt as their ideas and businesses develop? As a longitudinal study, CAUSEE was designed specifically to observe development in the venture creation process. In this research briefing paper, we compare development over time of randomly sampled Nascent Firms (NF) and Young Firms (YF), concentrating on the surviving cases. We also compare NFs with YFs at each yearly interval. The 'high potential' oversample is not used in this report.
Abstract:
2010 marked the completion of data collection for year three (Wave 3) of the CAUSEE study. This report uses data from the first three years. Australia's population is noted for its mixed international background. The ABS 2006 census reports showed that almost a quarter of the Australian population was born overseas, contributing to a high degree of cultural diversity. This report examines the international background and experience of Australian business founders as well as their aspired and actual participation in international markets. In this research briefing paper, we compare Nascent Firm (NF) and Young Firm (YF) groups, and also compare 'Regular' start-ups in both categories with their High Potential counterparts. We compare characteristics at single points in time as well as developments over time. Unless otherwise stated, the findings we comment on are 'statistically significant'; that is, there is less than a 5 per cent risk that they would appear by chance if there were no true difference in the population from which the samples were drawn.
Abstract:
In Australia, as elsewhere, universities are being encouraged to grow their postgraduate research candidature base while at the same time there is increasing pressure on resources with which to manage the burgeoning groups. In this environment HDR supervision strategies are seen as increasingly important as research managers seek the best possible ‘fit’ for an applicant: the candidate who will provide a sound return on investment and demonstrate endurance in the pursuit of a timely completion. As research managers know, the admissions process can be a risky business. The process may be tested further in the context of the new models of doctoral cohort supervision that are being discussed in the higher degree research management sector. The focus of this paper is an examination of the results of investigations of two models of postgraduate cohort supervision in the creative arts Master of Arts research program at QUT with a view to identifying attributes that may be useful for the formation of cohort models of supervision in the doctoral area.
Abstract:
For over half a century, it has been known that the rate of morphological evolution appears to vary with the time frame of measurement. Rates of microevolutionary change, measured between successive generations, were found to be far higher than rates of macroevolutionary change inferred from the fossil record. More recently, it has been suggested that rates of molecular evolution are also time dependent, with the estimated rate depending on the timescale of measurement. This followed surprising observations that estimates of mutation rates, obtained in studies of pedigrees and laboratory mutation-accumulation lines, exceeded long-term substitution rates by an order of magnitude or more. Although a range of studies have provided evidence for such a pattern, the hypothesis remains relatively contentious. Furthermore, there is ongoing discussion about the factors that can cause molecular rate estimates to be dependent on time. Here we present an overview of our current understanding of time-dependent rates. We provide a summary of the evidence for time-dependent rates in animals, bacteria and viruses. We review the various biological and methodological factors that can cause rates to be time dependent, including the effects of natural selection, calibration errors, model misspecification and other artefacts. We also describe the challenges in calibrating estimates of molecular rates, particularly on the intermediate timescales that are critical for an accurate characterization of time-dependent rates. This has important consequences for the use of molecular-clock methods to estimate timescales of recent evolutionary events.
Abstract:
Determining the temporal scale of biological evolution has traditionally been the preserve of paleontology, with the timing of species originations and major diversifications all being read from the fossil record. However, the ages of the earliest (correctly identified) records will underestimate actual origins due to the incomplete nature of the fossil record and the necessity for lineages to have evolved sufficiently divergent morphologies in order to be distinguished. The possibility of inferring divergence times more accurately has been promoted by the idea that the accumulation of genetic change between modern lineages can be used as a molecular clock (Zuckerkandl and Pauling, 1965). In practice, though, molecular dates have often been so old as to be incongruent even with liberal readings of the fossil record. Prominent examples include inferred diversifications of metazoan phyla hundreds of millions of years before their Cambrian fossil record appearances (e.g., Nei et al., 2001) and a basal split between modern birds (Neoaves) that is almost double the age of their earliest recognizable fossils (e.g., Cooper and Penny, 1997).
Time dependency of molecular rate estimates and systematic overestimation of recent divergence times
Abstract:
Studies of molecular evolutionary rates have yielded a wide range of rate estimates for various genes and taxa. Recent studies based on population-level and pedigree data have produced remarkably high estimates of mutation rate, which strongly contrast with substitution rates inferred in phylogenetic (species-level) studies. Using Bayesian analysis with a relaxed-clock model, we estimated rates for three groups of mitochondrial data: avian protein-coding genes, primate protein-coding genes, and primate d-loop sequences. In all three cases, we found a measurable transition between the high, short-term (<1–2 Myr) mutation rate and the low, long-term substitution rate. The relationship between the age of the calibration and the rate of change can be described by a vertically translated exponential decay curve, which may be used for correcting molecular date estimates. The phylogenetic substitution rates in mitochondria are approximately 0.5% per million years for avian protein-coding sequences and 1.5% per million years for primate protein-coding and d-loop sequences. Further analyses showed that purifying selection offers the most convincing explanation for the observed relationship between the estimated rate and the depth of the calibration. We rule out the possibility that it is a spurious result arising from sequence errors, and find it unlikely that the apparent decline in rates over time is caused by mutational saturation. Using a rate curve estimated from the d-loop data, several dates for last common ancestors were calculated: modern humans and Neandertals (354 ka; 222–705 ka), Neandertals (108 ka; 70–156 ka), and modern humans (76 ka; 47–110 ka). If the rate curve for a particular taxonomic group can be accurately estimated, it can be a useful tool for correcting divergence date estimates by taking the rate decay into account. 
Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution.
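A vertically translated exponential decay curve of the kind described can be inverted numerically to correct a date estimate. The sketch below uses hypothetical parameter values chosen only to illustrate the mechanics (they are not the rates estimated in the study): rate(t) = k + a*exp(-lam*t), so the expected genetic distance over a divergence time T is k*T + a*(1 - exp(-lam*T))/lam, which is strictly increasing and can be inverted by bisection.

```python
import math

# hypothetical illustration values: long-term rate k, short-term excess a,
# decay constant lam (all per Myr); not the study's estimates
k, a, lam = 0.005, 0.045, 3.0

def expected_distance(T):
    # genetic distance accumulated over T Myr under rate(t) = k + a*exp(-lam*t)
    return k * T + a * (1.0 - math.exp(-lam * T)) / lam

def corrected_age(d, lo=0.0, hi=1000.0, tol=1e-9):
    # invert expected_distance (strictly increasing in T) by bisection
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expected_distance(mid) < d:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# a recent divergence with true age 0.1 Myr
d = expected_distance(0.1)
naive_age = d / k            # extrapolating the long-term substitution rate
true_age = corrected_age(d)  # accounting for the rate decay
```

With these illustrative numbers the naive age is several times the true age, reproducing in miniature the systematic overestimation of recent divergence times named in the title.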
Abstract:
Long-term changes in the genetic composition of a population occur by the fixation of new mutations, a process known as substitution. The rate at which mutations arise in a population and the rate at which they are fixed are expected to be equal under neutral conditions (Kimura, 1968). Between the appearance of a new mutation and its eventual fate of fixation or loss, there will be a period in which it exists as a transient polymorphism in the population (Kimura and Ohta, 1971). If the majority of mutations are deleterious (and nonlethal), the fixation probabilities of these transient polymorphisms are reduced and the mutation rate will exceed the substitution rate (Kimura, 1983). Consequently, different apparent rates may be observed on different time scales of the molecular evolutionary process (Penny, 2005; Penny and Holmes, 2001). The substitution rate of the mitochondrial protein-coding genes of birds and mammals has been traditionally recognized to be about 0.01 substitutions/site/million years (Myr) (Brown et al., 1979; Ho, 2007; Irwin et al., 1991; Shields and Wilson, 1987), with the noncoding D-loop evolving several times more quickly (e.g., Pesole et al., 1992; Quinn, 1992). Over the past decade, there has been mounting evidence that instantaneous mutation rates substantially exceed substitution rates, in a range of organisms (e.g., Denver et al., 2000; Howell et al., 2003; Lambert et al., 2002; Mao et al., 2006; Mumm et al., 1997; Parsons et al., 1997; Santos et al., 2005). The immediate reaction to the first of these findings was that the polymorphisms generated by the elevated mutation rate are short-lived, perhaps extending back only a few hundred years (Gibbons, 1998; Macaulay et al., 1997). That is, purifying selection was thought to remove these polymorphisms very rapidly.
Abstract:
Fractional differential equations are becoming more widely accepted as a powerful tool in modelling anomalous diffusion, which is exhibited by various materials and processes. Recently, researchers have suggested that rather than using constant-order fractional operators, some processes are more accurately modelled using fractional orders that vary with time and/or space. In this paper we develop computationally efficient techniques for solving time-variable-order time-space fractional reaction-diffusion equations (tsfrde) using the finite difference scheme. We adopt the Coimbra variable-order time fractional operator and a variable-order fractional Laplacian operator in space, where both orders are functions of time. Because the fractional operator is nonlocal, it is challenging to deal efficiently with its long-range dependence when using classical numerical techniques to solve such equations. The novelty of our method is that the numerical solution of the time-variable-order tsfrde is written in terms of a matrix function vector product at each time step. This product is approximated efficiently by the Lanczos method, which is a powerful iterative technique for approximating the action of a matrix function by projecting onto a Krylov subspace. Furthermore, an adaptive preconditioner is constructed that dramatically reduces the size of the required Krylov subspaces and hence the overall computational cost. Numerical examples, including the variable-order fractional Fisher equation, are presented to demonstrate the accuracy and efficiency of the approach.
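The computational kernel described, approximating the action of a matrix function by projecting onto a Krylov subspace via Lanczos, can be sketched in a few dozen lines. The toy below is a generic illustration, not the authors' preconditioned variable-order scheme: it approximates f(A)b with f(z) = (1+z)^{-1} for the ordinary 1D discrete Laplacian, the kind of solve that arises at each implicit time step, by running m Lanczos steps and solving the small projected tridiagonal system instead of the full one.

```python
import math

def matvec(v):
    # action of the 1D discrete Laplacian (2 on the diagonal, -1 off-diagonal)
    n = len(v)
    return [2.0 * v[i]
            - (v[i - 1] if i > 0 else 0.0)
            - (v[i + 1] if i < n - 1 else 0.0) for i in range(n)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def lanczos(b, m):
    # m-step Lanczos tridiagonalization of A started from b (no reorthogonalization)
    beta0 = math.sqrt(dot(b, b))
    V = [[x / beta0 for x in b]]
    alphas, betas = [], []
    for j in range(m):
        w = matvec(V[j])
        if j > 0:
            w = [wi - betas[j - 1] * vi for wi, vi in zip(w, V[j - 1])]
        a = dot(w, V[j])
        alphas.append(a)
        w = [wi - a * vi for wi, vi in zip(w, V[j])]
        if j < m - 1:
            bn = math.sqrt(dot(w, w))
            betas.append(bn)
            V.append([wi / bn for wi in w])
    return V, alphas, betas, beta0

def solve_dense(M, rhs):
    # Gaussian elimination with partial pivoting, for the small projected system
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def krylov_apply(b, m):
    # approximate (I + A)^{-1} b  as  beta0 * V_m (I + T_m)^{-1} e1
    V, al, be, beta0 = lanczos(b, m)
    T = [[0.0] * m for _ in range(m)]
    for i in range(m):
        T[i][i] = 1.0 + al[i]
        if i + 1 < m:
            T[i][i + 1] = T[i + 1][i] = be[i]
    y = solve_dense(T, [beta0] + [0.0] * (m - 1))
    return [sum(y[j] * V[j][i] for j in range(m)) for i in range(len(b))]
```

Only matrix-vector products with A and one small m-by-m solve are needed, which is why a preconditioner that shrinks the required m directly reduces the overall cost.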