916 results for TIME TRENDS
Abstract:
The article discusses evidence that time prevented many students from showing what they could do in the 2010 Year 7 and 9 NAPLAN numeracy tests. In addition to analysing the available data, the article discusses some NAPLAN numeracy questions that contribute to this problem. It is suggested that schools should investigate whether time limitation is a problem for their own students. The article discusses the implications of these findings for teachers preparing students for NAPLAN tests and for the developers of the tests.
Abstract:
Precise identification of the time at which a change in a hospital outcome has occurred enables clinical experts to search for a potential special cause more effectively. In this paper, we develop change point estimation methods for the survival time of a clinical procedure, in the presence of patient mix, in a Bayesian framework. We apply Bayesian hierarchical models to formulate the change point where there exists a step change in the mean survival time of patients who underwent cardiac surgery. The data are right censored since the monitoring is conducted over a limited follow-up period. We capture the effect of risk factors prior to the surgery using a Weibull accelerated failure time regression model. Markov chain Monte Carlo is used to obtain posterior distributions of the change point parameters, including the location and magnitude of changes, as well as the corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when the methods are used in conjunction with risk-adjusted survival time CUSUM control charts for different magnitudes of change. The proposed estimator performs better when a longer follow-up (censoring) period is applied. Compared with the alternative built-in CUSUM estimator, the Bayesian estimator yields more accurate and precise estimates, an advantage reinforced by the probability quantification, flexibility, and generalizability of the Bayesian change point detection model.
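To make the modelling concrete, here is a minimal sketch (not the authors' implementation) of the core ingredient: a right-censored Weibull accelerated failure time likelihood with a step change in mean survival time, evaluated over candidate change points. All names and numbers (parsonnet, tau_true, delta, the simulated data, the fixed follow-up) are illustrative assumptions; the paper samples all parameters jointly with MCMC rather than holding them fixed as done here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a monitored sequence of n cardiac surgeries with a step change
# in mean survival time after patient tau_true (all values illustrative).
n, tau_true = 200, 120
shape = 1.5                                  # Weibull shape parameter
parsonnet = rng.uniform(0, 50, n)            # pre-surgery risk score
beta0, beta1, delta = 7.0, -0.03, -0.7       # AFT coefficients; step size

log_scale = beta0 + beta1 * parsonnet + delta * (np.arange(n) >= tau_true)
t_true = rng.weibull(shape, n) * np.exp(log_scale)  # latent survival times
follow_up = 1000.0                           # limited follow-up => censoring
y = np.minimum(t_true, follow_up)            # observed (possibly censored)
event = (t_true <= follow_up).astype(float)  # 1 = death observed, 0 = censored

def loglik(tau, b0=beta0, b1=beta1, d=delta, k=shape):
    """Right-censored Weibull AFT log-likelihood with a step change at tau."""
    mu = b0 + b1 * parsonnet + d * (np.arange(n) >= tau)
    z = (y / np.exp(mu)) ** k                # cumulative hazard at y
    # observed events contribute the log-density; censored cases contribute
    # the log-survival function, i.e. just the -z term
    return np.sum(event * (np.log(k) + k * (np.log(y) - mu) - np.log(y)) - z)

# Discrete posterior over the change point location (flat prior, other
# parameters held at their true values for brevity).
taus = np.arange(1, n)
ll = np.array([loglik(tau) for tau in taus])
post = np.exp(ll - ll.max())
post /= post.sum()
print("posterior mode for the change point:", taus[np.argmax(post)])
```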
Abstract:
This paper presents new research methods that combine the use of location-based social media on mobile phones with geographic information systems (GIS) to explore connections between people, place, and health. It discusses the feasibility, limitations, and benefits of using these methods, which enable real-time, location-based, quantitative data to be collected on the recreation, consumption, and physical activity patterns of urban residents in Brisbane, Queensland. The study employs mechanisms already inherent in popular mobile social media applications (Facebook, Twitter, and Foursquare) to collect these data. The research methods presented in this paper are innovative and potentially applicable to an increasing number of academic research areas, as well as to a growing range of service providers that benefit from monitoring consumer behaviour and responding to emerging changes in these patterns and trends. The ability to both collect and map objective, real-time data about the consumption, leisure, recreation, and physical activity patterns of urban communities has direct implications for a range of research disciplines, including media studies, advertising, health promotion, social marketing, public health inequalities, and urban design.
Abstract:
Background Although risk of human papillomavirus (HPV)–associated cancers of the anus, cervix, oropharynx, penis, vagina, and vulva is increased among persons with AIDS, the etiologic role of immunosuppression is unclear and incidence trends for these cancers over time, particularly after the introduction of highly active antiretroviral therapy in 1996, are not well described. Methods Data on 499 230 individuals diagnosed with AIDS from January 1, 1980, through December 31, 2004, were linked with cancer registries in 15 US regions. Risk of in situ and invasive HPV-associated cancers, compared with that in the general population, was measured by use of standardized incidence ratios (SIRs) and 95% confidence intervals (CIs). We evaluated the relationship of immunosuppression with incidence during the period of 4–60 months after AIDS onset by use of CD4 T-cell counts measured at AIDS onset. Incidence during the 4–60 months after AIDS onset was compared across three periods (1980–1989, 1990–1995, and 1996–2004). All statistical tests were two-sided. Results Among persons with AIDS, we observed statistically significantly elevated risk of all HPV-associated in situ (SIRs ranged from 8.9, 95% CI = 8.0 to 9.9, for cervical cancer to 68.6, 95% CI = 59.7 to 78.4, for anal cancer among men) and invasive (SIRs ranged from 1.6, 95% CI = 1.2 to 2.1, for oropharyngeal cancer to 34.6, 95% CI = 30.8 to 38.8, for anal cancer among men) cancers. During 1996–2004, low CD4 T-cell count was associated with statistically significantly increased risk of invasive anal cancer among men (relative risk [RR] per decline of 100 CD4 T cells per cubic millimeter = 1.34, 95% CI = 1.08 to 1.66, P = .006) and non–statistically significantly increased risk of in situ vagina or vulva cancer (RR = 1.52, 95% CI = 0.99 to 2.35, P = .055) and of invasive cervical cancer (RR = 1.32, 95% CI = 0.96 to 1.80, P = .077). Among men, incidence (per 100 000 person-years) of in situ and invasive anal cancer was statistically significantly higher during 1996–2004 than during 1990–1995 (61% increase for in situ cancers, 18.3 cases vs 29.5 cases, respectively; RR = 1.71, 95% CI = 1.24 to 2.35, P < .001; and 104% increase for invasive cancers, 20.7 cases vs 42.3 cases, respectively; RR = 2.03, 95% CI = 1.54 to 2.68, P < .001). Incidence of other cancers was stable over time. Conclusions Risk of HPV-associated cancers was elevated among persons with AIDS and increased with increasing immunosuppression. The increasing incidence for anal cancer during 1996–2004 indicates that prolonged survival may be associated with increased risk of certain HPV-associated cancers.
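As a side note on the statistics reported above: a standardized incidence ratio is the observed case count divided by the count expected from general-population rates, and its exact Poisson confidence interval has a standard chi-square form. A minimal sketch follows, using made-up placeholder counts rather than the study's data.

```python
# Exact Poisson CI for an SIR via the chi-square link (standard method,
# not code from the study; the counts below are illustrative only).
from scipy.stats import chi2

def sir_exact_ci(observed, expected, alpha=0.05):
    """SIR = observed / expected, with an exact Poisson 95% CI."""
    sir = observed / expected
    lo = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return sir, lo, hi

# e.g. 346 observed cancers against 10 expected from registry rates
print(sir_exact_ci(346, 10.0))  # -> roughly (34.6, 31.1, 38.5)
```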
Abstract:
The concept of local accumulation time (LAT) was introduced by Berezhkovskii and coworkers in 2010–2011 to give a finite measure of the time required for the transient solution of a reaction–diffusion equation to approach the steady-state solution (Biophys J. 99, L59 (2010); Phys Rev E. 83, 051906 (2011)). Such a measure is referred to as a critical time. Here, we show that LAT is, in fact, identical to the concept of mean action time (MAT) that was first introduced by McNabb in 1991 (IMA J Appl Math. 47, 193 (1991)). Although McNabb's initial argument was motivated by considering the mean particle lifetime (MPLT) for a linear death process, he applied the ideas to study diffusion. We extend the work of these authors by deriving expressions for the MAT for a general one-dimensional linear advection–diffusion–reaction problem. Using a combination of continuum and discrete approaches, we show that MAT and MPLT are equivalent for certain uniform-to-uniform transitions; these results provide a practical interpretation for MAT by directly linking the stochastic microscopic processes to a meaningful macroscopic timescale. We find that for more general transitions, the equivalence between MAT and MPLT does not hold. Unlike other critical time definitions, we show that it is possible to evaluate the MAT without solving the underlying partial differential equation (PDE). This makes MAT a simple and attractive quantity for practical situations. Finally, our work explores the accuracy of certain approximations derived using the MAT, showing that useful approximations for nonlinear kinetic processes can be obtained, again without treating the governing PDE directly.
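For readers unfamiliar with MAT, a brief sketch of the standard construction follows (after McNabb; the notation here is ours, not lifted from the paper, and assumes a monotone transition).

```latex
% Given a transient solution c(x,t) that relaxes from initial data c_0(x)
% to the steady state c_inf(x), define
F(x,t) = 1 - \frac{c(x,t) - c_{\infty}(x)}{c_{0}(x) - c_{\infty}(x)} ,
% which, for a monotone transition, increases from 0 to 1 and so plays the
% role of a cumulative distribution function in t. The mean action time is
% its mean (the second form follows by integration by parts):
T(x) = \int_{0}^{\infty} t \, \frac{\partial F(x,t)}{\partial t} \, dt
     = \int_{0}^{\infty} \bigl[ 1 - F(x,t) \bigr] \, dt .
```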
Abstract:
The driver response (reaction) time (tr) of the second queuing vehicle is generally longer than that of other vehicles at signalized intersections. Although this phenomenon was identified in 1972, it is still ignored in conventional departure models. This paper highlights the need for quantitative measurement and analysis of queuing vehicle performance in the spontaneous discharge pattern, because this can improve microsimulation. Video recordings from major cities in Australia, plus twenty-two sets of vehicle trajectories extracted from the Next Generation Simulation (NGSIM) Peachtree Street Dataset, have been analyzed to better understand queuing vehicle performance in the discharge process. Findings from this research help account for driver response time and can also be used for the calibration of microscopic traffic simulation models.
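A hedged sketch of the kind of measurement involved (not the paper's method): given stop-line crossing times for a discharging queue, successive discharge headways can be computed directly, and the elevated second headway reflects the longer response time of the second queued vehicle. The times below are made-up illustrative values.

```python
# Discharge headways from stop-line crossing times (seconds after green);
# all values are illustrative, not data from the study.
crossing_times = [2.1, 5.8, 8.4, 10.6, 12.7, 14.8]  # vehicles 1..6

headways = [t2 - t1 for t1, t2 in zip(crossing_times, crossing_times[1:])]
for pos, h in enumerate(headways, start=2):
    print(f"vehicle {pos}: discharge headway {h:.1f} s")
# Headways typically settle to the saturation headway (around 2 s) from
# about the 4th-5th vehicle; the second vehicle's headway stands out.
```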
Abstract:
This paper presents the benefits and issues related to travel time prediction on urban networks. Travel time information quantifies congestion and is perhaps the most important network performance measure. Travel time prediction has been an active area of research for the last five decades, and activities related to ITS have increased researchers' attention to better and more accurate real-time prediction of travel time. The majority of the literature on travel time prediction is applicable to freeways where, under non-incident conditions, traffic flow is not affected by external factors such as traffic control signals and opposing traffic flows. In urban environments the problem is more complicated, due to conflict areas (intersections), mid-link sources and sinks, etc., and needs to be addressed.
Abstract:
Raman spectroscopy, when used in spatially offset mode, has emerged as a promising tool for the identification of explosives and other hazardous substances concealed in opaque containers. The molecular fingerprinting capability of Raman spectroscopy makes it attractive for the unambiguous identification of hazardous substances in the field, and minimal sample preparation is required compared with other techniques. We report a field-portable time-resolved Raman sensor for the detection of concealed chemical hazards in opaque containers. The sensor uses a pulsed nanosecond laser source in conjunction with an intensified CCD detector, and employs a combination of time- and space-resolved Raman spectroscopy to enhance its detection capability. It can identify concealed hazards in a single measurement, without any chemometric data treatment.
Abstract:
The fashion ecosystem is at boiling point as consumers turn up the heat in all areas of the fashion value, trend and supply chain. While traditionally fashion has been a monologue from designer brand to consumer, new technology and the virtual world have given consumers a voice to engage brands in a conversation and express evolving needs, ideas and feedback. Product customisation is no longer innovative. Successful brands are including customers in the design process and holding conversations ‘with’ them to improve product, manufacturing, sales, distribution, marketing and sustainable business practices. Co-creation and crowd sourcing are integral to any successful business model, and designers and manufacturers are supplying the technology or tools for these creative, active, participatory ‘prosumers’. With this collaboration, however, there arises a worrying trend for fashion professionals: the ‘design it yourself’ ‘indiepreneur’ who, with the combination of technology, the internet, excess manufacturing capacity, crowd funding and the idea of sharing the creative integrity of a product (‘copyleft’ not copyright), is challenging the notion that the fashion supply chain is complex. The passive ‘consumer’ no longer exists. Fashion designers now share the stage with ‘amateur’ creators who are disrupting every activity they touch, while being motivated by profit as well as a quest for originality and innovation. This paper examines the effects this ‘consumer’ engagement is having on traditional fashion models and the fashion supply chain. Crowd sourcing, crowd funding, co-creating, design it yourself, global sourcing, the virtual supply chain, social media, online shopping, group buying, consumer-to-consumer marketing and retail, and branding the ‘individual’ are indicative of the new consumer-driven fashion models. Consumers now drive the fashion industry, from setting trends through to creating, producing, selling and marketing product. They can turn up the heat at any time, and at any point, in the fashion supply chain. They are raising the temperature at each and every stage of the chain, decreasing or eliminating the processes involved: decreasing the risk of fashion obsolescence, quantities for manufacture, complexity of distribution and the consumption of product; eliminating certain stages altogether; and limiting the brand as custodian of marketing. Some brands are discovering a new ‘enemy’: the very people they are trying to sell to.
Keywords: fashion supply chain, virtual world, consumer, ‘prosumers’, co-creation, fashion designers
Abstract:
A new spatial logic encompassing redefined concepts of time and place, space and distance, requires a comprehensive shift in the approach to designing workplace environments for today's adaptive, collaborative organizations operating in a dynamic business world. Together with substantial economic and cultural shifts and an increased emphasis on lifestyle considerations, advances in information technology have prompted a radical re-ordering of organizational relationships and the associated structures, processes, and places of doing business. Within the duality of space and an augmentation of the traditional notions of place, organizational and institutional structures pose new challenges for the design professions. The literature reveals that workplace design strategies have always had a mono-organizational focus; the burgeoning trend towards inter-organizational collaboration therefore exposed a gap in the knowledge relative to workplace design. The NetWorkPlace™ constitutes a multi-dimensional concept with the capacity to deal with the fluidity and ambiguity characteristic of the network context, both as a topic of research and as a way of going about it.
Abstract:
For over half a century, it has been known that the rate of morphological evolution appears to vary with the time frame of measurement. Rates of microevolutionary change, measured between successive generations, were found to be far higher than rates of macroevolutionary change inferred from the fossil record. More recently, it has been suggested that rates of molecular evolution are also time dependent, with the estimated rate depending on the timescale of measurement. This followed surprising observations that estimates of mutation rates, obtained in studies of pedigrees and laboratory mutation-accumulation lines, exceeded long-term substitution rates by an order of magnitude or more. Although a range of studies have provided evidence for such a pattern, the hypothesis remains relatively contentious. Furthermore, there is ongoing discussion about the factors that can cause molecular rate estimates to be dependent on time. Here we present an overview of our current understanding of time-dependent rates. We provide a summary of the evidence for time-dependent rates in animals, bacteria and viruses. We review the various biological and methodological factors that can cause rates to be time dependent, including the effects of natural selection, calibration errors, model misspecification and other artefacts. We also describe the challenges in calibrating estimates of molecular rates, particularly on the intermediate timescales that are critical for an accurate characterization of time-dependent rates. This has important consequences for the use of molecular-clock methods to estimate timescales of recent evolutionary events.
Abstract:
Determining the temporal scale of biological evolution has traditionally been the preserve of paleontology, with the timing of species originations and major diversifications all being read from the fossil record. However, the ages of the earliest (correctly identified) records will underestimate actual origins due to the incomplete nature of the fossil record and the necessity for lineages to have evolved sufficiently divergent morphologies in order to be distinguished. The possibility of inferring divergence times more accurately has been promoted by the idea that the accumulation of genetic change between modern lineages can be used as a molecular clock (Zuckerkandl and Pauling, 1965). In practice, though, molecular dates have often been so old as to be incongruent even with liberal readings of the fossil record. Prominent examples include inferred diversifications of metazoan phyla hundreds of millions of years before their Cambrian fossil record appearances (e.g., Nei et al., 2001) and a basal split between modern birds (Neoaves) that is almost double the age of their earliest recognizable fossils (e.g., Cooper and Penny, 1997).
Time dependency of molecular rate estimates and systematic overestimation of recent divergence times
Abstract:
Studies of molecular evolutionary rates have yielded a wide range of rate estimates for various genes and taxa. Recent studies based on population-level and pedigree data have produced remarkably high estimates of mutation rate, which strongly contrast with substitution rates inferred in phylogenetic (species-level) studies. Using Bayesian analysis with a relaxed-clock model, we estimated rates for three groups of mitochondrial data: avian protein-coding genes, primate protein-coding genes, and primate d-loop sequences. In all three cases, we found a measurable transition between the high, short-term (<1–2 Myr) mutation rate and the low, long-term substitution rate. The relationship between the age of the calibration and the rate of change can be described by a vertically translated exponential decay curve, which may be used for correcting molecular date estimates. The phylogenetic substitution rates in mitochondria are approximately 0.5% per million years for avian protein-coding sequences and 1.5% per million years for primate protein-coding and d-loop sequences. Further analyses showed that purifying selection offers the most convincing explanation for the observed relationship between the estimated rate and the depth of the calibration. We rule out the possibility that it is a spurious result arising from sequence errors, and find it unlikely that the apparent decline in rates over time is caused by mutational saturation. Using a rate curve estimated from the d-loop data, several dates for last common ancestors were calculated: modern humans and Neandertals (354 ka; 222–705 ka), Neandertals (108 ka; 70–156 ka), and modern humans (76 ka; 47–110 ka). If the rate curve for a particular taxonomic group can be accurately estimated, it can be a useful tool for correcting divergence date estimates by taking the rate decay into account. Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution.
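The "vertically translated exponential decay curve" admits a simple closed form; a sketch of the implied shape follows (our parameterization, not necessarily the paper's).

```latex
% Estimated rate r as a function of calibration age t:
r(t) = a + b \, e^{-c t}, \qquad a, b, c > 0 ,
% so r(0) = a + b recovers the high short-term (mutation) rate, while
% r(t) -> a, the low long-term phylogenetic substitution rate, as
% t -> infinity; the vertical translation is the positive asymptote a.
```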
Abstract:
Long-term changes in the genetic composition of a population occur by the fixation of new mutations, a process known as substitution. The rate at which mutations arise in a population and the rate at which they are fixed are expected to be equal under neutral conditions (Kimura, 1968). Between the appearance of a new mutation and its eventual fate of fixation or loss, there will be a period in which it exists as a transient polymorphism in the population (Kimura and Ohta, 1971). If the majority of mutations are deleterious (and nonlethal), the fixation probabilities of these transient polymorphisms are reduced and the mutation rate will exceed the substitution rate (Kimura, 1983). Consequently, different apparent rates may be observed on different time scales of the molecular evolutionary process (Penny, 2005; Penny and Holmes, 2001). The substitution rate of the mitochondrial protein-coding genes of birds and mammals has been traditionally recognized to be about 0.01 substitutions/site/million years (Myr) (Brown et al., 1979; Ho, 2007; Irwin et al., 1991; Shields and Wilson, 1987), with the noncoding D-loop evolving several times more quickly (e.g., Pesole et al., 1992; Quinn, 1992). Over the past decade, there has been mounting evidence that instantaneous mutation rates substantially exceed substitution rates, in a range of organisms (e.g., Denver et al., 2000; Howell et al., 2003; Lambert et al., 2002; Mao et al., 2006; Mumm et al., 1997; Parsons et al., 1997; Santos et al., 2005). The immediate reaction to the first of these findings was that the polymorphisms generated by the elevated mutation rate are short-lived, perhaps extending back only a few hundred years (Gibbons, 1998; Macaulay et al., 1997). That is, purifying selection was thought to remove these polymorphisms very rapidly.
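The reasoning in the opening sentences can be made explicit with Kimura's textbook result (our summary, not quoted from the passage).

```latex
% In a diploid population of size N with per-generation mutation rate \mu,
% new mutations arise at rate 2N\mu, and a neutral mutation fixes with
% probability 1/(2N), so the substitution rate equals the mutation rate:
k = 2N\mu \times \frac{1}{2N} = \mu .
% Deleterious (but nonlethal) mutations fix with probability below 1/(2N),
% giving k < \mu: the mutation rate exceeds the substitution rate, and the
% excess persists only as transient polymorphism.
```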