85 results for Spot
Abstract:
We determined the foveal Stiles-Crawford effect (SCE) as a function of accommodation stimulus up to 8 D in six young emmetropes and six young myopes using a psychophysical two-channel Maxwellian system in which the threshold luminance increment of a 1 mm spot entering through variable positions in the pupil was determined against a background formed by a 4 mm spot entering the pupil centrally. The SCE became steeper in both groups with increasing accommodation stimulus, but with no systematic shift of the peak. Combining the data of both groups gave significant increases in directionality of 15-20% in the horizontal and vertical pupil meridians with 6 D of accommodation. However, additional experiments indicated that much of this was an artefact of higher-order aberrations and accommodative lag. Thus, there appears to be little change in orientation or directionality of the SCE with accommodation stimulus levels up to 6 D, although changes may occur at very high accommodation levels.
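The directionality values reported above are conventionally obtained by fitting the classic SCE model eta(r) = eta0 * 10^(-rho*(r - r0)^2) to the sensitivity data. A minimal sketch of that fit follows; it is illustrative only (not the paper's code), and the function and variable names are assumptions:

```python
import numpy as np

def fit_sce(pupil_pos, rel_sensitivity):
    """Fit the classic SCE model eta(r) = eta0 * 10**(-rho * (r - r0)**2)
    via quadratic regression on log10 sensitivity.

    log10(eta) is quadratic in r, so the directionality rho and the
    peak location r0 fall out of the polynomial coefficients.
    Returns (rho, r0) with r in millimetres from the pupil centre.
    """
    c2, c1, _ = np.polyfit(pupil_pos, np.log10(rel_sensitivity), 2)
    rho = -c2                 # quadratic coefficient gives directionality
    r0 = c1 / (2.0 * rho)     # linear coefficient locates the peak
    return rho, r0
```

A 15-20% increase in directionality corresponds to rho growing by that factor while r0 (the peak) stays put.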
Abstract:
OBJECTIVES: To quantify the driving difficulties of older adults using a detailed assessment of driving performance and to link this with self-reported retrospective and prospective crashes. DESIGN: Prospective cohort study. SETTING: On-road driving assessment. PARTICIPANTS: Two hundred sixty-seven community-living adults aged 70 to 88 randomly recruited through the electoral roll. MEASUREMENTS: Performance on a standardized measure of driving performance. RESULTS: Lane positioning, approach, and blind spot monitoring were the most common error types, and errors occurred most frequently in situations involving merging and maneuvering. Drivers reporting more retrospective or prospective crashes made significantly more driving errors. Driver instructor interventions during self-navigation (where the instructor had to brake or take control of the steering to avoid an accident) were significantly associated with higher retrospective and prospective crashes; every instructor intervention almost doubled prospective crash risk. CONCLUSION: These findings suggest that on-road driving assessment provides useful information on older driver difficulties, with the self-directed component providing the most valuable information.
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties pose challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We will take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based only on second-order properties, seem to underestimate the long-memory dynamics of the process. This highlights the need for and usefulness of fractal methods in modelling non-Gaussian financial processes with long memory.
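The MF-DFA procedure this abstract relies on (integrate the series, detrend it in windows of size s, then examine how the q-th-order fluctuation function scales with s) can be sketched as follows. This is a toy implementation under the standard formulation, not the thesis code; for the increments of a memoryless process, h(2) should come out near 0.5:

```python
import numpy as np

def mfdfa(series, scales, qs, order=1):
    """Toy MF-DFA: return the generalised Hurst exponents h(q).

    series : 1-D array of increments (e.g. log returns)
    scales : window sizes s
    qs     : moment orders q (q = 2 recovers standard DFA)
    order  : polynomial order used to detrend each window
    """
    profile = np.cumsum(series - np.mean(series))   # integrated profile
    F = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n = len(profile) // s
        var = np.empty(n)
        t = np.arange(s)
        for i in range(n):                          # detrend each window
            seg = profile[i * s:(i + 1) * s]
            fit = np.polyval(np.polyfit(t, seg, order), t)
            var[i] = np.mean((seg - fit) ** 2)
        for k, q in enumerate(qs):                  # q-th order fluctuation
            if q == 0:
                F[k, j] = np.exp(0.5 * np.mean(np.log(var)))
            else:
                F[k, j] = np.mean(var ** (q / 2.0)) ** (1.0 / q)
    # h(q) is the log-log slope of F_q(s) against s
    return np.array([np.polyfit(np.log(scales), np.log(F[k]), 1)[0]
                     for k in range(len(qs))])
```

An h(q) that varies with q signals multifractality; a constant h(q) above 0.5 signals plain long memory.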
Abstract:
The attention paid by the British music press in 1976 to the release of The Saints’ first single “I’m Stranded” was the trigger for a commercial and academic interest in the Brisbane music scene which still has significant energy. In 2007, Brisbane was identified by Billboard Magazine as a “hot spot” of independent music. A place to watch. Someone turned a torch on this town, had a quick look, moved on. But this town has always had music in it. Some of it made by me. So, I’m taking this connection of mine, and working it into a contextual historical analysis of the creative lives of Brisbane musicians. I will be interviewing a number of Brisbane musicians. These interviews have begun, and will continue to be conducted in 2011/2012. I will ask questions and pursue memories that will encompass family, teenage years, siblings, the suburbs, the city, venues, television and radio; but then widen to welcome the river, the hills and mountains, foes and friends, beliefs and death. The wider research will be a contextual historical analysis of the creative lives of Brisbane musicians. It will explore the changing nature of their work practices over time and will consider the notion, among other factors, of ‘place’ in both their creative practice and their creative output. It will also examine how the presence of the practitioners and their work is seen to contribute to the cultural life of the city and the creative lives of its citizens into the future. This paper offers an analysis of this last notion: how does this city see its music-makers? In addition to the interviews, over 300 Brisbane musicians were surveyed in September 2009 as part of a QUT-initiated recorded music event (BIGJAM). Their responses will inform the production of this paper.
Abstract:
Chlamydia pneumoniae is a common human and animal pathogen associated with a wide range of upper and lower respiratory tract infections. In recent years there has been increasing evidence of a link between C. pneumoniae and chronic diseases in humans, including atherosclerosis, stroke and Alzheimer’s disease. C. pneumoniae human strains show little genetic variation, indicating that the human-derived strain originated from a common ancestor in the recent past. Despite extensive information on the genetics and morphology of the human strain, the strains infecting many other hosts (including marsupials, amphibians, reptiles and equines) remain virtually unexplored. The koala (Phascolarctos cinereus) is a native Australian marsupial under threat due to habitat loss, predation and disease. Koalas are very susceptible to chlamydial infections, most commonly affecting the conjunctiva, urogenital tract and/or respiratory tract. To address this gap in the literature, the present study (i) provides a detailed description of the morphologic and genomic architecture of the C. pneumoniae koala (and human) strain, and shows that the koala strain is microscopically, developmentally and genetically distinct from the C. pneumoniae human strain, and (ii) examines the genetic relationship of geographically diverse C. pneumoniae isolates from human, marsupial, amphibian, reptilian and equine hosts, and identifies two distinct lineages that have arisen from animal-to-human cross-species transmissions. Chapter One of this thesis explores the scientific problem and aims of this study, while Chapter Two provides a detailed literature review of the background in this field of work. Chapter Three, the first results chapter, describes the morphology and developmental stages of C. pneumoniae koala isolate LPCoLN, as revealed by fluorescence and transmission electron microscopy.
The profile of this isolate, when cultured in HEp-2 human epithelial cells, was quite different from that of the human AR39 isolate. Koala LPCoLN inclusions were larger; the elementary bodies did not have the characteristic pear-shaped appearance, and the developmental cycle was completed within a shorter period of time (as confirmed by quantitative real-time PCR). These in vitro findings might reflect biological differences between koala LPCoLN and human AR39 in vivo. Chapter Four describes the complete genome sequence of the koala respiratory pathogen, C. pneumoniae LPCoLN. This is the first animal isolate of C. pneumoniae to be fully sequenced. The genome sequence provides new insights into the genomic ‘plasticity’ (organisation), evolution and biology of koala LPCoLN, relative to four complete C. pneumoniae human genomes (AR39, CWL029, J138 and TW183). Koala LPCoLN contains a plasmid that is not shared with any of the human isolates, there is evidence of gene loss in nucleotide salvage pathways, and there are 10 hot spot genomic regions of variation that were not previously identified in the C. pneumoniae human genomes. Sequence (partial-length) from a second, independent, wild koala isolate (EBB) at several gene loci confirmed that the koala LPCoLN isolate is representative of a koala C. pneumoniae strain. The combined sequence data provide evidence that the C. pneumoniae animal (koala LPCoLN) genome is ancestral to the C. pneumoniae human genomes and that human infections may have originated from zoonotic infections. Chapter Five examines key genome components of the five C. pneumoniae genomes in more detail. This analysis reveals genomic features that are shared by and/or contribute to the broad ecological adaptability and evolution of C. pneumoniae.
This analysis resulted in the identification of 65 gene sequences for further analysis of intraspecific variation, and revealed some interesting differences, including fragmentation, truncation and gene decay (loss of redundant ancestral traits). This study provides valuable insights into the metabolic diversity, adaptation and evolution of C. pneumoniae. Chapter Six utilises a subset of 23 target genes identified from the previous genomic comparisons and makes a significant contribution to our understanding of genetic variability among C. pneumoniae human (11) and animal (6 amphibian, 5 reptilian, 1 equine and 7 marsupial host) isolates. It is shown that the animal isolates are genetically diverse, unlike the human isolates, which are virtually clonal. This study provides further evidence that C. pneumoniae originated in animals and recently (in the last few hundred thousand years) crossed host species to infect humans. Based on these results, it is proposed that two animal-to-human cross-species events have occurred: one evident in the nearly clonal human genotype circulating in the world today, and the other in a more animal-like genotype apparent in Indigenous Australians. Taken together, these data indicate that the C. pneumoniae koala LPCoLN isolate has morphologic and genomic characteristics that are distinct from those of the human isolates. These differences may affect the survival and activity of the C. pneumoniae koala pathogen in its natural host, in vivo. This study, by utilising the genetic diversity of C. pneumoniae, identified new genetic markers for distinguishing human and animal isolates. However, not all C. pneumoniae isolates were genetically diverse; in fact, several isolates were highly conserved, if not identical in sequence (i.e. the Australian marsupials), emphasising that at some stage in the evolution of this pathogen there has been an adaptation (or adaptations) to a particular host, providing some stability in the genome.
The outcomes of this study, obtained through experimental and bioinformatic approaches, have significantly enhanced our knowledge of the biology of this pathogen and will advance opportunities for the investigation of novel vaccine targets, antimicrobial therapy, or the blocking of pathogenic pathways.
Abstract:
Regional safety program managers face a daunting challenge in the attempt to reduce deaths, injuries, and economic losses that result from motor vehicle crashes. This difficult mission is complicated by the combination of a large perceived need, small budget, and uncertainty about how effective each proposed countermeasure would be if implemented. A manager can turn to the research record for insight, but the measured effect of a single countermeasure often varies widely from study to study and across jurisdictions. The challenge of converting widespread and conflicting research results into a regionally meaningful conclusion can be addressed by incorporating "subjective" information into a Bayesian analysis framework. Engineering evaluations of crashes provide the subjective input on countermeasure effectiveness in the proposed Bayesian analysis framework. Empirical Bayes approaches are widely used in before-and-after studies and "hot-spot" identification; however, in these cases, the prior information was typically obtained from the data (empirically), not subjective sources. The power and advantages of Bayesian methods for assessing countermeasure effectiveness are presented. Also, an engineering evaluation approach developed at the Georgia Institute of Technology is described. Results are presented from an experiment conducted to assess the repeatability and objectivity of subjective engineering evaluations. In particular, the focus is on the importance, methodology, and feasibility of the subjective engineering evaluation for assessing countermeasures.
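The framework described, where subjective engineering judgement forms the prior and observed crash records update it, can be illustrated with a conjugate gamma-Poisson sketch. The parameterisation below is an assumption chosen for illustration, not the paper's model:

```python
def posterior_cmf(prior_mean, prior_sd, after_crashes, expected_baseline):
    """Combine a subjective prior on countermeasure effectiveness with
    observed after-period crash counts.

    theta is a crash modification factor: after-period crashes are
    modelled as Poisson(theta * expected_baseline), and the engineers'
    estimate is moment-matched to a conjugate Gamma(a, b) prior on theta.
    Returns the posterior mean and standard deviation of theta.
    """
    b = prior_mean / prior_sd ** 2     # Gamma(a, b): mean a/b, var a/b^2
    a = prior_mean * b
    a_post = a + after_crashes         # conjugate Poisson update
    b_post = b + expected_baseline
    return a_post / b_post, a_post ** 0.5 / b_post
```

For example, if evaluators judge that the countermeasure leaves about 80% of crashes (prior mean 0.8, sd 0.2) and 6 crashes are then observed against an expected baseline of 10, the posterior mean falls to roughly 0.73 with a tighter standard deviation: the subjective and empirical evidence are blended rather than either being discarded.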
Abstract:
Identification of hot spots, also known as the sites with promise, black spots, accident-prone locations, or priority investigation locations, is an important and routine activity for improving the overall safety of roadway networks. Extensive literature focuses on methods for hot spot identification (HSID). A subset of this considerable literature is dedicated to conducting performance assessments of various HSID methods. A central issue in comparing HSID methods is the development and selection of quantitative and qualitative performance measures or criteria. The authors contend that currently employed HSID assessment criteria—namely false positives and false negatives—are necessary but not sufficient, and additional criteria are needed to exploit the ordinal nature of site ranking data. With the intent to equip road safety professionals and researchers with more useful tools to compare the performances of various HSID methods and to improve the level of HSID assessments, this paper proposes four quantitative HSID evaluation tests that are, to the authors’ knowledge, new and unique. These tests evaluate different aspects of HSID method performance, including reliability of results, ranking consistency, and false identification consistency and reliability. It is intended that road safety professionals apply these different evaluation tests in addition to existing tests to compare the performances of various HSID methods, and then select the most appropriate HSID method to screen road networks to identify sites that require further analysis. This work demonstrates four new criteria using 3 years of Arizona road section accident data and four commonly applied HSID methods [accident frequency ranking, accident rate ranking, accident reduction potential, and empirical Bayes (EB)]. The EB HSID method reveals itself as the superior method in most of the evaluation tests. 
In contrast, identifying hot spots using accident rate rankings performs the least well among the tests. The accident frequency and accident reduction potential methods perform similarly, with slight differences explained. The authors believe that the four new evaluation tests offer insight into HSID performance heretofore unavailable to analysts and researchers.
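A ranking-consistency test of the kind proposed could, for instance, compare the orderings produced by two HSID methods (or by one method over two time periods) using a Spearman rank correlation. This is a minimal stand-in sketch, not the authors' actual tests:

```python
import numpy as np

def rank_consistency(scores_a, scores_b):
    """Spearman rank correlation between two sets of site scores.

    +1 means the two rankings order the sites identically; -1 means
    they order them in reverse. Assumes no ties in either score list.
    """
    ra = np.argsort(np.argsort(scores_a)).astype(float)  # ranks 0..n-1
    rb = np.argsort(np.argsort(scores_b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))
```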
Abstract:
Identifying crash “hotspots”, “blackspots”, “sites with promise”, or “high risk” locations is standard practice in departments of transportation throughout the US. The literature is replete with the development and discussion of statistical methods for hotspot identification (HSID). Theoretical derivations and empirical studies have been used to weigh the benefits of various HSID methods; however, only a small number of studies have used controlled experiments to systematically assess various methods. Using experimentally derived simulated data—which are argued to be superior to empirical data for this purpose—three hot spot identification methods observed in practice are evaluated: simple ranking, confidence interval, and Empirical Bayes. Using simulated data, sites with promise are known a priori, in contrast to empirical data where high-risk sites are not known with certainty. To conduct the evaluation, properties of observed crash data are used to generate simulated crash frequency distributions at hypothetical sites. A variety of factors is manipulated to simulate a host of ‘real world’ conditions. Various levels of confidence are explored, and false positives (identifying a safe site as high risk) and false negatives (identifying a high-risk site as safe) are compared across methods. Finally, the effects of crash history duration on the three HSID approaches are assessed. The results illustrate that the Empirical Bayes technique significantly outperforms the ranking and confidence interval techniques (with certain caveats). As found by others, false positives and negatives are inversely related. Three years of crash history appears, in general, to provide an appropriate crash history duration.
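The gap between simple ranking and Empirical Bayes that such evaluations expose comes from shrinkage: EB pulls noisy estimates from low-exposure sites toward a regional prior instead of taking raw rates at face value. A toy illustration (the prior parameters and numbers are invented for the example):

```python
def raw_rate(count, exposure):
    """Naive site score: observed crashes per unit exposure."""
    return count / exposure

def eb_rate(count, exposure, a, b):
    """Empirical Bayes site score: posterior mean crash rate.

    With lambda ~ Gamma(a, b) and count ~ Poisson(lambda * exposure),
    the posterior is Gamma(a + count, b + exposure); low-exposure
    sites are shrunk hardest toward the prior mean a / b.
    """
    return (a + count) / (b + exposure)
```

With a prior mean of 2 crashes per unit exposure (a = 2, b = 1), a site with 4 crashes over exposure 1 out-ranks a site with 35 crashes over exposure 10 on raw rate (4.0 vs 3.5), but the ordering flips under EB because the small site's estimate is judged less trustworthy.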
Abstract:
The cellular response to radiation damage is mediated by a complex network of pathways and feedback loops whose spatiotemporal organisation is still unclear despite its decisive role in determining the fate of the damaged cell. Revealing the dynamic sequence of the repair proteins is therefore critical to understanding how the DNA repair mechanisms work. There are also still open questions regarding the possible movement of damaged chromatin domains and its role as a trigger for lesion recognition and signalling in the DNA repair context. The single-cell approach and the high spatial resolution offered by microbeams provide the perfect tool to study and quantify the dynamic processes associated with the induction and repair of DNA damage. We have followed the development of radiation-induced foci for three DNA damage markers (i.e. γ-H2AX, 53BP1 and hSSB1) using normal fibroblasts (AG01522), human breast adenocarcinoma cells (MCF7) and human fibrosarcoma cells (HT1080) stably transfected with yellow fluorescent protein fusion proteins following irradiation with the QUB X-ray microbeam (carbon X-rays, <2 µm spot). The size and intensity of the foci have been analysed as a function of dose and time post-irradiation to investigate the dynamics of the above-mentioned DNA repair processes and monitor the remodelling of chromatin structure that the cell undergoes to deal with DNA damage.
Abstract:
Hot spot identification (HSID) plays a significant role in improving the safety of transportation networks. Numerous HSID methods have been proposed, developed, and evaluated in the literature. The vast majority of HSID methods reported and evaluated in the literature assume that crash data are complete, reliable, and accurate. Crash under-reporting, however, has long been recognized as a threat to the accuracy and completeness of historical traffic crash records. As a natural continuation of prior studies, this paper evaluates the influence that under-reported crashes exert on HSID methods. To conduct the evaluation, five groups of data gathered from the Arizona Department of Transportation (ADOT) over the course of three years are adjusted to account for fifteen different assumed levels of under-reporting. Three identification methods are evaluated: simple ranking (SR), empirical Bayes (EB) and full Bayes (FB). Various threshold levels for establishing hotspots are explored. Finally, two evaluation criteria are compared across HSID methods. The results illustrate that identification bias—the ability to correctly identify at-risk sites—is influenced by the degree of under-reporting. Comparatively speaking, crash under-reporting has the largest influence on the FB method and the least influence on the SR method. Additionally, the impact is positively related to the percentage of under-reported property-damage-only (PDO) crashes and inversely related to the percentage of under-reported injury crashes. This finding is significant because it reveals that despite PDO crashes being the least severe and costly, they have the most significant influence on the accuracy of HSID.
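Adjusting recorded counts for an assumed level of under-reporting, as done to the crash data here, amounts to scaling each severity class by its reporting rate. A minimal sketch (the rates shown in the test are hypothetical, not values from the paper):

```python
def adjust_for_underreporting(observed, reporting_rate):
    """Scale each severity class up by its assumed reporting rate.

    observed       : dict of severity -> recorded crash count
    reporting_rate : dict of severity -> assumed fraction reported, in (0, 1]
    """
    return {sev: n / reporting_rate[sev] for sev, n in observed.items()}
```

Because PDO crashes are reported far less consistently than injury crashes, varying their assumed rate moves the adjusted totals, and hence the site rankings, the most.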
Abstract:
We all live in a yellow submarine… When I go to work in the morning, in the office building that hosts our BPM research group, on the way up to our level I come by this big breakout room that hosts a number of computer scientists, working away at the next generation software algorithms and iPad applications (I assume). I have never actually been in that room, but every now and then the door is left ajar for a while and I can spot couches, lots (I mean, lots!) of monitors, the odd scientist, a number of Lara Croft posters, and the usual room equipment you’d probably expect from computer scientists (and, no, it’s not like that evil Dennis guy from the Jurassic Park movie, buried in chips, coke, and flickering code screens… It’s also not like the command room from the Nebuchadnezzar, Neo’s hovercraft in the Matrix movies, although I still strongly believe these green lines of code make a good screensaver).
Abstract:
In previous research (Chung et al., 2009), the potential of the continuous risk profile (CRP) to proactively detect the systematic deterioration of freeway safety levels was presented. In this paper, this potential is investigated further, and an algorithm is proposed for proactively detecting sites where the collision rate is not sufficiently high to be classified as a high collision concentration location but where a systematic deterioration of safety level is observed. The approach proposed compares the weighted CRP across different years and uses the cumulative sum (CUSUM) algorithm to detect the sites where changes in collision rate are observed. The CRPs of the detected sites are then compared for reproducibility. When high reproducibility is observed, a growth factor is used for sequential hypothesis testing to determine if the collision profiles are increasing over time. Findings from applying the proposed method using empirical data are documented in the paper together with a detailed description of the method.
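The CUSUM step of the proposed algorithm accumulates deviations of the observed collision rate above a reference level and raises an alarm once the cumulative sum crosses a threshold. A generic one-sided sketch follows (parameter names and values are illustrative, not taken from the paper):

```python
def cusum_alarm(rates, reference, slack, threshold):
    """One-sided upper CUSUM on a sequence of collision rates.

    Accumulates S_t = max(0, S_{t-1} + x_t - reference - slack) and
    returns the index of the first observation where S_t exceeds the
    threshold, or None if no alarm is raised. The slack term absorbs
    small fluctuations so only sustained increases trigger detection.
    """
    s = 0.0
    for t, x in enumerate(rates):
        s = max(0.0, s + x - reference - slack)
        if s > threshold:
            return t
    return None
```

A rate series that drifts from 1 to 3 (reference 1.0, slack 0.5, threshold 3.0) triggers a couple of observations after the shift, which is the sustained-deterioration behaviour the paper targets, rather than a one-off spike.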
Abstract:
A hospital consists of a number of wards, units and departments that provide a variety of medical services and interact on a day-to-day basis. Nearly every department within a hospital schedules patients for the operating theatre (OT) and most wards receive patients from the OT following post-operative recovery. Because of the interrelationships between units, disruptions and cancellations within the OT can have a flow-on effect to the rest of the hospital. This often results in dissatisfied patients, nurses and doctors, escalating waiting lists, inefficient resource usage and undesirable waiting times. The objective of this study is to use Operational Research methodologies to enhance the performance of the operating theatre by improving elective patient planning using robust scheduling and improving the overall responsiveness to emergency patients by solving the disruption management and rescheduling problem. OT scheduling considers two types of patients: elective and emergency. Elective patients are selected from a waiting list and scheduled in advance based on resource availability and a set of objectives. This type of scheduling is referred to as ‘offline scheduling’. Disruptions to this schedule can occur for various reasons including variations in length of treatment, equipment restrictions or breakdown, unforeseen delays and the arrival of emergency patients, which may compete for resources. Emergency patients consist of acute patients requiring surgical intervention or in-patients whose conditions have deteriorated. These may or may not be urgent and are triaged accordingly. Most hospitals reserve theatres for emergency cases, but when these or other resources are unavailable, disruptions to the elective schedule result, such as delays in surgery start time, elective surgery cancellations or transfers to another institution. Scheduling of emergency patients and the handling of schedule disruptions is an ‘online’ process typically handled by OT staff. 
This means that decisions are made ‘on the spot’ in a ‘real-time’ environment. There are three key stages to this study: (1) Analyse the performance of the operating theatre department using simulation. Simulation is used as a decision support tool and involves changing system parameters and elective scheduling policies and observing the effect on the system’s performance measures; (2) Improve the viability of elective schedules by making offline schedules more robust to differences between expected treatment times and actual treatment times, using robust scheduling techniques. This will improve access to care and the responsiveness to emergency patients; (3) Address the disruption management and rescheduling problem (which incorporates emergency arrivals) using innovative robust reactive scheduling techniques. The robust schedule will form the baseline schedule for the online robust reactive scheduling model.
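One simple way to make an offline elective schedule robust in the sense of stage (2) is to insert slack proportional to the variability of each case's duration, so that moderate overruns do not cascade into downstream delays. This is a toy sketch of the idea, not the study's actual model:

```python
def robust_start_times(mean_durations, sd_durations, z=1.0):
    """Slack-based robust schedule for a single operating theatre.

    Each case is scheduled to start after the previous case's expected
    duration plus a z * sd safety buffer; larger z trades theatre
    utilisation for fewer knock-on delays and cancellations.
    """
    start, starts = 0.0, []
    for m, s in zip(mean_durations, sd_durations):
        starts.append(start)
        start += m + z * s
    return starts
```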
Abstract:
In recent years, car club and racing websites and forums have become an increasingly popular way for car enthusiasts to access racing and car club news, chat-rooms and message boards. However, no North American research has been found that has examined the opinions and driving experiences of car and racing enthusiasts. The purpose of this study was to examine car club members’ opinions about and experiences with various aspects of driving, road safety and traffic legislation, with a particular focus on street racing. A web-based questionnaire (Survey Monkey) was developed using the expert panel method and was primarily based on validated instruments or questions that were developed from other surveys. The questionnaire included: 1) driver concerns regarding traffic safety issues and legislation; 2) attitudes regarding various driving activities; 3) leisure-time activities, including club activities; 4) driving experiences, including offences and collisions; and 5) socio-demographic questions. The survey was pre-tested and piloted. Electronic information letters were sent out to an identified list of car clubs and forums situated in southern Ontario. Car club participants were invited to fill out the questionnaire. This survey found that members of car clubs share similar concerns regarding various road safety issues with samples of Canadian drivers, although a smaller percentage of car club members are concerned about speeding-related driving. Car club members had varied opinions regarding Ontario’s Street Racers, Stunt and Aggressive Drivers Legislation. The respondents agreed most with the new offences regarding not sitting in the driver’s seat, having a person in the trunk, or driving as close as possible to another vehicle, pedestrian or object on or near the highway without a reason. The majority disagreed with police powers of impoundment and on-the-spot licence suspensions.
About three quarters of respondents reported no collisions or police stops for traffic offences in the past five years. Of those who had been stopped, the most common offence was reported as speeding. This study is the first in Canada to examine car club members’ opinions about and experiences with various aspects of driving, road safety and traffic legislation. Given the ubiquity of car clubs and fora in Canada, insights on members’ opinions and practices provide important information to road safety researchers.
Abstract:
A growing body of research is looking at ways to bring the processes and benefits of online deliberation to the places they concern and, in turn, allow a larger, targeted proportion of the urban public to have a voice, be heard, and engage in questions of city planning and design. Seeking to take advantage of the civic opportunities of situated engagement through public screens and mobile devices, our research informed a public urban screen content application, DIS, which we deployed and evaluated in a wide range of real-world public and urban environments. For example, it is currently running on the renowned urban screen at Federation Square in Melbourne. We analysed the data from these user studies within a conceptual framework that positions situated engagement across three key parameters: people, content, and location. We propose a way to identify the sweet spot within the nexus of these parameters to help deploy and run interactive systems that maximise the quality of situated engagement for civic and related deliberation purposes.