976 results for True Time
Abstract:
This book provides a general framework for specifying, estimating, and testing time series econometric models. Special emphasis is given to estimation by maximum likelihood, but other methods are also discussed, including quasi-maximum likelihood estimation, generalized method of moments estimation, nonparametric estimation, and estimation by simulation. An important advantage of adopting the principle of maximum likelihood as the unifying framework for the book is that many of the estimators and test statistics proposed in econometrics can be derived within a likelihood framework, thereby providing a coherent vehicle for understanding their properties and interrelationships. In contrast to many existing econometric textbooks, which deal mainly with the theoretical properties of estimators and test statistics through a theorem-proof presentation, this book squarely addresses implementation to provide direct conduits between the theory and applied work.
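As a minimal illustration of the likelihood framework the book adopts (our own sketch, not the book's code), the following Python example estimates an AR(1) model by maximising the conditional log-likelihood; the function and parameter names are our own.

```python
import numpy as np
from scipy.optimize import minimize

def ar1_neg_loglik(params, y):
    """Negative conditional log-likelihood of an AR(1) model
    y_t = c + phi * y_{t-1} + e_t, with e_t ~ N(0, sigma^2)."""
    c, phi, log_sigma = params
    sigma = np.exp(log_sigma)          # parameterise log(sigma) to keep sigma > 0
    resid = y[1:] - c - phi * y[:-1]   # one-step-ahead prediction errors
    n = resid.size
    return 0.5 * n * np.log(2 * np.pi * sigma**2) + 0.5 * np.sum(resid**2) / sigma**2

# Simulate an AR(1) series with c = 0.5, phi = 0.8, sigma = 1
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 + 0.8 * y[t - 1] + rng.normal()

fit = minimize(ar1_neg_loglik, x0=[0.0, 0.1, 0.0], args=(y,))
c_hat, phi_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(c_hat, phi_hat, sigma_hat)       # estimates should be near 0.5, 0.8, 1.0
```

The same maximised likelihood also yields standard errors and likelihood-ratio tests, which is the sense in which the framework unifies estimation and testing.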
Abstract:
Exposure to ultrafine particles (UFPs) is deemed a major risk to human health. Airborne particle studies have therefore been performed in recent years to evaluate the most critical micro-environments and to identify the main UFP sources. Nonetheless, to properly evaluate UFP exposure, personal monitoring is required as the only way to relate particle exposure levels to the activities performed and micro-environments visited. To this purpose, the present work reports the results of an experimental analysis aimed at showing the effect of time-activity patterns on UFP personal exposure. In particular, 24 non-smoking couples (12 during winter and 12 during summer), each comprising a man who worked full-time and a woman who was a homemaker, were analyzed using personal particle counters and GPS monitors. Each couple was investigated for a 48-h period, during which they also filled out a diary reporting the daily activities performed. Time-activity patterns, particle number concentration exposure and the related dose received by the participants, in terms of particle alveolar-deposited surface area, were measured. The average exposure to particle number concentration was higher for women during both summer and winter (summer: women 1.8×10⁴ part. cm⁻³, men 9.2×10³ part. cm⁻³; winter: women 2.9×10⁴ part. cm⁻³, men 1.3×10⁴ part. cm⁻³), which was likely due to the time spent cooking. Staying indoors after cooking also led to a higher alveolar-deposited surface area dose for both women and men during winter (9.12×10² and 6.33×10² mm², respectively), when indoor ventilation was greatly reduced. The effect of cooking activities was also detected in women's dose intensity (dose per unit time), being 8.6 and 6.6 in winter and summer, respectively. By contrast, the highest dose-intensity activity for men was time spent in transportation (2.8 in both winter and summer).
Abstract:
Although transit travel time variability is essential for understanding the deterioration of reliability, optimising transit schedules and modelling route choice, it has not attracted enough attention in the literature. This paper proposes public transport-oriented definitions of travel time variability and explores the distributions of public transport travel time using Transit Signal Priority data. First, definitions of public transport travel time variability are established by extending the common definitions of variability in the literature and by using route and service data of public transport vehicles. Second, the paper explores the distribution of public transport travel time. A new approach for analysing the distributions, involving all transit vehicles as well as vehicles from a specific route, is proposed. The lognormal distribution is revealed as the best descriptor of public transport travel times from the same route and service. The methods described in this study could be of interest to both traffic managers and transit operators for planning and managing transit systems.
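As an illustrative sketch of this kind of distribution fitting (our own example, not the authors' code), one could fit a lognormal distribution to route-level travel times with scipy and check the fit:

```python
import numpy as np
from scipy import stats

# Hypothetical travel times (seconds) for one route/service, e.g. from TSP logs
rng = np.random.default_rng(1)
travel_times = rng.lognormal(mean=6.0, sigma=0.25, size=1000)

# Fit a lognormal with location fixed at 0, then test goodness of fit
shape, loc, scale = stats.lognorm.fit(travel_times, floc=0)
ks_stat, p_value = stats.kstest(travel_times, "lognorm", args=(shape, loc, scale))
print(f"sigma={shape:.3f}, median={scale:.1f}s, KS p-value={p_value:.3f}")
```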
Abstract:
Most studies examining the temperature–mortality association in a city used temperatures from one site or the average from a network of sites. This may cause measurement error as temperature varies across a city due to effects such as urban heat islands. We examined whether spatiotemporal models using spatially resolved temperatures produced different associations between temperature and mortality compared with time series models that used non-spatial temperatures. We obtained daily mortality data in 163 areas across Brisbane city, Australia from 2000 to 2004. We used ordinary kriging to interpolate spatial temperature variation across the city based on 19 monitoring sites. We used a spatiotemporal model to examine the impact of spatially resolved temperatures on mortality. Also, we used a time series model to examine non-spatial temperatures using a single site and the average temperature from three sites. We used squared Pearson scaled residuals to compare model fit. We found that kriged temperatures were consistent with observed temperatures. Spatiotemporal models using kriged temperature data yielded slightly better model fit than time series models using a single site or the average of three sites' data. Despite this better fit, spatiotemporal and time series models produced similar associations between temperature and mortality. In conclusion, time series models using non-spatial temperatures were equally good at estimating the city-wide association between temperature and mortality as spatiotemporal models.
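A minimal sketch of ordinary kriging of temperatures across monitoring sites, using the pykrige package (our illustration; the paper does not specify its software, and the coordinates and values below are hypothetical):

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical coordinates (lon, lat) and daily mean temperatures
# for 19 monitoring sites across a city
rng = np.random.default_rng(4)
lon = rng.uniform(152.9, 153.2, 19)
lat = rng.uniform(-27.6, -27.3, 19)
temp = 25 + 2 * rng.standard_normal(19)

# Fit an ordinary kriging model and predict on a grid covering the city
ok = OrdinaryKriging(lon, lat, temp, variogram_model="spherical")
grid_lon = np.linspace(152.9, 153.2, 50)
grid_lat = np.linspace(-27.6, -27.3, 50)
temp_pred, variance = ok.execute("grid", grid_lon, grid_lat)
print(temp_pred.shape)  # (50, 50) interpolated temperature surface
```

The interpolated surface can then feed a spatiotemporal mortality model in place of a single site's reading, which is the comparison the study makes.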
Abstract:
Few studies have formally examined the relationship between meteorological factors and the incidence of child pneumonia in the tropics, despite the fact that most child pneumonia deaths occur there. We examined the association between four meteorological exposures (rainy days, sunshine, relative humidity, temperature) and the incidence of clinical pneumonia in young children in the Philippines using three time-series methods: correlation of seasonal patterns, distributed lag regression, and case-crossover. Lack of sunshine was most strongly associated with pneumonia in both lagged regression [overall relative risk over the following 60 days for a 1-h increase in sunshine per day was 0·67 (95% confidence interval (CI) 0·51–0·87)] and case-crossover analysis [odds ratio for a 1-h increase in mean daily sunshine 8–14 days earlier was 0·95 (95% CI 0·91–1·00)]. This association is well known in temperate settings but has not been noted previously in the tropics. Further research to assess causality is needed.
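A simple sketch of one of the three methods, distributed lag regression, regressing pneumonia counts on lagged sunshine hours with a Poisson GLM in statsmodels (our own illustration; the paper's exact lag structure and covariates may differ):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily data: pneumonia case counts and mean sunshine hours
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "cases": rng.poisson(5, 400),
    "sunshine": rng.uniform(0, 10, 400),
})

# Lagged sunshine terms over the prior 60 days (weekly spacing to save df)
for lag in range(0, 61, 7):
    df[f"sun_lag{lag}"] = df["sunshine"].shift(lag)
df = df.dropna()

X = sm.add_constant(df[[c for c in df.columns if c.startswith("sun_lag")]])
model = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit()

# Overall RR for a 1-h sunshine increase = exp(sum of the lag coefficients)
rr = np.exp(model.params.filter(like="sun_lag").sum())
print("overall relative risk:", rr)
```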
Abstract:
With an increased emphasis on genotyping of single nucleotide polymorphisms (SNPs) in disease association studies, the genotyping platform of choice is constantly evolving. In addition, the development of more specific SNP assays and appropriate genotype validation applications is becoming increasingly critical to elucidate ambiguous genotypes. In this study, we used SNP-specific Locked Nucleic Acid (LNA) hybridization probes on a real-time PCR platform to genotype an association cohort, and we propose three criteria to address ambiguous genotypes. Based on the kinetic properties of PCR amplification, the three criteria address PCR amplification efficiency, the net fluorescent difference between maximal and minimal fluorescent signals, and the beginning of the exponential growth phase of the reaction. Initially observed SNP allelic discrimination curves were confirmed by DNA sequencing (n = 50), and application of our three genotype criteria corroborated both sequencing and observed real-time PCR results. In addition, the tested Caucasian association cohort was in Hardy-Weinberg equilibrium, and observed allele frequencies were very similar to those of two independently tested Caucasian association cohorts for the same SNP. We present here a novel approach to effectively resolve ambiguous genotypes generated on a real-time PCR platform. Application of our three criteria provides an easy-to-use, semi-automated genotype confirmation protocol.
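The sketch below illustrates, in a purely hypothetical form of our own, how the three kinds of criteria might be computed from a raw amplification curve; the function name, thresholds and efficiency estimate are ours, not the authors'.

```python
import numpy as np

def qpcr_criteria(fluorescence, threshold_frac=0.1):
    """Summarise a qPCR amplification curve (one reading per cycle).

    Returns rough analogues of the paper's three criteria: amplification
    efficiency, net fluorescence, and the cycle where exponential growth
    begins. The method and constants here are illustrative only.
    """
    f = np.asarray(fluorescence, dtype=float)
    net = f.max() - f.min()                      # criterion 2: net signal
    threshold = f.min() + threshold_frac * net
    start_cycle = int(np.argmax(f > threshold))  # criterion 3: growth onset
    # Criterion 1: efficiency from the slope of log-fluorescence
    # over a short window at the start of exponential growth
    window = slice(start_cycle, min(start_cycle + 5, len(f)))
    slope = np.polyfit(np.arange(len(f))[window],
                       np.log(f[window] - f.min() + 1e-9), 1)[0]
    efficiency = np.exp(slope) - 1               # 1.0 ~ perfect doubling
    return efficiency, net, start_cycle
```

A genotype call could then be flagged as ambiguous when any of the three values falls outside pre-set acceptance ranges.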
Abstract:
Travel time prediction has long been a topic of transportation research, but most prediction models in the literature are limited to motorways. Travel time prediction on arterial networks is challenging because of traffic signals and the significant variability of individual vehicle travel times. The limited availability of traffic data from arterial networks makes travel time prediction even more challenging. Recently, there has been significant interest in exploiting Bluetooth data for travel time estimation. This research analysed real travel time data collected by the Brisbane City Council using Bluetooth technology on arterials. Databases of experienced average daily travel times covering approximately 8 months were created and classified. Thereafter, based on the data characteristics, Seasonal Auto Regressive Integrated Moving Average (SARIMA) modelling was applied to the database for short-term travel time prediction. The SARIMA model not only takes the previous continuous lags into account, but also uses the values from the same time of previous days for travel time prediction. This is carried out by defining a seasonality coefficient, which improves the accuracy of travel time prediction in linear models. The accuracy, robustness and transferability of the model are evaluated by comparing the real and predicted values at three sites within the Brisbane network. The results contain detailed validation for different prediction horizons (5 to 90 minutes). The model performance is evaluated mainly on congested periods and compared to the naive technique of considering the historical average.
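A minimal sketch of fitting a seasonal ARIMA to a travel time series with statsmodels (our example; the orders, hourly aggregation and daily seasonal period of 24 are assumptions for illustration, not the paper's specification):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical hourly average travel times (seconds) with a daily pattern
rng = np.random.default_rng(3)
hours = pd.date_range("2013-01-01", periods=24 * 60, freq="h")
daily = 60 + 20 * np.sin(2 * np.pi * hours.hour / 24)
y = pd.Series(daily + rng.normal(0, 5, len(hours)), index=hours)

# SARIMA(1,0,1)x(1,0,1)_24: the seasonal terms draw on the same hour of
# previous days, which is the role the seasonality coefficient plays here
model = SARIMAX(y, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24)).fit(disp=False)
forecast = model.forecast(steps=18)   # predict the next 18 hours
print(forecast.head())
```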
Abstract:
Dwell time at the busway station has a significant effect on bus capacity and delay. Dwell time has conventionally been estimated using models developed on the basis of field survey data. However, field surveys are resource- and cost-intensive, so dwell time estimation based on limited observations can be somewhat inaccurate. Most public transport systems are now equipped with Automatic Passenger Count (APC) and/or Automatic Fare Collection (AFC) systems. AFC in particular reduces on-board ticketing time and the driver's workload, and ultimately reduces bus dwell time. AFC systems record all passenger transactions, providing transit agencies with access to vast quantities of data. AFC data provides transaction timestamps; however, this information differs from dwell time because passengers may tag on or tag off at times other than when doors open and close. This research contended that models could be developed to reliably estimate dwell time distributions when measured distributions of transaction times are known. Development of the models required calibration and validation using field survey data of actual dwell times, and an appreciation of another component of transaction time: bus time in queue. This research develops models for a peak period and an off-peak period at a busway station on the South East Busway (SEB) in Brisbane, Australia.
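As a hypothetical sketch of the idea (not the authors' calibrated model), one could infer per-stop dwell times from AFC transaction timestamps by clustering transactions into stops and adding calibrated offsets for door opening and closing; the gap and offset values below are placeholders:

```python
import pandas as pd

def estimate_dwell_times(afc, gap_s=30, open_offset_s=4.0, close_offset_s=6.0):
    """Estimate per-stop dwell times from AFC transactions for one bus.

    afc: DataFrame with a datetime 'timestamp' column of tag-on/off events.
    Transactions separated by less than gap_s seconds are assumed to belong
    to the same stop; the (hypothetical) offsets account for doors opening
    before the first tag and closing after the last.
    """
    t = afc["timestamp"].sort_values().reset_index(drop=True)
    stop_id = (t.diff().dt.total_seconds() > gap_s).cumsum()
    spans = t.groupby(stop_id).agg(first="min", last="max")
    dwell = (spans["last"] - spans["first"]).dt.total_seconds()
    return dwell + open_offset_s + close_offset_s
```

Field-surveyed dwell times would then be used to calibrate the offsets and to validate the resulting distribution, as the abstract describes.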
Abstract:
This study investigates travel behaviour and wait-time activities as a component of passenger satisfaction with public transport in Brisbane, Australia. Australian transport planners recognise a variety of benefits to encouraging a mode shift away from automobile travel in favour of active and public transport use. Efforts to increase public transport ridership have included introducing state-of-the-art passenger information systems, improving physical station access, and integrating system pricing, routes and scheduling for train, bus and ferry. Previous research regarding satisfaction with public transport emphasises technical dimensions of service quality, including the timing and reliability of service. Those factors might be especially significant for frequent (commuting) travellers who look to balance the cost and efficiency of their travel options. In contrast, infrequent (leisure) passengers may be more concerned with wayfinding and the sensory experience of the journey. Perhaps due to the small relative proportion of trips made by river ferry compared to bus and rail, this mode of public transport has not received as much attention in travel-behaviour research. This case study of Brisbane's river ferry system examines ferry passengers at selected terminals during peak and off-peak travel times to find out how travel behaviours and activities correlate with satisfaction with ferry travel. Data include 416 questionnaires completed by passengers intercepted during wait times at seven CityCat terminals in Brisbane. Descriptive statistical analysis revealed associations between specific wait-time activities and satisfaction levels that could inform planners seeking to increase ridership and quality of life through ferry-oriented development.
Abstract:
Although popular media narratives about the role of social media in driving the events of the 2011 “Arab Spring” are likely to overstate the impact of Facebook and Twitter on these uprisings, it is nonetheless true that protests and unrest in countries from Tunisia to Syria generated a substantial amount of social media activity. On Twitter alone, several million tweets containing the hashtags #libya or #egypt were generated during 2011, both by directly affected citizens of these countries and by onlookers from further afield. What remains unclear, though, is the extent to which there was any direct interaction between these two groups (especially considering potential language barriers between them). Building on hashtag data sets gathered between January and November 2011, this article compares patterns of Twitter usage during the popular revolution in Egypt and the civil war in Libya. Using custom-made tools for processing “big data,” we examine the volume of tweets sent by English-, Arabic-, and mixed-language Twitter users over time and examine the networks of interaction (variously through @replying, retweeting, or both) between these groups as they developed and shifted over the course of these uprisings. Examining @reply and retweet traffic, we identify general patterns of information flow between the English- and Arabic-speaking sides of the Twittersphere and highlight the roles played by users bridging both language spheres.
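A small sketch of the kind of interaction-network construction described, using networkx (our own illustration; the tweet records, field names and bridging measure are hypothetical stand-ins for the article's custom tooling):

```python
import networkx as nx

# Hypothetical tweet records: (sender, target, interaction type, language)
tweets = [
    ("alice", "omar", "retweet", "en"),
    ("omar", "alice", "@reply", "ar"),
    ("fatima", "omar", "retweet", "ar"),
]

G = nx.DiGraph()
for sender, target, kind, lang in tweets:
    # accumulate edge weights per (sender, target) pair
    if G.has_edge(sender, target):
        G[sender][target]["weight"] += 1
    else:
        G.add_edge(sender, target, weight=1, kind=kind)
    G.nodes[sender]["lang"] = lang

# Candidate bridges between language spheres: high betweenness centrality
bridges = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])
print(bridges[:3])
```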
Abstract:
Stream ciphers are common cryptographic algorithms used to protect the confidentiality of frame-based communications such as mobile phone conversations and Internet traffic. Stream ciphers are ideal for encrypting these types of traffic as they can encrypt them quickly and securely, and have low error propagation. The main objective of this thesis is to determine whether structural features of keystream generators affect the security provided by stream ciphers. These structural features pertain to the state-update and output functions used in keystream generators. Using linear sequences as keystream to encrypt messages is known to be insecure, so modern keystream generators use nonlinear sequences as keystream. The nonlinearity can be introduced through a keystream generator's state-update function, output function, or both. The first contribution of this thesis relates to nonlinear sequences produced by the well-known Trivium stream cipher. Trivium is one of the stream ciphers selected for the final portfolio of eSTREAM, a multi-year European project run by the ECRYPT network. Trivium's structural simplicity makes it a popular cipher to cryptanalyse, but to date there are no attacks in the public literature which are faster than exhaustive key search. Algebraic analyses are performed on the Trivium stream cipher, which uses a nonlinear state-update function and a linear output function to produce keystream. Two algebraic investigations are performed: an examination of the sliding property in the initialisation process, and algebraic analyses of Trivium-like stream ciphers using a combination of the algebraic techniques previously applied separately by Berbain et al. and Raddum. For certain iterations of Trivium's state-update function, we examine the sets of slid pairs, looking particularly to form chains of slid pairs. No chains exist for a small number of iterations; this has implications for the period of keystreams produced by Trivium. Secondly, using our combination of the methods of Berbain et al. and Raddum, we analysed Trivium-like ciphers and improved on previous analyses with regard to forming systems of equations for these ciphers. Using these new systems of equations, we were able to successfully recover the initial state of Bivium-A. The attack complexities for Bivium-B and Trivium were, however, worse than exhaustive key search. We also show that the selection of stages used as input to the output function and the size of the registers used in the construction of the system of equations affect the success of the attack. The second contribution of this thesis is the examination of state convergence. State convergence is an undesirable characteristic in keystream generators, as it implies that the effective session key size of the stream cipher is smaller than the designers intended. We identify methods which can be used to detect state convergence. As a case study, the Mixer stream cipher, which uses nonlinear state-update and output functions to produce keystream, is analysed. Mixer is found to suffer from state convergence because the state-update function used in its initialisation process is not one-to-one. A discussion of several other stream ciphers which are known to suffer from state convergence is given. From our analysis of these stream ciphers, three mechanisms which can cause state convergence are identified. The effect state convergence can have on stream cipher cryptanalysis is examined.
We show that state convergence can have a positive effect if the goal of the attacker is to recover the initial state of the keystream generator. The third contribution of this thesis is the examination of the distributions of bit patterns in the sequences produced by nonlinear filter generators (NLFGs) and linearly filtered nonlinear feedback shift registers. We show that the selection of stages used as input to a keystream generator's output function can affect the distribution of bit patterns in the sequences produced by these keystream generators, and that the effect differs for nonlinear filter generators and linearly filtered nonlinear feedback shift registers. In the case of NLFGs, the keystream sequences produced when the output functions take inputs from consecutive register stages are less uniform than sequences produced by NLFGs whose output functions take inputs from unevenly spaced register stages. The opposite is true for keystream sequences produced by linearly filtered nonlinear feedback shift registers.
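For reference, a compact (unoptimised) Python sketch of Trivium's keystream generation as specified by its designers; key/IV loading and the warm-up rounds are omitted here:

```python
def trivium_keystream(state, nbits):
    """Generate nbits of Trivium keystream from a 288-bit state.

    state: list of 288 ints (0/1), already initialised per the Trivium
    spec (key/IV loading and the 4*288 warm-up rounds are not shown).
    Indices below are 0-based versions of the spec's s1..s288.
    """
    s = list(state)
    out = []
    for _ in range(nbits):
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        out.append(t1 ^ t2 ^ t3)            # linear output function: z = t1+t2+t3
        t1 ^= (s[90] & s[91]) ^ s[170]      # nonlinear state-update terms
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        # shift the three registers (lengths 93, 84, 111), feeding t3, t1, t2
        s = [t3] + s[:92] + [t1] + s[93:176] + [t2] + s[177:287]
    return out
```

The nonlinear AND terms in the state update, combined with the purely linear output, are exactly the structural combination the thesis's first contribution analyses.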
Abstract:
The Queensland Court of Appeal recently handed down its decision in Caprice Property Holdings Pty Ltd v McLeay [2013] QCA 120. The decision considers the operation of the standard REIQ contract for the sale of land as it impacts on the time for settlement and the respective obligations of the buyer and the seller. The decision highlights both practical and legal issues arising from a failure to render performance at the stipulated time...
Abstract:
Currently, finite element analyses are usually done by means of commercial software tools. Accuracy of analysis and computational time are two important factors in the efficiency of these tools. This paper studies the parameters affecting the computational time and accuracy of finite element analyses performed with ANSYS, and provides guidelines for users of this software when studying the deformation of orthopedic bone plates or similar cases. It is not a fundamental scientific study; it only shares the authors' findings about structural analysis with ANSYS Workbench. It gives readers an idea of how to improve the software's performance and avoid common pitfalls. The solutions provided in this paper are not the only possible ones; in similar cases there are other solutions which are not given here. The parameters of solution method, material model, geometric model, mesh configuration, number of analysis steps, program-controlled parameters and computer settings are discussed thoroughly in this paper.
Abstract:
Recent road safety statistics show that the decades-long decreasing trend in fatalities is stopping and stagnating. Statistics further show that crashes are mostly driven by human error, compared to other factors such as environmental conditions and mechanical defects. Within human error, the dominant error source is perceptive errors, which represent about 50% of the total. The next two sources are interpretation and evaluation, which together with perception account for more than 75% of human-error-related crashes. These statistics show that allowing drivers to perceive and understand their environment better, or supplementing them when they are clearly at fault, is a path to better assessment of road risk and, as a consequence, to further decreasing fatalities. To address this problem, currently deployed driving assistance systems combine more and more information from diverse sources (sensors) to enhance the driver's perception of their environment. However, because of inherent limitations in range and field of view, these systems' perception of their environment remains largely limited to a small zone of interest around a single vehicle. Such limitations can be overcome by enlarging this zone through a cooperative process. Cooperative Systems (CS), a specific subset of Intelligent Transportation Systems (ITS), aim at compensating for local systems' limitations by combining embedded information technology and intervehicular communication technology (IVC). With CS, information sources are no longer limited to a single vehicle. From this distribution arises the concept of extended, or augmented, perception. Augmented perception extends an actor's perceptive horizon beyond its "natural" limits by fusing information not only from multiple in-vehicle sensors but also from remote sensors. The end result of an augmented perception and data fusion chain is known as an augmented map: a repository where any relevant information about objects in the environment, and the environment itself, can be stored in a layered architecture. This thesis aims at demonstrating that augmented perception performs better than non-cooperative approaches and that it can be used to successfully identify road risk. We found it necessary to evaluate the performance of augmented perception in order to better understand its limitations. Indeed, while many promising results have already been obtained, the feasibility of building an augmented map from exchanged local perception information, and then using this information beneficially for road users, has not been thoroughly assessed; nor have the limitations of augmented perception and its underlying technologies. Most notably, many questions remain unanswered as to IVC performance and its ability to deliver the quality of service needed to support life-critical safety systems. This is especially true as the road environment is a complex, highly variable setting where many sources of imperfection and error exist, not limited to IVC. We first provide a discussion of these limitations and a performance model built to incorporate them, created from empirical data collected on test tracks. Our results are more pessimistic than the existing literature, suggesting that IVC limitations have been underestimated. Then, we develop a new CS-applications simulation architecture.
This architecture is used to obtain new results on the safety benefits of a cooperative safety application (EEBL), and then to support further study of augmented perception. At first, we confirm earlier results in terms of reduced crash numbers, but raise doubts about the benefits in terms of crash severity. In the next step, we implement an augmented perception architecture tasked with creating an augmented map. Our approach aims at providing a generalist architecture that can use many different types of sensors to create the map and is not limited to any specific application. The data association problem is tackled with an MHT approach based on Belief Theory. Then, augmented and single-vehicle perception are compared in a reference driving scenario for risk assessment, taking into account the IVC limitations obtained earlier; we show their impact on the augmented map's performance. Our results show that augmented perception performs better than non-cooperative approaches, almost tripling the advance warning time before a crash. IVC limitations appear to have no significant effect on this performance, although this might be valid only for our specific scenario. Finally, we propose a new approach using augmented perception to identify road risk through a surrogate: near-miss events. A CS-based approach is designed and validated to detect near-miss events, and then compared to a non-cooperative approach based on vehicles equipped with local sensors only. The cooperative approach shows a significant improvement in the number of events that can be detected, especially at higher rates of system deployment.
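A toy sketch of the layered augmented-map idea (our own simplification; the thesis's actual architecture, MHT data association and Belief Theory fusion are far richer, and all names below are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    obj_id: str      # track identifier, assumed assigned by data association
    x: float         # position (m) in a shared reference frame
    y: float
    source: str      # "local" sensor reading or remote "v2x" message

@dataclass
class AugmentedMap:
    """Layered repository of environment knowledge (simplified)."""
    layers: dict = field(default_factory=lambda: {"local": [], "v2x": []})

    def insert(self, det: Detection):
        self.layers[det.source].append(det)

    def objects(self):
        # Merged view: remote detections extend the local perceptive horizon
        return self.layers["local"] + self.layers["v2x"]

amap = AugmentedMap()
amap.insert(Detection("car-7", 12.0, 3.5, "local"))
amap.insert(Detection("car-9", 140.0, -2.0, "v2x"))  # beyond local sensor range
print(len(amap.objects()))  # 2: the cooperative view sees both vehicles
```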
Abstract:
Introduction: Road safety researchers rely heavily on self-report data to explore the aetiology of crash risk. However, researchers consistently acknowledge a range of limitations associated with this methodological approach (e.g., self-report bias), which has been hypothesised to reduce the predictive efficacy of scales. Although well researched in other areas, one important factor often neglected in road safety studies is the fallibility of human memory. Given that accurate recall is a key assumption in many studies, the validity and consistency of self-report data warrant investigation. The aim of the current study was to examine the consistency of self-reported crash history and details of the most recent reported crash on two separate occasions. Materials & Method: A repeated-measures design was utilised to examine the self-reported crash involvement history of 214 general motorists over a two-month period. Results: A number of interesting discrepancies were noted in the number of lifetime crashes reported by the participants and the descriptions of their most recent crash across the two occasions. Of the 214 participants who reported having been involved in a crash, 35 (22.3%) reported a lower number of lifetime crashes at Time 2 than at Time 1. Of the 88 drivers who reported no change in the number of lifetime crashes, 10 (11.4%) described a different most recent crash. Additionally, of the 34 reporting an increase in the number of lifetime crashes, 29 (85.3%) described the same crash on both occasions. Assessed as a whole, at least 47.1% of participants made a confirmed mistake at Time 1 or Time 2. Conclusions: These results raise some doubt about the accuracy of memory recall across time. Given that self-reported crash involvement is the predominant dependent variable used in the majority of road safety research, this issue warrants further investigation. Replication of the study with a larger sample size and multiple recall periods would enhance understanding of the significance of this issue for road safety methodology.