278 results for square resonators


Relevance: 10.00%

Publisher:

Abstract:

This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered in Part-I of this thesis. For FM signals, the approach of time-frequency analysis is considered in Part-II. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs, and in the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. A Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity into the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL to FSK demodulation is also considered. The idea of replacing the HT by a time delay may be of interest in other signal processing systems; hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of first- and second-order TDTLs in additive Gaussian noise has been analyzed.

Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are important in synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For IF estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis; it is computationally less expensive and more effective in dealing with multicomponent signals, which are the main focus of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed, and a class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only, or one-dimensional, rather than the time-lag (two-dimensional) kernels; hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
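
A minimal numpy sketch of the core TDTL idea described above (all parameters assumed for illustration; this is not the thesis's loop equations): for a sinusoid, a fixed time delay yields a phase-shifted copy that can feed an arctan phase detector in place of a Hilbert transformer, with the shift ωτ depending on the signal frequency.

```python
import numpy as np

# Illustration (not the thesis's TDTL equations): a delay tau applied to
# s(t) = sin(w*t + theta) yields sin(w*t + theta - w*tau). The phase
# shift w*tau is signal-dependent, unlike the fixed 90-degree shift of
# a Hilbert transformer.
fs = 10_000.0                      # sampling rate (Hz), assumed
f0 = 100.0                         # nominal input frequency (Hz), assumed
w0 = 2 * np.pi * f0
tau = (np.pi / 2) / w0             # delay giving a 90-degree shift at f0
t = np.arange(0, 0.1, 1 / fs)
theta = 0.7                        # true phase offset to recover

s = np.sin(w0 * t + theta)                   # in-phase branch
s_delayed = np.sin(w0 * (t - tau) + theta)   # delayed branch

# Arctan phase detection: at the nominal frequency the delayed branch is
# an exact -90-degree copy (-cos), so atan2 recovers the instantaneous
# phase, just as a Hilbert-transform quadrature branch would.
phase = np.arctan2(s, -s_delayed)
err = np.angle(np.exp(1j * (phase - (w0 * t + theta))))
print("max phase error at nominal frequency:", np.abs(err).max())

# Off the nominal frequency, w*tau deviates from 90 degrees; this
# signal-dependent behaviour is what the thesis analyses via fixed
# point theorems.
```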

Relevance: 10.00%

Publisher:

Abstract:

Background/Rationale: Guided by the need-driven dementia-compromised behavior (NDB) model, this study examined influences of the physical environment on wandering behavior. Methods: Using a descriptive, cross-sectional design, 122 wanderers from 28 long-term care (LTC) facilities were videotaped 10 to 12 times; data on wandering, light, sound, temperature and humidity levels, location, ambiance, and crowding were obtained. Associations between environmental variables and wandering were evaluated with chi-square and t tests; the model was evaluated using logistic regression. Results: In all, 80% of wandering occurred in the resident's own room, dayrooms, hallways, or dining rooms. When observed in other residents' rooms, hallways, shower/bath rooms, or off-unit locations, wanderers were likely (60%-92% of observations) to be wandering. The data were a good fit to the model overall (likelihood-ratio χ2(5) = 50.38, P < .0001) and by wandering type. Conclusions: Location, light, sound, proximity of others, and ambiance are associated with wandering and may serve to inform environmental designs and care practices.
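
The model chi-square quoted above (χ2(5) = 50.38) is a likelihood-ratio statistic comparing the fitted logistic model against an intercept-only model. A sketch with synthetic data and hypothetical predictor names, not the study's data:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Synthetic illustration: a logistic model of a binary wandering
# indicator on five hypothetical environmental predictors, reproducing
# the kind of model chi-square reported as LR chi2(5).
rng = np.random.default_rng(0)
n = 1220                                   # ~122 wanderers x 10 observations
X = rng.normal(size=(n, 5))                # light, sound, temperature, ...
logit_p = -0.5 + X @ np.array([0.4, 0.3, -0.2, 0.1, 0.25])
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
lr_chi2 = 2 * (model.llf - model.llnull)   # likelihood-ratio chi-square
df = model.df_model                        # 5 predictors -> 5 df
p_value = stats.chi2.sf(lr_chi2, df)
print(f"LR chi2({int(df)}) = {lr_chi2:.2f}, p = {p_value:.4g}")
```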

Relevance: 10.00%

Publisher:

Abstract:

Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts: variation over and above that accounted for by the Poisson density. The extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models, tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, exploring additional dispersion functions on an independent data set, and presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
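
One common way to write the structure this abstract investigates (an illustrative specification, not necessarily the paper's exact form) is a negative binomial crash-count model whose dispersion parameter is itself a function of site covariates:

```latex
% Negative binomial crash counts with covariate-dependent dispersion
% (illustrative specification):
\begin{aligned}
y_i \mid \mu_i, \phi_i &\sim \mathrm{NB}(\mu_i, \phi_i), &
\operatorname{Var}(y_i) &= \mu_i + \phi_i \mu_i^{2}, \\
\ln \mu_i &= \beta_0 + \beta_1 \ln F_{1i} + \beta_2 \ln F_{2i}
            + \mathbf{x}_i^{\top}\boldsymbol{\beta}, &
\ln \phi_i &= \gamma_0 + \mathbf{z}_i^{\top}\boldsymbol{\gamma},
\end{aligned}
```

where F1 and F2 are major- and minor-road flows and x_i are geometric covariates; the conventional fixed-dispersion model is recovered when γ = 0. The study's finding is that γ tends toward insignificance once the mean function ln μ_i is well specified.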

Relevance: 10.00%

Publisher:

Abstract:

Speeding is recognized as a major contributing factor in traffic crashes. In order to reduce speed-related crashes, the city of Scottsdale, Arizona, implemented the first fixed-camera photo speed enforcement program (SEP) on a limited-access freeway in the US. The 9-month demonstration program, spanning January 2006 to October 2006, was implemented on a 6.5-mile urban freeway segment of Arizona State Route 101 running through Scottsdale. This paper presents the results of a comprehensive analysis of the impact of the SEP on speeding behavior, crashes, and the economic impact of crashes. The impact on speeding behavior was estimated using generalized least squares estimation, in which the observed speeds and speeding frequencies during the program period were compared to those during other periods. The impact of the SEP on crashes was estimated using three evaluation methods: a before-and-after (BA) analysis using a comparison group, a BA analysis with traffic flow correction, and an empirical Bayes BA analysis with time-variant safety. The analysis results reveal that speeding detection frequencies (speeds ≥ 76 mph) increased by a factor of 10.5 after the SEP was (temporarily) terminated. Average speeds in the enforcement zone were reduced by about 9 mph when the SEP was implemented, after accounting for the influence of traffic flow. All crash types were reduced except rear-end crashes, although the estimated magnitude of impact varies across estimation methods (and their corresponding assumptions). When considering Arizona-specific crash-related injury costs, the SEP is estimated to yield about $17 million in annual safety benefits.
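
The empirical Bayes BA method mentioned above builds on the standard Hauer-style estimate, shown here in its basic fixed-safety form (the paper's time-variant-safety version generalizes this): the expected crash count without treatment blends a safety-performance-function prediction μ with the observed before-period count K,

```latex
\hat{\pi} = w\,\mu + (1 - w)\,K,
\qquad
w = \frac{1}{1 + \mu/\phi},
```

where φ is the overdispersion parameter of the negative binomial model underlying the prediction; the treatment effect is then judged by comparing observed after-period crashes against π̂ projected to the after period.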

Relevance: 10.00%

Publisher:

Abstract:

Purpose: To investigate the influence of keratoconus on peripheral ocular aberrations. Methods: Aberrations of 7 mild and 5 moderate keratoconics were determined over a 42° horizontal × 32° vertical visual field with a modified COAS-HD aberrometer. Control data were obtained from an emmetropic group. Results: Most aberrations in keratoconics showed field dependence, predominantly along the vertical meridian. Mean spherical equivalent M, oblique astigmatism J45 and regular astigmatism J180 refraction components, and total root mean square aberrations (excluding defocus) had high magnitudes in the inferior visual field. The rates of change of aberrations were higher in moderate than in mild keratoconics. Coma was the dominant peripheral higher-order aberration in both emmetropes and keratoconics; for the latter it had high magnitudes in the centre and periphery of the visual field. Conclusion: Greater rates of change of aberrations across the visual field occurred for the keratoconic groups than for the emmetropic control group. Moderate keratoconics had more rapid changes in, and higher magnitudes of, aberrations across the visual field than mild keratoconics. The dominant higher-order aberration for the keratoconics across the visual field was vertical coma.
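
The refraction components quoted (M, J45, J180) are, in the usual Thibos power-vector convention, derived from a sphero-cylindrical refraction S / C × θ as

```latex
M = S + \frac{C}{2}, \qquad
J_{180} = -\frac{C}{2}\cos 2\theta, \qquad
J_{45} = -\frac{C}{2}\sin 2\theta ,
```

so M captures the spherical equivalent while J180 and J45 capture the regular and oblique astigmatic components, which is why they can be mapped independently across the visual field.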

Relevance: 10.00%

Publisher:

Abstract:

OBJECTIVE To examine the psychometric properties of a Chinese version of the Problem Areas In Diabetes (PAID-C) scale. RESEARCH DESIGN AND METHODS The reliability and validity of the PAID-C were evaluated in a convenience sample of 205 outpatients with type 2 diabetes. Confirmatory factor analysis, Bland-Altman analysis, and Spearman's correlations facilitated the psychometric evaluation. RESULTS Confirmatory factor analysis confirmed a one-factor structure of the PAID-C (χ2/df ratio = 1.894, goodness-of-fit index = 0.901, comparative fit index = 0.905, root mean square error of approximation = 0.066). The PAID-C was associated with A1C (rs = 0.15; P < 0.05) and diabetes self-care behaviors in general diet (rs = −0.17; P < 0.05) and exercise (rs = −0.17; P < 0.05). The 4-week test-retest reliability demonstrated satisfactory stability (rs = 0.83; P < 0.01). CONCLUSIONS The PAID-C is a reliable and valid measure to determine diabetes-related emotional distress in Chinese people with type 2 diabetes.
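
The fit indices reported follow their standard definitions; for instance, the root mean square error of approximation is

```latex
\mathrm{RMSEA} = \sqrt{\max\!\left( \frac{\chi^{2} - df}{df\,(N-1)},\; 0 \right)} ,
```

and the reported values are internally consistent: with χ²/df = 1.894 and N = 205, RMSEA = √(0.894/204) ≈ 0.066.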

Relevance: 10.00%

Publisher:

Abstract:

Sections contributed by Jean Sim: Agricultural Colleges (p.12); Anzac Park, Townsville (p.22); Anzac Square, Brisbane (pp.22-23); Benson, Albert Herbert (p.86); Bick, Edward Walter (p.88); Bougainvillea Gardens (p.101); Bowen Park (pp.101-102); Boyd, A.J. (p.103); Brisbane Botanic Gardens (pp.104-105); Bush-house (pp.119-121).

Relevance: 10.00%

Publisher:

Abstract:

Aim. To explore and compare older home care clients' (65+) and their professionals' perceptions of the clients' psychological well-being and care, and to identify possible differences in these perceptions. Background. Psychological well-being is considered an important dimension of quality of life. With advancing age, older people require home care support to be able to remain in their own homes. The main goal of care is to maximise their independence and quality of life. Design. Descriptive survey design with questionnaire. Methods. A postal questionnaire was distributed to 200 older home care clients and 570 social and health care professionals in 2007. The total response rate was 63%. The questionnaire consisted of questions about clients' psychological well-being and the provision of care by home care professionals. The differences in responses between clients and professionals were analysed using cross-tabulations, the Pearson chi-square test and Fisher's exact test. Results. The professional group believed that their clients did not have plans for the future, and believed that their clients felt depressed and suffered from loneliness significantly more often than the client group itself reported. The client group was also significantly more critical of the care they received (motivating independent actions; physical, psychological and social care) than the professionals' own evaluations of the care they gave. Conclusions. To be able to support older clients to continue living at home, professionals need to provide a service that meets clients' own perceptions and complex social and health care needs as well as their personal sense of well-being. Relevance to clinical practice. The findings offer useful insights for professionals in planning and delivering appropriate home care services. A better understanding of differences between clients' and professionals' perceptions could lead to better individualised care outcomes.

Relevance: 10.00%

Publisher:

Abstract:

This paper presents techniques which can be viewed as a pre-processing step towards diagnosis of faults in a small multi-cylinder diesel engine. Preliminary analysis of the acoustic emission (AE) signals is outlined, including time-frequency analysis and selection of an optimum frequency band. Some results of applying mean field independent component analysis (MFICA) to separate the AE root mean square (RMS) signals are also outlined. The results on separation of RMS signals show that this technique has the potential to increase the probability of successfully identifying the AE events associated with the various mechanical events.
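
A minimal sketch of the AE-RMS pre-processing step described above (the MFICA separation itself is omitted; the sampling rate and burst parameters are assumed for illustration):

```python
import numpy as np

def rms_envelope(x, window, hop):
    """Short-time RMS of a signal: frame the signal and take the root
    mean square of each frame."""
    frames = np.lib.stride_tricks.sliding_window_view(x, window)[::hop]
    return np.sqrt(np.mean(frames ** 2, axis=1))

# Synthetic AE-like record: impulsive bursts over broadband noise.
fs = 100_000                          # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
x = 0.05 * np.random.default_rng(0).standard_normal(len(t))
for t0 in (0.2, 0.45, 0.7):           # three hypothetical mechanical events
    x += np.exp(-((t - t0) * 2000) ** 2) * np.sin(2 * np.pi * 20e3 * t)

env = rms_envelope(x, window=512, hop=256)
print(f"frames: {len(env)}, peak frame RMS: {env.max():.3f}")
```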

Relevance: 10.00%

Publisher:

Abstract:

In 1990 the European Community was taken by surprise by the urgency of demands from the newly-elected Eastern European governments to become member countries. Those governments were honouring the mass social movement of the streets of the year before, which had demanded free elections and a liberal economic system associated with “Europe”. The mass movement had in fact been accompanied by much activity within institutional politics, in Western Europe, the former “satellite” states, the Soviet Union and the United States, to set up new structures, with German reunification and an expanded EC as the centre-piece. This paper draws on the writer's doctoral dissertation on mass media in the collapse of the Eastern bloc, focused on the Berlin Wall and documenting both public protests and institutional negotiations. For example, the writer, a correspondent in Europe at that time, recounts interventions of the German Chancellor, Helmut Kohl, at a European summit in Paris nine days after the fall of the Wall, and separate negotiations with the French President, Francois Mitterrand, on reunification and EU monetary union after 1992. Through such processes, the “European idea” would receive fresh impetus, though the EU which eventuated came with many altered expectations. It is argued here that as a result of the shock of 1989, a “social” Europe can be seen emerging, as a shared experience of daily life, especially among people born during the last two decades of European consolidation. The paper draws on the author's major research, in four parts: (1) Field observation from the strategic vantage point of a news correspondent, including a treatment of evidence at the time of the wishes and intentions of the mass public (including the unexpected drive to join the European Community) and of governments (e.g. thoughts of a “Tiananmen Square solution” in East Berlin, versus the non-intervention policies of the Soviet leader, Mikhail Gorbachev). (2) A review of coverage of the crisis of 1989 by major news media outlets, treated as a history of the process. (3) As a comparison, and a test of accuracy and analysis, a review of conventional histories of the crisis appearing a decade later. (4) A further review, and test, provided by the journalists responsible for the coverage at the time, as reflection on practice, obtained from semi-structured interviews.

Relevance: 10.00%

Publisher:

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits, the main limitation being the lower size limit an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic data storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. The ionic grating is insensitive to the readout beam, and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model gives significant insight into the pattern storage, particularly the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
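
As an illustration of the beam-propagation modelling described above, the following sketch propagates a Gaussian beam through a stripe-like index perturbation with a split-step Fourier BPM. The thesis uses a finite-difference BPM and a full photorefractive model; this simplified variant and all of its parameters are assumptions for illustration only.

```python
import numpy as np

# Split-step Fourier beam propagation through a weak, stripe-like
# refractive index perturbation (qualitative sketch, not the thesis's
# FD-BPM or photorefractive model).
lam = 633e-9                   # wavelength (m), assumed
k0 = 2 * np.pi / lam
n0 = 2.2                       # background index (LiNbO3-like), assumed
k = n0 * k0

nx, dx = 1024, 0.5e-6          # transverse grid
x = (np.arange(nx) - nx // 2) * dx
kx = 2 * np.pi * np.fft.fftfreq(nx, dx)

dz, nz = 1e-6, 2000            # steps through a 2 mm crystal, assumed
E = np.exp(-(x / 20e-6) ** 2)  # Gaussian input beam

# Stripe-like index change, mimicking a stored phase-mask pattern.
dn = 1e-4 * (np.cos(2 * np.pi * x / 60e-6) > 0)

half_diffraction = np.exp(-1j * kx ** 2 / (2 * k) * dz / 2)
for _ in range(nz):
    E = np.fft.ifft(np.fft.fft(E) * half_diffraction)  # half diffraction
    E = E * np.exp(1j * k0 * dn * dz)                  # index phase step
    E = np.fft.ifft(np.fft.fft(E) * half_diffraction)  # half diffraction

print("output peak intensity:", float(np.abs(E).max() ** 2))
```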

Relevance: 10.00%

Publisher:

Abstract:

This article introduces a “pseudo-classical” notion of modelling non-separability. This form of non-separability can be viewed as lying between separability and quantum-like non-separability. Non-separability is formalized in terms of the non-factorizability of the underlying joint probability distribution. One decision criterion for determining the non-factorizability of the joint distribution is related to determining the rank of a matrix; another approach is based on the chi-square goodness-of-fit test. This pseudo-classical notion of non-separability is discussed in terms of quantum games and concept combinations in human cognition.
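
A sketch of the two decision criteria mentioned: a joint distribution p(a, b) factorizes as p(a)q(b) exactly when its probability matrix has rank 1, and independence can also be tested with a chi-square statistic on counts (the counts here are hypothetical):

```python
import numpy as np
from scipy.stats import chi2_contingency

# A rank-1 probability matrix is separable (an outer product of
# marginals); a higher-rank matrix is non-factorizable.
P_sep = np.outer([0.3, 0.7], [0.6, 0.4])    # rank 1: separable
P_nonsep = np.array([[0.45, 0.05],
                     [0.05, 0.45]])         # rank 2: non-separable

for name, P in [("separable", P_sep), ("non-separable", P_nonsep)]:
    rank = np.linalg.matrix_rank(P)
    # Chi-square test against independence, using hypothetical counts
    # from 1000 observed joint outcomes.
    counts = np.round(P * 1000)
    chi2, p, dof, _ = chi2_contingency(counts)
    print(f"{name}: rank={rank}, chi2({dof})={chi2:.1f}, p={p:.3g}")
```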

Relevance: 10.00%

Publisher:

Abstract:

In this paper I examine the way artists with disabilities use performances in public spaces to encourage people to reflect on their own contribution to the construction of publics, or counter-publics, during and after the moment of encounter. I focus on Liz Crow’s Resistance on the Plinth, the title Crow gave her performance on the Fourth Plinth in Trafalgar Square as part of Antony Gormley’s One and Other project in 2009. Described as a public art project presenting a portrait of the U.K., Gormley’s One and Other gave 2400 people, selected at random via a lottery, the chance to do whatever they chose for an hour on the vacant Fourth Plinth in Trafalgar Square. In her hour, Crow chose to present herself in a Nazi uniform, in her wheelchair. In this paper, I discuss how Crow’s performance used a counterposition of images – the Nazi uniform, associated with eugenics and a desire to eliminate people who do not accord with the Aryan ‘norm’, counterposed with her wheelchair – to encourage passersby to “stop, look, think.” I examine how Crow used this counterposition to prevent passersby from attributing a single, stable, monologic meaning to the image, forestalling the risk that passersby would read the image as a Nazi on the Plinth, and instead draw passersby into a dialogue in which the impossibility of reconciling the contradictory images, ideologies and cultural logics Crow embodied encouraged people to continue thinking and talking about these cultural logics during and after the encounter.

Relevance: 10.00%

Publisher:

Abstract:

Aims: Driving Under the Influence (DUI) enforcement can be a broad screening mechanism for alcohol and other drug problems. The current response to DUI is focused on using mechanical means to prevent inebriated persons from driving, with little attention to the underlying substance abuse problems. Methods: This is a secondary analysis of an administrative dataset of over 345,000 individuals who entered Texas substance abuse treatment between 2005 and 2008. Of these, 36,372 were either on DUI probation, referred to treatment by probation, or had a DUI arrest in the past year. The DUI offenders were compared on demographic characteristics, substance use patterns, and levels of impairment with those who were not DUI offenders, and first DUI offenders were compared with those with more than one past-year offense. t tests and chi-square tests were used to determine significance. Results: DUI offenders were more likely to be employed, to have a problem with alcohol, to report more past-year arrests for any offense, to be older, and to have used alcohol and drugs longer than the non-DUI clients, who reported higher ASI scores and were more likely to use daily. Those with one past-year DUI arrest were more likely to have problems with drugs other than alcohol and were less impaired than those with two or more arrests, based on their ASI scores and daily use. Non-DUI clients reported higher levels of mood disorders than DUIs, but there was no difference in their diagnosis of anxiety. Similar findings were observed between those with one or multiple DUI arrests. Conclusion: Although first-time DUIs were not as impaired as non-DUI clients, their levels of impairment were sufficient to warrant treatment. Screening and brief intervention at arrest for all DUI offenders, and treatment in combination with abstinence monitoring, could decrease future recidivism.
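
Illustrative forms of the significance tests named above, run on synthetic numbers rather than the Texas treatment data (group sizes, scores, and the contingency table are all hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# t test: e.g. comparing ASI composite scores of DUI vs non-DUI clients.
asi_dui = rng.normal(0.18, 0.08, 500)
asi_non_dui = rng.normal(0.22, 0.08, 500)
t, p_t = stats.ttest_ind(asi_dui, asi_non_dui)

# Chi-square: e.g. employment status (rows) by DUI status (columns).
table = np.array([[260, 180],    # employed
                  [240, 320]])   # not employed
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

print(f"t = {t:.2f} (p = {p_t:.3g}); chi2({dof}) = {chi2:.2f} (p = {p_chi2:.3g})")
```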

Relevance: 10.00%

Publisher:

Abstract:

Purpose: Graduated driver licensing (GDL) has been introduced in numerous jurisdictions in Australia and internationally in an attempt to ameliorate the significantly greater risk of death and injury for young novice drivers arising from road crashes. The GDL program in Queensland, Australia, was extensively modified in July 2007. This paper reports the driving and licensing experiences of Learner drivers progressing through the current GDL program and compares them with the experiences of Learners who progressed through the former GDL program. Method: Young drivers (n = 1032; 609 females, 423 males) aged 17 to 19 years (M = 17.43, SD = 0.67) were recruited as they progressed from a Learner to a Provisional driver’s licence. They completed a survey exploring their sociodemographic characteristics and their driving and licensing experiences as a Learner. Key measures for a subsample (n = 183) of the current-GDL drivers were compared with the former-GDL drivers (n = 149) via t-tests and chi-square analyses. Results: As expected, Learner drivers progressing through the current GDL program gained significantly more driving practice than those in the former program, and this practice was more likely to be provided by mothers than in the past. Female Learners in the current GDL program reported less difficulty obtaining supervision than those in the former program. The number of attempts needed to pass the practical driving assessment did not change, nor did the amount of professional supervision. The current-GDL Learners held their licence for a significantly longer duration than those in the former program, with the majority reporting that their logbook entries were accurate on the whole. Compared with those in the former program, a significantly smaller proportion of male current-GDL Learners reported being detected for a driving offence, while the females reported significantly lower crash involvement. Most current-GDL drivers reported undertaking their supervised practice at the end of the Learner period. Conclusions: The enhancements to the GDL program in Queensland appear to have achieved many of their intended results. The current-GDL Learners participating in the study reported obtaining a significantly greater amount of supervised driving experience compared with former-GDL Learners. Encouragingly, the current-GDL Learners did not report any greater difficulty in obtaining supervised driving practice, and there was a decline in the proportion of current-GDL Learners engaging in unsupervised driving. In addition, the majority of Learners do not appear to be attempting to subvert logbook recording requirements, as evidenced by high rates of self-reported logbook accuracy. The results have implications for the development and evaluation of GDL programs in Australia and around the world.