906 results for EQUATION-ERROR MODELS


Relevance:

20.00%

Publisher:

Abstract:

We evaluate the performance of several specification tests for Markov regime-switching time-series models. We consider the Lagrange multiplier (LM) and dynamic specification tests of Hamilton (1996) and Ljung–Box tests based on both the generalized residual and a standard-normal residual constructed using the Rosenblatt transformation. The size and power of the tests are studied using Monte Carlo experiments. We find that the LM tests have the best size and power properties. The Ljung–Box tests exhibit slight size distortions, though tests based on the Rosenblatt transformation perform better than the generalized residual-based tests. The tests exhibit impressive power to detect both autocorrelation and autoregressive conditional heteroscedasticity (ARCH). The tests are illustrated with a Markov-switching generalized ARCH (GARCH) model fitted to the US dollar–British pound exchange rate, with the finding that both autocorrelation and GARCH effects are needed to adequately fit the data.
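
As a concrete illustration of the second residual type (a minimal sketch under assumed inputs, not the authors' code): given a fitted model's probability integral transform values `u`, the Rosenblatt step is simply `norm.ppf(u)`, which should yield iid standard-normal residuals under correct specification, and Ljung–Box tests can then be applied to the residuals and their squares.

```python
# Sketch: Ljung-Box tests on a Rosenblatt-transformed residual. The PIT
# values u_t below are placeholders; in practice they come from the fitted
# regime-switching model, u_t = F(y_t | past, theta_hat).
import numpy as np
from scipy.stats import norm
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
u = rng.uniform(size=500)   # placeholder PIT values from a fitted model

# Rosenblatt step: under a correctly specified model u_t ~ iid U(0,1),
# so z_t = Phi^{-1}(u_t) is iid standard normal.
z = norm.ppf(u)

# Ljung-Box on z detects residual autocorrelation; on z**2, ARCH effects.
print(acorr_ljungbox(z, lags=[10], return_df=True))
print(acorr_ljungbox(z ** 2, lags=[10], return_df=True))
```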

Relevance:

20.00%

Publisher:

Abstract:

Effective management of groundwater requires stakeholders to have a realistic conceptual understanding of the groundwater systems and hydrological processes. However, groundwater data can be complex, confusing and often difficult for people to comprehend. A powerful way to communicate understanding of groundwater processes, complex subsurface geology and their relationships is through the use of visualisation techniques to create 3D conceptual groundwater models. In addition, the ability to animate, interrogate and interact with 3D models can encourage a higher level of understanding than static images alone. While there are increasing numbers of software tools available for developing and visualising groundwater conceptual models, these packages are often very expensive and, because of their complexity, are not readily accessible to the majority of people. The Groundwater Visualisation System (GVS) is a software framework that can be used to develop groundwater visualisation tools aimed specifically at non-technical computer users and those who are not groundwater domain experts. A primary aim of GVS is to provide management support for agencies and to enhance community understanding.

Relevance:

20.00%

Publisher:

Abstract:

Purpose: All currently considered parametric models used for decomposing videokeratoscopy height data are viewer-centered and hence describe what the operator sees rather than what the surface is. The purpose of this study was to ascertain the applicability of an object-centered representation to modeling of corneal surfaces. Methods: A three-dimensional surface decomposition into a series of spherical harmonics is considered and compared with the traditional Zernike polynomial expansion for a range of videokeratoscopic height data. Results: Spherical harmonic decomposition led to significantly better fits to corneal surfaces (in terms of the root mean square error values) than the corresponding Zernike polynomial expansions with the same number of coefficients, for all considered corneal surfaces, corneal diameters, and model orders. Conclusions: Spherical harmonic decomposition is a viable alternative to Zernike polynomial decomposition. It achieves better fits to videokeratoscopic height data and has the advantage of an object-centered representation that could be particularly suited to the analysis of multiple corneal measurements.
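
As an illustration of the kind of decomposition being compared (a sketch on synthetic data, not the study's implementation; the grid, cap angle and toy surface are assumptions), real-valued spherical harmonics can be fitted to height samples on a spherical cap by linear least squares and judged by their RMS error:

```python
# Least-squares fit of real spherical harmonics to surface heights sampled
# on a spherical cap, using scipy.special.sph_harm (renamed sph_harm_y in
# recent SciPy). The synthetic surface below is purely illustrative.
import numpy as np
from scipy.special import sph_harm

rng = np.random.default_rng(1)
n_pts, cap = 2000, np.deg2rad(30)                 # cap mimics a corneal zone
phi = np.arccos(1 - rng.uniform(size=n_pts) * (1 - np.cos(cap)))  # polar
theta = rng.uniform(0, 2 * np.pi, n_pts)                          # azimuth
height = np.cos(phi) + 0.01 * np.cos(2 * theta) * np.sin(phi) ** 2

def real_sh_basis(theta, phi, n_max):
    """Design matrix whose columns are real spherical harmonics up to n_max."""
    cols = []
    for n in range(n_max + 1):
        for m in range(-n, n + 1):
            y = sph_harm(abs(m), n, theta, phi)
            cols.append(np.sqrt(2) * y.imag if m < 0
                        else y.real if m == 0
                        else np.sqrt(2) * y.real)
    return np.column_stack(cols)

A = real_sh_basis(theta, phi, n_max=4)
coef, *_ = np.linalg.lstsq(A, height, rcond=None)
rms = np.sqrt(np.mean((A @ coef - height) ** 2))  # fit quality criterion
print(f"RMS fit error: {rms:.2e}")
```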

Relevance:

20.00%

Publisher:

Abstract:

The refractive error of a human eye varies across the pupil and therefore may be treated as a random variable. The probability distribution of this random variable provides a means for assessing the main refractive properties of the eye without the necessity of a traditional functional representation of wavefront aberrations. To demonstrate this approach, the statistical properties of refractive error maps are investigated. Closed-form expressions are derived for the probability density function (PDF) and its statistical moments for the general case of rotationally symmetric aberrations. A closed-form expression of the PDF for a general non-rotationally symmetric wavefront aberration is difficult to derive; however, for specific cases, such as astigmatism, a closed-form expression of the PDF can be obtained. Further, interpretation of the distribution of the refractive error map, as well as its moments, is provided for a range of wavefront aberrations measured in real eyes. These are evaluated using kernel density and sample moment estimators. It is concluded that the refractive error domain allows non-functional analysis of wavefront aberrations based on simple statistics in the form of sample moments. Clinicians may find this approach to wavefront analysis easier to interpret due to the clinical familiarity and intuitive appeal of refractive error maps.
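
A minimal sketch of this non-functional analysis, using an invented defocus-plus-astigmatism map as a stand-in for measured data: the refractive error sampled across the pupil is summarised by its sample moments and a kernel density estimate of its PDF.

```python
# Sample moments and kernel density estimate of a refractive error map.
# The map below (toy defocus plus astigmatism terms) is an assumption.
import numpy as np
from scipy.stats import gaussian_kde, skew, kurtosis

rng = np.random.default_rng(2)
n = 5000
r = np.sqrt(rng.uniform(size=n))       # uniform sampling over a unit pupil
t = rng.uniform(0, 2 * np.pi, n)
ref_error = 0.5 * r ** 2 + 0.25 * r ** 2 * np.cos(2 * t)   # dioptres

print("mean     :", ref_error.mean())
print("variance :", ref_error.var())
print("skewness :", skew(ref_error))
print("kurtosis :", kurtosis(ref_error))   # excess kurtosis

kde = gaussian_kde(ref_error)              # kernel density estimate of the PDF
grid = np.linspace(ref_error.min(), ref_error.max(), 200)
pdf = kde(grid)                            # estimated PDF over the error range
```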

Relevance:

20.00%

Publisher:

Abstract:

Mainstream business process modelling techniques promote a design paradigm wherein the activities to be performed within a case, together with their usual execution order, form the backbone of a process model, on top of which other aspects are anchored. This paradigm, while effective in standardised and production-oriented domains, shows some limitations when confronted with processes where case-by-case variations and exceptions are the norm. In this thesis we develop the idea that the effective design of flexible process models calls for an alternative modelling paradigm, one in which process models are modularised along key business objects, rather than along activity decompositions. The research follows a design science method, starting from the formulation of a research problem expressed in terms of requirements, and culminating in a set of artifacts that have been devised to satisfy these requirements. The main contributions of the thesis are: (i) a meta-model for object-centric process modelling incorporating constructs for capturing flexible processes; (ii) a transformation from this meta-model to an existing activity-centric process modelling language, namely YAWL, showing the relation between object-centric and activity-centric process modelling approaches; and (iii) a Coloured Petri Net that captures the semantics of the proposed meta-model. The meta-model has been evaluated using a framework consisting of a set of workflow patterns. Moreover, the meta-model has been embodied in a modelling tool that has been used to capture two industrial scenarios.

Relevance:

20.00%

Publisher:

Abstract:

The eyelids play an important role in lubricating and protecting the surface of the eye. Each blink serves to spread fresh tears, remove debris and replenish the smooth optical surface of the eye. Yet little is known about how the eyelids contact the ocular surface and what pressure distribution exists between the eyelids and cornea. As the principal refractive component of the eye, the cornea is a major element of the eye's optics, and its optical properties are known to be susceptible to the pressure exerted by the eyelids. Abnormal eyelids, due to disease, exert altered pressure on the ocular surface because of changes in the shape, thickness or position of the eyelids. Normal eyelids also cause corneal distortions, which are most often noticed when the eyelids rest closer to the corneal centre (for example during reading). There have been many reports of monocular diplopia after reading due to corneal distortion, but prior to videokeratoscopes these localised changes could not be measured. This thesis measured the influence of eyelid pressure on the cornea after short-term near tasks, and techniques were developed to quantify eyelid pressure and its distribution.

The profile of the wave-like eyelid-induced corneal changes and the refractive effects of these distortions were investigated. Corneal topography changes due to both the upper and lower eyelids were measured for four tasks involving two angles of vertical downward gaze (20° and 40°) and two near-work tasks (reading and steady fixation). After examining the depth and shape of the corneal changes, conclusions were reached regarding the magnitude and distribution of upper and lower eyelid pressure for these task conditions. The degree of downward gaze appears to alter the upper eyelid pressure on the cornea, with deeper changes occurring after greater angles of downward gaze. Although the lower eyelid was further from the corneal centre in large angles of downward gaze, its effect on the cornea was greater than that of the upper eyelid. Eyelid tilt, curvature and position were found to influence the magnitude of eyelid-induced corneal changes. Refractively, these corneal changes are clinically and optically significant, with mean spherical and astigmatic changes of about 0.25 D after only 15 minutes of downward gaze (40° reading and steady fixation conditions). Given the magnitude of these changes, eyelid pressure in downward gaze offers a possible explanation for some of the day-to-day variation observed in refraction. Considering the magnitude of these changes and previous work on their regression, it is recommended that sustained tasks performed in downward gaze be avoided for at least 30 minutes before corneal and refractive assessment requiring high accuracy.

Novel procedures were developed to measure eyelid pressure using a thin (0.17 mm) tactile piezoresistive pressure sensor mounted on a rigid contact lens. A hydrostatic calibration system was constructed to convert the raw digital output of the sensors to actual pressure units. Conditioning the sensor prior to use regulated the measurement response, and sensor output was found to stabilise about 10 seconds after loading. The influences of various external factors on sensor output were studied. While the sensor output drifted slightly over several hours, the drift was not significant over the 30-second measurement time used for eyelid pressure, as long as the lengths of the calibration and measurement recordings were matched. The error associated with calibrating at room temperature but measuring at ocular surface temperature led to a very small overestimation of pressure. To optimally position the sensor-contact lens combination under the eyelid margin, an in vivo measurement apparatus was constructed. Using this system, eyelid pressure increases were observed when the upper eyelid was placed on the sensor, and a significant increase was apparent when the eyelid pressure was increased by pulling the upper eyelid tighter against the eye.

For a group of young adult subjects, upper eyelid pressure was measured using this piezoresistive sensor system. Three models of contact between the eyelid and ocular surface were used to calibrate the pressure readings. The first model assumed contact between the eyelid and pressure sensor over at least the pressure cell width of 1.14 mm. Using thin pressure-sensitive carbon paper placed under the eyelid, a contact imprint was measured, and this width was used for the second model of contact. Lastly, as Marx's line has been implicated as the region of contact with the ocular surface, its width was measured and used as the region of contact for the third model. The mean eyelid pressures calculated using these three models for the group of young subjects were 3.8 ± 0.7 mmHg (whole cell), 8.0 ± 3.4 mmHg (imprint width) and 55 ± 26 mmHg (Marx's line). The carbon imprints using Pressurex-micro confirmed previous suggestions that a band of the eyelid margin has primary contact with the ocular surface, and provided the best estimate of the contact region and hence eyelid pressure. Although it is difficult to directly compare the results with previous eyelid pressure measurement attempts, the eyelid pressure calculated using this model was slightly higher than previous manometer measurements but showed good agreement with the eyelid force estimated using an eyelid tensiometer.

The work described in this thesis has shown that the eyelids have a significant influence on corneal shape, even after short-term tasks (15 minutes). Instrumentation was developed using piezoresistive sensors to measure eyelid pressure. Measurements for the upper eyelid, combined with estimates of the contact region between the cornea and the eyelid, enabled quantification of the upper eyelid pressure for a group of young adult subjects. These techniques will allow further investigation of the interaction between the eyelids and the surface of the eye.
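
The relationship between the three contact models can be illustrated with a toy calculation: the same measured load spread over a narrower assumed contact band yields a proportionally higher pressure. In the sketch below, the cell width and the whole-cell pressure come from the text, while the imprint and Marx's line widths are round-number assumptions chosen only for illustration.

```python
# Same force, narrower assumed contact band => higher inferred pressure.
# Only cell_width_mm and the whole-cell pressure are taken from the text;
# the other two widths are illustrative assumptions, not measured values.
cell_width_mm = 1.14             # full pressure-cell width (from the text)
imprint_width_mm = 0.54          # assumed eyelid-margin imprint width
marx_line_width_mm = 0.08        # assumed Marx's line width

whole_cell_pressure_mmHg = 3.8   # mean pressure calibrated over the whole cell

for label, width in [("whole cell", cell_width_mm),
                     ("imprint width", imprint_width_mm),
                     ("Marx's line", marx_line_width_mm)]:
    pressure = whole_cell_pressure_mmHg * cell_width_mm / width
    print(f"{label:14s}: {pressure:5.1f} mmHg")
```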

Relevance:

20.00%

Publisher:

Abstract:

Background: SEQ Catchments Ltd and QUT are collaborating on groundwater investigations in the SE Qld region, which utilise community engagement and 3D visualisation methodologies. The projects, which have been funded by the Australian Government's NHT and Caring for our Country programmes, were initiated from local community concerns regarding groundwater sustainability and quality in areas where little was previously known.

Objectives:
• Engage local and regional stakeholders to tap all available sources of information;
• Establish on-going (2 years +) community-based groundwater / surface water monitoring programmes;
• Develop 3D visualisation from all available data; and
• Involve, train and inform the local community for improved on-ground land and water use management.

Results and findings: Respectful community engagement yielded information, access to numerous monitoring sites and education opportunities at low cost, which would otherwise be unavailable. A Framework for Community-Based Groundwater Monitoring has been documented (Todd, 2008). 3D visualisation models have been developed for basaltic settings, which relate surface features familiar to the local community to the interpreted sub-surface hydrogeology. Groundwater surface movements have been animated and compared to local rainfall using the time-series monitoring data. An important 3D visualisation feature of particular interest to the community was the interaction between groundwater and surface water. This factor was crucial in raising awareness of potential impacts of land and water use on groundwater and surface water resources.

Relevance:

20.00%

Publisher:

Abstract:

This article presents a survey of authorisation models and considers their ‘fitness-for-purpose’ in facilitating information sharing. Network-supported information sharing is an important technical capability that underpins collaboration in support of dynamic and unpredictable activities such as emergency response, national security, infrastructure protection, supply chain integration and emerging business models based on the concept of a ‘virtual organisation’. The article argues that present authorisation models are inflexible and poorly scalable in such dynamic environments due to their assumption that the future needs of the system can be predicted, which in turn justifies the use of persistent authorisation policies. The article outlines the motivation and requirements for a new flexible authorisation model that addresses the needs of information sharing. It proposes that a flexible and scalable authorisation model must allow an explicit specification of the objectives of the system, and that access decisions must be made based on a late trade-off analysis between these explicit objectives. A research agenda for the proposed Objective-based Access Control concept is presented.
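
As a purely hypothetical illustration of the proposed concept (the article presents a research agenda, not an implementation), the sketch below makes an access decision by a late, weighted trade-off between explicitly stated objectives rather than by consulting a persistent policy; every class name, weight and scoring rule here is an invented assumption.

```python
# Toy objective-based access decision: score a request against explicit,
# weighted system objectives at decision time. All names and numbers are
# illustrative assumptions, not part of the surveyed models.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Objective:
    name: str
    weight: float                    # relative importance, set at decision time
    score: Callable[[Dict], float]   # maps a request context to [0, 1]

def decide(request: Dict, objectives: List[Objective],
           threshold: float = 0.5) -> bool:
    """Grant access if the weighted objective trade-off clears a threshold."""
    total = sum(o.weight for o in objectives)
    value = sum(o.weight * o.score(request) for o in objectives) / total
    return value >= threshold

objectives = [
    Objective("information sharing", 0.7,
              lambda r: 1.0 if r["emergency"] else 0.3),
    Objective("confidentiality", 0.3,
              lambda r: 0.2 if r["external"] else 0.9),
]
# In an emergency, sharing outweighs the confidentiality cost of an external user.
print(decide({"emergency": True, "external": True}, objectives))   # True
```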

Relevance:

20.00%

Publisher:

Abstract:

Games and related virtual environments have been a much-hyped area of the entertainment industry. The classic quote is that games are now approaching the size of Hollywood box office sales [1]. Books are now appearing that talk up the influence of games on business [2], and games are one of the key drivers of present hardware development. Some of this 3D technology is now embedded right down at the operating system level via the Windows Presentation Foundation – hit Windows/Tab on your Vista box to find out... In addition to this continued growth in the area of games, there are a number of factors that affect its development in the business community. Firstly, the average age of gamers is approaching the mid-thirties, so a number of people in management positions in large enterprises are experienced users of 3D entertainment environments. Secondly, due to the demand for more computational power in both CPUs and Graphics Processing Units (GPUs), the average desktop, and any decent laptop, can run a game or virtual environment. In fact, the demonstrations at the end of this paper were developed at the Queensland University of Technology (QUT) on a standard Software Operating Environment, with an Intel Dual Core CPU and a basic Intel graphics option. What this means is that the potential exists for the easy uptake of such technology because: 1. a broad range of workers is regularly exposed to 3D virtual environment software via games; and 2. present desktop computing power is now strong enough to roll out a virtual environment solution across an entire enterprise. We believe such visual simulation environments can have a great impact in the area of business process modeling. Accordingly, in this article we outline the communication capabilities of such environments, which offer fantastic possibilities for business process modeling applications, where enterprises need to create, manage and improve their business processes, and then communicate their processes to stakeholders, both process and non-process cognizant. The article concludes with a demonstration of the work we are doing in this area at QUT.

Relevance:

20.00%

Publisher:

Abstract:

The performance of iris recognition systems is significantly affected by the segmentation accuracy, especially in non-ideal iris images. This paper proposes an improved method to localise non-circular iris images quickly and accurately. Shrinking and expanding active contour methods are combined to localise the inner and outer iris boundaries. First, the pupil region is roughly estimated based on histogram thresholding and morphological operations. Thereafter, a shrinking active contour model is used to precisely locate the inner iris boundary. Finally, the estimated inner iris boundary is used as an initial contour for an expanding active contour scheme to find the outer iris boundary. The proposed scheme is robust in finding the exact iris boundaries of non-circular and off-angle irises. In addition, occlusions of the iris images by eyelids and eyelashes are automatically excluded from the detected iris region. Experimental results on the CASIA v3.0 iris database indicate the accuracy of the proposed technique.
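
A rough sketch of this pipeline on a synthetic image, using scikit-image; the threshold, structuring-element size and snake parameters are assumptions for illustration, not the paper's values.

```python
# Shrink-then-expand active contour iris localisation on a synthetic eye.
import numpy as np
from skimage.morphology import binary_opening, disk
from skimage.measure import label, regionprops
from skimage.segmentation import active_contour
from skimage.filters import gaussian

# Synthetic eye: dark pupil (r<20) inside a mid-grey iris (r<50) on sclera.
yy, xx = np.mgrid[0:200, 0:200]
rr = np.hypot(yy - 100, xx - 100)
img = np.where(rr < 20, 0.05, np.where(rr < 50, 0.4, 0.9))

# 1. Rough pupil estimate: dark-pixel threshold (standing in for histogram
#    thresholding on real data) plus morphological clean-up.
pupil = binary_opening(img < 0.2, disk(3))
cy, cx = regionprops(label(pupil))[0].centroid

# 2. Shrinking snake for the inner (pupil) boundary, initialised outside it.
smooth = gaussian(img, 3)
s = np.linspace(0, 2 * np.pi, 200)
init_inner = np.column_stack([cy + 30 * np.sin(s), cx + 30 * np.cos(s)])
inner = active_contour(smooth, init_inner, alpha=0.015, beta=10, gamma=0.001)

# 3. Expanding snake for the outer (limbus) boundary, initialised from the
#    detected inner boundary and scaled outwards.
init_outer = (inner - [cy, cx]) * 1.5 + [cy, cx]
outer = active_contour(smooth, init_outer, alpha=0.015, beta=10, gamma=0.001)
```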

Relevance:

20.00%

Publisher:

Abstract:

The term structure of interest rates is often summarized using a handful of yield factors that capture shifts in the shape of the yield curve. In this paper, we develop a comprehensive model for volatility dynamics in the level, slope, and curvature of the yield curve that simultaneously includes level and GARCH effects along with regime shifts. We show that the level of the short rate is useful in modeling the volatility of the three yield factors and that there are significant GARCH effects present even after including a level effect. Further, we find that allowing for regime shifts in the factor volatilities dramatically improves the model’s fit and strengthens the level effect. We also show that a regime-switching model with level and GARCH effects provides the best out-of-sample forecasting performance of yield volatility. We argue that the auxiliary models often used to estimate term structure models with simulation-based estimation techniques should be consistent with the main features of the yield curve that are identified by our model.
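
The abstract does not spell out the functional form; as an illustrative reference point only (not necessarily the paper's specification), one common way to combine level and GARCH effects with Markov regime shifts for a yield factor is:

```latex
% Illustrative regime-switching level-GARCH specification (an assumption,
% not taken from the paper): f_t is a yield factor (level, slope or
% curvature), r_{t-1} the short rate, s_t a Markov regime indicator.
\begin{align*}
\Delta f_t &= \mu_{s_t} + \varepsilon_t,
\qquad \varepsilon_t = \sqrt{h_t}\; r_{t-1}^{\gamma_{s_t}}\, z_t,
\quad z_t \sim N(0,1),\\
h_t &= \omega_{s_t} + \alpha_{s_t}\,\varepsilon_{t-1}^{2} + \beta_{s_t}\, h_{t-1}.
\end{align*}
```

Here the short-rate power term carries the level effect, the recursion in $h_t$ the GARCH effect, and the regime-indexed parameters the shifts whose inclusion the paper finds to improve fit and volatility forecasts.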

Relevance:

20.00%

Publisher:

Abstract:

This paper first presents an extended ambiguity resolution model that deals with an ill-posed problem and constraints among the estimated parameters. In the extended model, a regularization criterion is used instead of the traditional least squares in order to better estimate the float ambiguities. The existing models can be derived from this general model. Secondly, the paper examines the existing ambiguity searching methods from four aspects: exclusion of nuisance integer candidates based on the available integer constraints; integer rounding; integer bootstrapping; and integer least-squares estimation. Finally, the paper systematically addresses the similarities and differences between the generalized TCAR (Three-Carrier Ambiguity Resolution) and decorrelation methods from both theoretical and practical aspects.
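
In generic notation (not necessarily the paper's), the substitution described in the first contribution replaces the ordinary least-squares criterion for the float solution with a Tikhonov-type regularization criterion that stabilises the ill-posed normal equations:

```latex
% Generic float-ambiguity estimation criteria; x stacks the float
% ambiguities and other unknowns, P is the observation weight matrix, and
% R, lambda control the regularization (notation assumed, not the paper's).
\begin{align*}
\text{least squares:}\quad
\hat{x} &= \arg\min_{x}\, (y - Ax)^{\top} P\, (y - Ax),\\
\text{regularized:}\quad
\hat{x}_{\lambda} &= \arg\min_{x}\, (y - Ax)^{\top} P\, (y - Ax)
 + \lambda\, x^{\top} R\, x.
\end{align*}
```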

Relevance:

20.00%

Publisher:

Abstract:

The main objective of this PhD was to further develop Bayesian spatio-temporal models (specifically the Conditional Autoregressive (CAR) class of models) for the analysis of sparse disease outcomes such as birth defects. The motivation for the thesis arose from problems encountered when analyzing a large birth defect registry in New South Wales. The specific components and related research objectives of the thesis were developed from gaps in the literature on current formulations of the CAR model, and from health service planning requirements. Data from a large probabilistically-linked database from 1990 to 2004, consisting of fields from two separate registries, the Birth Defect Registry (BDR) and the Midwives Data Collection (MDC), were used in the analyses in this thesis.

The main objective was split into smaller goals. The first goal was to determine how the specification of the neighbourhood weight matrix affects the smoothing properties of the CAR model, and this is the focus of chapter 6. Secondly, I hoped to evaluate the usefulness of incorporating a zero-inflated Poisson (ZIP) component as well as a shared-component model for modeling a sparse outcome, and this is carried out in chapter 7. The third goal was to identify optimal sampling and sample size schemes designed to select individual-level data for a hybrid ecological spatial model, and this is done in chapter 8. Finally, I wanted to put together the earlier improvements to the CAR model and, along with demographic projections, provide forecasts for birth defects at the Statistical Local Area (SLA) level. Chapter 9 describes how this is done.

For the first objective, I examined a series of neighbourhood weight matrices, and showed how smoothing the relative risk estimates according to similarity in an important covariate (i.e. maternal age) helped improve the model's ability to recover the underlying risk, as compared to the traditional adjacency (specifically the Queen) method of applying weights.

Next, to address the sparseness and excess zeros commonly encountered in the analysis of rare outcomes such as birth defects, I compared several models, including an extension of the usual Poisson model to encompass excess zeros in the data. This was achieved via a mixture model, which also encompassed the shared-component model to improve the estimation of sparse counts through borrowing strength across a shared component (e.g. latent risk factor/s) with a referent outcome (caesarean section was used in this example). Using the Deviance Information Criterion (DIC), I showed that the proposed model performed better than the usual models, but only when both outcomes shared a strong spatial correlation.

The next objective involved identifying the optimal sampling and sample size strategy for incorporating individual-level data with areal covariates in a hybrid study design. I performed extensive simulation studies, evaluating thirteen different sampling schemes along with variations in sample size, in the context of an ecological regression model that incorporated spatial correlation in the outcomes as well as accommodating both individual and areal measures of covariates. Using the Average Mean Squared Error (AMSE), I showed that a simple random sample of 20% of the SLAs, followed by selecting all cases in the chosen SLAs along with an equal number of controls, provided the lowest AMSE.

The final objective involved combining the improved spatio-temporal CAR model with population (i.e. women) forecasts, to provide 30-year annual estimates of birth defects at the SLA level in New South Wales, Australia. The projections were illustrated using sixteen different SLAs, representing the various areal measures of socio-economic status and remoteness. A sensitivity analysis of the assumptions used in the projection was also undertaken.

By the end of the thesis, I show how challenges in the spatial analysis of rare diseases such as birth defects can be addressed: by specifically formulating the neighbourhood weight matrix to smooth according to a key covariate (i.e. maternal age), by incorporating a ZIP component to model excess zeros in outcomes, and by borrowing strength from a referent outcome (i.e. caesarean counts). An efficient strategy for sampling individual-level data, and sample size considerations for rare diseases, are also presented. Finally, projections of birth defect categories at the SLA level are made.
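
For reference, the zero-inflated Poisson (ZIP) component mentioned above has the standard mixture form: with probability $\pi_i$ an area contributes a structural zero, and otherwise counts follow a Poisson distribution with mean $\mu_i$.

```latex
% Standard ZIP mixture for area i (the general form of the component used
% in chapter 7; the thesis's parameterisation details are not shown here).
\begin{align*}
P(Y_i = 0)   &= \pi_i + (1 - \pi_i)\, e^{-\mu_i},\\
P(Y_i = y_i) &= (1 - \pi_i)\, \frac{e^{-\mu_i}\, \mu_i^{\,y_i}}{y_i!},
\qquad y_i = 1, 2, \dots
\end{align*}
```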

Relevance:

20.00%

Publisher:

Abstract:

In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work includes a literature review of current models followed by five chapters of original research. This thesis has been submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy by publication, and therefore each of the five chapters consists of a peer-reviewed journal article. The thesis concludes with a discussion of what has been achieved during the PhD candidature, the potential applications of this research, and ways in which the research could be extended in the future.

In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to standing oscillating drag forces (as would occur in a standing sound wave), and finally on lattice surfaces in the presence of high heat gradients.

We have described in this thesis a number of new models for the description of multi-compartment networks joined by multiple, stochastically evaporating, links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates, and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes that best describe different physical situations, and these processes have been analysed using a number of mathematical methods. The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community pertaining to the proposed mechanism of thermal fragmentation, we have presented compelling evidence in this thesis supporting the currently proposed mechanism and shown that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models.

Furthermore, in this thesis a method of manipulation using acoustic standing waves is investigated. In our investigation we analysed the effect of frequency and particle size on the ability of a particle to be manipulated by means of a standing acoustic wave. We report the existence of a critical frequency for a particular particle size; this frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that at large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation, due to the decreasing size of the boundary layer between acoustic nodes. Our model utilises a multiple-time-scale approach to calculate the long-term effects of the standing acoustic field on the particles interacting with the sound. These effects are then combined with the effects of Brownian motion in order to obtain a complete mathematical description of the particle dynamics in such acoustic fields.

Finally, in this thesis we develop a numerical routine for the description of "thermal tweezers". Currently, the technique of thermal tweezers is predominantly theoretical; however, there have been a handful of successful experiments demonstrating the effect in practice. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface. Typically, theoretical simulations of the effect can be rather time consuming, with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for the simulation of particle distributions pertaining to the thermal tweezers effect uses the Fokker-Planck equation to derive a quick numerical method for the calculation of the effective diffusion constant as a result of the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method outlined in this thesis can produce a larger quantity of accurate results on a household PC in a matter of hours, which is much better than was previously achievable.
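
A minimal sketch of the second step of this approach, the finite-volume solution of the diffusion equation, is given below; the effective diffusivity profile is an arbitrary placeholder standing in for the Fokker-Planck-derived value, not a result from the thesis.

```python
# Conservative explicit finite-volume step for dp/dt = d/dx( D(x) dp/dx )
# on a 1D grid with zero-flux boundaries. D(x) is a placeholder profile.
import numpy as np

nx, L, dt, steps = 200, 1.0, 1e-5, 2000
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
D = 0.5 + 0.4 * np.cos(2 * np.pi * x)    # placeholder effective diffusivity
p = np.exp(-((x - 0.5) / 0.05) ** 2)     # initial particle density
p /= p.sum() * dx                        # normalise to unit probability mass

D_face = 0.5 * (D[:-1] + D[1:])          # diffusivity at interior cell faces
for _ in range(steps):                   # dt*D/dx^2 <= 0.36, so stable
    flux = -D_face * np.diff(p) / dx     # Fickian flux through interior faces
    flux = np.concatenate(([0.0], flux, [0.0]))   # zero-flux boundaries
    p -= dt * np.diff(flux) / dx         # conservative finite-volume update

print("mass conserved:", np.isclose(p.sum() * dx, 1.0))
```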

Relevance:

20.00%

Publisher:

Abstract:

While spatial determinants of emmetropization have been examined extensively in animal models, and spatial processing of human myopes has also been studied, there have been few studies investigating temporal aspects of emmetropization and temporal processing in human myopia. The influence of temporal light modulation on eye growth and refractive compensation has been observed in animal models, and there is evidence of temporal visual processing deficits in individuals with high myopia or other pathologies. Given this, the aims of this work were to examine the relationships between myopia (i.e. degree of myopia and progression status) and temporal visual performance, and to consider any temporal processing deficits in terms of the parallel retinocortical pathways.

Three psychophysical studies investigating temporal processing performance were conducted in young adult myopes and non-myopes: (1) backward visual masking, (2) dot motion perception and (3) phantom contour. For each experiment there were approximately 30 young emmetropes, 30 low myopes (myopia less than 5 D) and 30 high myopes (5 to 12 D). In the backward visual masking experiment, myopes were also classified according to their progression status (30 stable myopes and 30 progressing myopes).

The first study was based on the observation that the visibility of a target is reduced by a second target, termed the mask, presented quickly after the first target. Myopes were more affected by the mask when the task was biased towards the magnocellular pathway; myopes had a 25% mean reduction in performance compared with emmetropes. However, there was no difference in the effect of the mask when the task was biased towards the parvocellular system. For all test conditions, there was no significant correlation between backward visual masking task performance and either the degree of myopia or myopia progression status.

The dot motion perception study measured detection thresholds for the minimum displacement of moving dots, the maximum displacement of moving dots, and the degree of motion coherence required to correctly determine the direction of motion. The visual processing of these tasks is dominated by the magnocellular pathway. Compared with emmetropes, high myopes had a reduced ability to detect the minimum displacement of moving dots for stimuli presented at the fovea (20% higher mean threshold) and possibly at the inferior nasal retina. The minimum displacement threshold was significantly and positively correlated with myopia magnitude and axial length, and significantly and negatively correlated with retinal thickness for the inferior nasal retina. The performance of emmetropes and myopes on all the other dot motion perception tasks was similar.

In the phantom contour study, the highest temporal frequency of the flickering phantom pattern at which the contour was visible was determined. Myopes had significantly lower flicker detection limits (21.8 ± 7.1 Hz) than emmetropes (25.6 ± 8.8 Hz) for tasks biased towards the magnocellular pathway, for both high (99%) and low (5%) contrast stimuli. There was no difference in flicker limits for a phantom contour task biased towards the parvocellular pathway. For all phantom contour tasks, there was no significant correlation between flicker detection thresholds and the magnitude of myopia.

Of the psychophysical temporal tasks studied here, those primarily involving processing by the magnocellular pathway revealed differences in performance between the refractive error groups. While there are a number of interpretations of these data, this suggests that there may be a temporal processing deficit in some myopes that is selective for the magnocellular system. The minimum displacement dot motion perception task appears to be the most sensitive of the tests studied for investigating changes in visual temporal processing in myopia. Data from the visual masking and phantom contour tasks suggest that the alterations to temporal processing occur at an early stage of myopia development. In addition, the link between increased minimum displacement threshold and decreasing retinal thickness suggests that there is a retinal component to the observed modifications in temporal processing.