972 results for Depth from focus
Abstract:
Graphics Processing Units (GPUs) have become a driving force for the microelectronics industry. However, due to intellectual property issues, there is a serious lack of public information on the implementation details of the hardware architecture behind GPUs. For instance, the way texture is handled and decompressed in a GPU to reduce bandwidth usage has never been dealt with in depth from a hardware point of view. This work presents a comparative study of the hardware implementation of different texture decompression algorithms for both conventional (PC and video game console) and mobile platforms. Circuit synthesis is performed targeting both a reconfigurable hardware platform and a 90 nm standard cell library. Area-delay trade-offs are analyzed extensively, which allows us to compare the complexity of the decompressors and thus determine the suitability of each algorithm for systems with limited hardware resources.
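The abstract does not name the algorithms compared; as a concrete illustration, the sketch below decodes one 4x4 texel block of S3TC/DXT1 (BC1), the classic fixed-rate texture compression format on PCs and consoles, assuming the standard little-endian block layout (a minimal software model, not the thesis's hardware design).

import struct

def rgb565(v):
    # expand a 16-bit RGB565 value to 8-bit channels
    return ((v >> 11 & 0x1F) * 255 // 31,
            (v >> 5 & 0x3F) * 255 // 63,
            (v & 0x1F) * 255 // 31)

def decode_bc1_block(block):
    # block: 8 bytes = two RGB565 endpoints + 16 two-bit palette indices
    c0, c1, bits = struct.unpack('<HHI', block)
    p0, p1 = rgb565(c0), rgb565(c1)
    if c0 > c1:  # four-colour mode: two interpolated colours
        palette = [p0, p1,
                   tuple((2 * a + b) // 3 for a, b in zip(p0, p1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))]
    else:        # three-colour mode: midpoint colour plus transparent black
        palette = [p0, p1,
                   tuple((a + b) // 2 for a, b in zip(p0, p1)),
                   (0, 0, 0)]
    return [[palette[bits >> 2 * (4 * row + col) & 0x3] for col in range(4)]
            for row in range(4)]

In this format a 128x128 RGB texture drops from 48 KB to 8 KB, which is why the decompressor sits in the texture unit's critical memory path and its area-delay cost matters.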
Abstract:
There are about 10^14 neuronal synapses in the human brain. This huge number of connections provides the substrate for neuronal ensembles to become transiently synchronized, producing the emergence of cognitive functions such as perception, learning or thinking. Understanding the organization of this complex brain network on the basis of neurophysiological data represents one of the most important and exciting challenges for systems neuroscience. Several measures have recently been proposed to evaluate how the different parts of the brain communicate at various scales (single cells, cortical columns, or brain areas). According to their symmetry, they can be classified into two groups: symmetric measures, such as correlation, coherence or phase synchronization indexes, evaluate functional connectivity (FC); asymmetric measures, such as Granger causality or transfer entropy, can detect the direction of the interaction, which we call effective connectivity (EC). In modern neuroscience, interest in functional brain networks has grown strongly with the advent of new algorithms for studying interdependence between time series, of modern complex network theory, and of powerful techniques for recording neurophysiological data with high resolution, such as magnetoencephalography (MEG). This is, however, still a young field with several unresolved methodological questions, some of which this thesis addresses.
First, the growing number of approaches for assessing FC/EC between two or more time series, together with their mathematical complexity, makes it desirable to organize them into a single, intuitive and easy-to-use software package. I present HERMES (http://hermes.ctb.upm.es), a Matlab® toolbox designed for precisely this purpose, which encompasses several of the most common indexes for the assessment of FC and EC. I believe this toolbox will be very helpful to researchers working in the emerging field of brain connectivity analysis and of great value to the scientific community. The second practical issue tackled in this thesis is the sensitivity to deep brain sources of two types of MEG sensors, planar gradiometers and magnetometers, combined with a methodological comparison of two phase synchronization indexes: the phase locking value (PLV) and the phase lag index (PLI), the latter being less sensitive to the volume conduction effect. Comparing their performance in the study of brain networks, magnetometers and PLV yielded more densely connected networks than planar gradiometers and PLI, respectively, owing to the spurious values created by volume conduction. However, when it came to characterizing epileptic networks, PLV gave better results, because the networks obtained with PLI were very sparse.
Complex network analysis has provided new concepts that improve the characterization of interacting dynamical systems. In this framework a network is composed of nodes, symbolizing systems, whose interactions are represented by edges, and its behaviour and topology can be characterized by a large number of measures. There is theoretical and empirical evidence, however, that many of these measures are strongly correlated with one another. In this thesis I therefore reduced them to a small set that characterizes these networks efficiently while condensing the redundant information. Within this framework, selecting an appropriate threshold to decide whether a given connectivity value in the FC matrix is significant, and should be included in further analysis, becomes a crucial step. I obtained more accurate results by using a data-driven surrogate test to evaluate the significance of each edge individually than by fixing the density of connections a priori. Finally, all these methodologies were applied to the study of epilepsy, analysing resting-state MEG functional networks in two groups of epileptic patients (idiopathic generalized and frontal focal epilepsy) compared with matched healthy control subjects. Epilepsy is one of the most common neurological disorders, affecting more than 55 million people worldwide, and is characterized by a predisposition to generate epileptic seizures of abnormal, excessive or synchronous neuronal activity; it is therefore an ideal scenario for this kind of analysis and of great interest from both the clinical and the research perspective. The results reveal specific disruptions in connectivity and changes in network topology in epileptic brains, supporting the shift in emphasis from the 'focus' to the 'network' that is gaining importance in modern epilepsy research.
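HERMES's own implementations are not reproduced here; as a minimal sketch, the two phase synchronization indexes named above can be computed from their standard definitions (instantaneous phases from the Hilbert analytic signal; PLV as the modulus of the mean phase-difference vector, PLI as the absolute mean sign of the phase-difference sine):

import numpy as np
from scipy.signal import hilbert

def phase_diff(x, y):
    # instantaneous phase difference via the analytic signal
    return np.angle(hilbert(x)) - np.angle(hilbert(y))

def plv(x, y):
    # phase locking value: 1 = perfect phase locking, 0 = none
    return np.abs(np.mean(np.exp(1j * phase_diff(x, y))))

def pli(x, y):
    # phase lag index: ignores zero-lag coupling, hence its
    # robustness to volume conduction noted in the abstract
    return np.abs(np.mean(np.sign(np.sin(phase_diff(x, y)))))

Because PLI discards phase differences centred on zero, it also discards genuine zero-lag interactions, which is consistent with the sparser PLI networks reported above.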
Abstract:
The paper explores the spatial and social impacts arising from the implementation of a road-pricing scheme in the Madrid Metropolitan Area (MMA). Our analytical focus is on understanding the effects of the scheme on the transport accessibility of different social groups within the MMA. We define an evaluation framework to appraise the accessibility of different districts within the MMA in terms of the actual and perceived cost of using the road infrastructure "before" and "after" the implementation of the scheme. The framework was developed using quantitative survey data and qualitative data from focus group discussions with residents. We then simulated user behaviour (mode and route choice) based on the empirical evidence from a travel demand model for the MMA. The results from our simulation model demonstrate that implementation of the toll on the orbital metropolitan motorways (for example, the M40 and M30) decreases accessibility, mostly in districts with no viable public transport alternative. Our key finding is that the economic burden of the road-pricing scheme particularly affects unskilled and lower-income individuals living in the south of the MMA. Consequently, lower-income people reduce their use of tolled roads and have to find new arrangements for these trips: switching to public transport, spending twice as long on their commuting trips, or staying at home. The results of our research should be useful more widely to anyone wishing to better understand the important relationship between increased transport costs and social equity, especially where there is an intention to introduce similar road-pricing schemes in an urban context.
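The abstract does not reproduce the paper's cost and accessibility specification; a Hansen-type generalized-cost index is one standard way such "before/after" district comparisons are formalized, given here only as an assumed illustration:

A_i = \sum_j O_j \, e^{-\beta c_{ij}}, \qquad \Delta A_i = \frac{A_i^{\mathrm{after}} - A_i^{\mathrm{before}}}{A_i^{\mathrm{before}}}

where O_j is the number of opportunities (e.g. jobs) in district j, c_{ij} is the generalized (toll, fuel and time, actual or perceived) cost of travel from district i to district j, and \beta is a cost-sensitivity parameter; districts lacking a public transport alternative show the largest negative \Delta A_i once tolls enter c_{ij}.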
Abstract:
This study was carried out with the aim of providing an overall understanding of the current situation regarding the competitiveness of Mazatlán as a tourist destination; knowing the particular characteristics of the destination makes it possible to apply innovative, participatory procedures in the pursuit of competitiveness, which favours better tourism development. The main value of this research lies in the need to identify strategies for achieving competitiveness and turning sun-and-beach tourism into a lever for economic development and a generator of employment. The study followed a qualitative methodology implemented through several methods: on the one hand, a triangulation approach involving documentary analysis to contextualize the destination's tourism competitiveness, and on the other, analysis of data generated in focus group discussions with the stakeholders of the destination's tourism industry. The theoretical foundations of systemic competitiveness were reviewed, with particular reference to the Systemic Competitiveness Index of Mexican Cities (ICSar-ciudades), solely to define the indicators or analytical categories included in the discussion (regulatory framework, business environment and infrastructure, and human, cultural and natural resources). Among the main results, the following stand out: the need to package the destination's tourist attractions so as to constitute a tourism product that is priced and published as a complete element; the need for adequate distribution and marketing channels; an adequate mobility system in the destination that covers tourists' needs; the strengthening of tourism infrastructure to maximize tourist flows; and collaboration among all levels of government and institutions, public and private sectors, in the design and implementation of marketing plans.
Abstract:
We present sedimentary geochemical data and in situ benthic flux measurements of dissolved inorganic nitrogen (DIN: NO3-, NO2-, NH4+) and oxygen (O2) from 7 sites with variable sand content along 18°N offshore Mauritania (NW Africa). Bottom water O2 concentrations at the shallowest station were hypoxic (42 µM) and increased to 125 µM at the deepest site (1113 m). Total oxygen uptake rates were highest on the shelf (-10.3 mmol O2 m^-2 d^-1) and decreased quasi-exponentially with water depth to -3.2 mmol O2 m^-2 d^-1. Average denitrification rates estimated from a flux balance decreased with water depth from 2.2 to 0.2 mmol N m^-2 d^-1. Overall, the sediments acted as a net sink for DIN. Observed increases in δ15N-NO3 and δ18O-NO3 in the benthic chamber deployed on the shelf, characterized by muddy sand, were used to calculate apparent benthic nitrate fractionation factors of 8.0‰ (15εapp) and 14.1‰ (18εapp). Measurements of δ15N-NO2 further demonstrated that the sediments acted as a source of 15N-depleted NO2-. These observations were analyzed using an isotope box model that considered denitrification and nitrification of NH4+ and NO2-. The principal findings were that (i) net benthic 14N/15N fractionation (εDEN) was 12.9 ± 1.7‰, (ii) inverse fractionation during nitrite oxidation leads to an efflux of isotopically light NO2- (-22 ± 1.9‰), and (iii) direct coupling between nitrification and denitrification in the sediment is negligible. Previously reported εDEN values for fine-grained sediments are much lower (4-8‰). We speculate that the high benthic nitrate fractionation is driven by a combination of enhanced porewater-seawater exchange in permeable sediments and the hypoxic, high-productivity environment. Although not without uncertainties, the results presented could have important implications for understanding the current state of the marine N cycle.
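A common way to derive such apparent enrichment factors from chamber time series is the closed-system Rayleigh model (a standard formulation; that the study used exactly this form is an assumption here):

\delta^{15}\mathrm{N}_{\mathrm{NO_3^-}} \approx \delta^{15}\mathrm{N}^{0}_{\mathrm{NO_3^-}} + {}^{15}\varepsilon_{\mathrm{app}} \ln\!\left(\frac{[\mathrm{NO_3^-}]_0}{[\mathrm{NO_3^-}]}\right)

As denitrification consumes nitrate inside the chamber, the residual pool becomes progressively enriched in 15N, and 15εapp is the slope of δ15N-NO3 against the logarithm of the fraction consumed; the analogous relation for δ18O-NO3 yields 18εapp.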
Abstract:
We present in situ microelectrode measurements of sediment formation factor and porewater oxygen and pH from six stations in the North Atlantic varying in depth from 2159 to 5380 m. A numerical model of the oxygen data indicates that fluxes of oxygen to the sediments are as much as an order of magnitude higher than benthic chamber flux measurements previously reported in the same area. Model results require dissolution driven by metabolic CO2 production within the sediments to explain the pH data; even at the station with the most undersaturated bottom waters, >60% of the calcite dissolution occurs in response to metabolic CO2. Aragonite dissolution alone cannot provide the observed buffering of porewater pH, even at the shallowest station. A sensitivity test of the model that accounts for uncertainties in the bottom water saturation state and in the stoichiometry between oxygen consumption and CO2 production during respiration constrains the dissolution rate constant for calcite to between 3 and 30% day^-1, in agreement with earlier in situ determinations of the rate constant. Model results predict that over 35% of the calcium carbonate rain to these sediments dissolves at all stations, a prediction confirmed by sediment trap and CaCO3 accumulation data.
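The rate constant quoted above is conventionally defined through a saturation-state-dependent dissolution rate law (a standard form in the carbonate literature; the model's exact parameterization is not given in the abstract):

R_{\mathrm{diss}} = k \, (1 - \Omega)^{n}, \qquad \Omega = \frac{[\mathrm{Ca^{2+}}][\mathrm{CO_3^{2-}}]}{K'_{\mathrm{sp}}}

where R_diss is the fraction of solid calcite dissolving per unit time, k is the rate constant (here 3-30% day^-1), Ω is the porewater saturation state with respect to calcite, and n is a reaction order often fitted well above 1; metabolic CO2 lowers porewater [CO3^2-], pushing Ω below 1 even where bottom waters are near saturation.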
Abstract:
Living and dead benthic Foraminifera from 26 sediment surface samples from the East Atlantic continental margin (off Portugal) are studied. The stations are located on two profiles, off Cape Mondego and off Cape Sines, ranging in water depth from 45 to 3905 m. The highest values of standing crop occur on the shelf (200 m) (up to 420 specimens/10 cm^3). Below 1000 m water depth standing crop is low (5-24 specimens/10 cm^3). 151 species and species groups are distinguished. Most of the living species occur over a wide depth range. Faunal depth boundaries lie at 50/100 m, at 600/800 m, and at 1000 m. Results published from the North Atlantic and the East Mediterranean do not differ from those obtained in the samples off Portugal. Water depth (i.e. hydrostatic pressure), or another factor controlled by depth (e.g. limitation of food supply), seems to be the most important control on the benthic foraminiferal distribution.
Abstract:
The principal gaseous carbon-containing components identified in the first 400 m of sediment at Deep Sea Drilling Project Site 533, Leg 76, are methane (CH4) and carbon dioxide (CO2). Below a sub-bottom depth of about 25 m, sediment cores commonly contained pockets caused by the expansion of gas upon core recovery. The carbon isotopic composition (δ13C, ‰ relative to the PDB standard) of CH4 and CO2 in these gas pockets has been measured, resulting in the following observations: (1) δ13C-CH4 values increase with depth, from approximately -94‰ in the uppermost sediment to about -66‰ in the deepest sediment, reflecting a systematic but nonlinear depletion of 12C with depth. (2) δ13C-CO2 values also increase with depth, from about -25‰ to about -4‰, showing a depletion of 12C that closely parallels the trend in the isotopic composition of CH4. The magnitude and parallel distribution of δ13C values for both CH4 and CO2 are consistent with the concept that the CH4 formed through the microbiological reduction of CO2 derived from organic substances. These results imply that the CH4 and CO2 incorporated in gas hydrates at this site are biogenic.
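For reference, the δ13C values quoted here follow the standard delta definition (a textbook identity, not specific to this study):

\delta^{13}\mathrm{C} = \left(\frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{PDB}}} - 1\right) \times 1000\ \text{‰}

so the upward drift from -94‰ to -66‰ means the CH4 becomes progressively less depleted in 13C with depth, the pattern expected when microbial CO2 reduction preferentially consumes the light isotope from a finite pool.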
Abstract:
The process of fluid release from the subducting slab beneath the Izu arc volcanic front (Izu VF) was examined by measuring B concentrations and B isotope ratios in the Neogene fallout tephra (ODP Site 782A). Both were measured by secondary ion mass spectrometry, in a subset of matrix glasses and glassy plagioclase-hosted melt inclusions selected from material previously analyzed for major and trace elements (glasses) and radiogenic isotopes (Sr, Nd, Pb; bulk tephra). These tephra glasses have high B abundances (~10-60 ppm) and heavy δ11B values (+4.5‰ to +12.0‰), extending the previously reported range for Izu VF rocks (δ11B, +7.0‰ to +7.3‰). The glasses show striking negative correlations of δ11B with large ion lithophile element (LILE)/Nb ratios. These correlations cannot be explained by mixing two separate slab fluids, originating from the subducting sediment and the subducting basaltic crust, respectively (model A). Two alternative models (models B and C) are proposed. Model B proposes that the inverse correlations are inherited from altered oceanic crust (AOC), which shows a systematic decrease of B and LILE with increasing depth (from basaltic layer 2A to layer 3), paralleled by an increase in δ11B (from ~ +1‰ to +10‰ to +24‰). In this model, the contribution of sedimentary B is insignificant (<4% of the B in the Izu VF rocks). Model C explains the correlation as a mixture of a low-δ11B (~ +1‰) 'composite' slab fluid (a mixture of metasediment- and metabasalt-derived fluids) with a metasomatized mantle wedge containing elevated B (~1-2 ppm) and heavy δ11B (~ +14‰). The mantle wedge was likely metasomatized by 11B-rich fluids beneath the outer forearc and subsequently dragged down to arc-front depths by the descending slab. Pb-B isotope systematics indicate that, at arc-front depths, ~53% of the B in the Izu VF is derived from the wedge. This implies that the heavy δ11B values of Izu VF rocks are largely a result of fluid fractionation and do not reflect variations in slab source provenance (i.e. subducting sediment vs. basaltic crust). Since the B content of the peridotite at the outer forearc (7-58 ppm B, mean 24 +/- 16 ppm) is much higher than beneath the arc front (~1-2 ppm B), the hydrated mantle wedge must have released a B-rich fluid on its downward path. This 'wedge flux' can explain (1) the across-arc decrease in B and δ11B (e.g. Izu, Kuriles) without requiring a progressive decrease in fluid flux from the subducting slab, and (2) the thermal structure of volcanic arcs, as reflected in the B and δ11B variations of volcanic arc rocks.
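Model C is at heart a two-endmember mass balance; the concentration-weighted isotope mixing relation underlying such arguments is standard (the authors' full model adds fractionation terms not reproduced in the abstract):

\delta^{11}\mathrm{B}_{\mathrm{mix}} = \frac{f \, C_{\mathrm{fluid}} \, \delta^{11}\mathrm{B}_{\mathrm{fluid}} + (1-f) \, C_{\mathrm{wedge}} \, \delta^{11}\mathrm{B}_{\mathrm{wedge}}}{f \, C_{\mathrm{fluid}} + (1-f) \, C_{\mathrm{wedge}}}

where f is the mass fraction of the low-δ11B (~ +1‰) composite slab fluid and C denotes the boron concentration of each endmember; because isotope ratios mix weighted by concentration, the fraction of boron contributed by the wedge follows directly from these weights, which is how the ~53% wedge contribution inferred from Pb-B systematics enters the balance.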
Abstract:
Typical sizes of bubbles obtained from cavitation inception pressures measured in situ in the surface layer of the Atlantic Ocean, aboard the R/V Professor Vize in 1971 and the R/V Nerey in 1973, are reported. These results do not contradict bubble size measurements made with optical or acoustical techniques. Variability in bubble size is observed and described. This variability is related to passing from one geographical region to another (from 68°55'S to 61°52'N), to changes in depth (from 5 to 100 m) and time of day, and to spatial fluctuations within a given area of water. It is suggested that, in addition to wave breaking, there is another source of bubbles at 10-20 m depth, associated with hydrobiological processes.
Abstract:
Subantarctic Macquarie Island has substantial areas of feldmark on its plateau above 200 m altitude. Samples of the substrate (5.5 cm in depth) from bare areas of feldmark contained viable propagules of bryophyte species found at adjacent and distant sites on the island. Under laboratory conditions propagules of 15 bryophyte taxa germinated, allowing interpretation of the reasons for bare patches in feldmark: bryophytes were successful at colonizing stable ground, but where surface movement was present, burial and/or damage of propagules and young plants prevented colonization. Spherical moss polsters found in cryoturbatic areas of feldmark, however, represent a growth form that can tolerate surface movement. A conceptual model illustrating the processes associated with the colonization dynamics of bryophytes on feldmark terraces is presented. Ten of the 15 germinated taxa were nonlocal taxa that currently grow in plant communities at lower, and hence warmer, altitudes on Macquarie Island. The presence of viable propagules of these taxa provides an immediate and constant potential for dramatic vegetation change with climate change.
Abstract:
This paper complements the preceding one by Clarke et al., which looked at the long-term impact of retail restructuring on consumer choice at the local level. Whereas the previous paper was based on quantitative evidence from survey research, this paper draws on the qualitative phases of the same three-year study, and in it we aim to understand how the changing forms of retail provision are experienced at the neighbourhood and household level. The empirical material is drawn from focus groups, accompanied shopping trips, diaries, interviews, and kitchen visits with eight households in two contrasting neighbourhoods in the Portsmouth area. The data demonstrate that consumer choice involves judgments of taste, quality, and value as well as more ‘objective’ questions of convenience, price, and accessibility. These judgments are related to households’ differential levels of cultural capital and involve ethical and moral considerations as well as more mundane considerations of practical utility. Our evidence suggests that many of the terms that are conventionally advanced as explanations of consumer choice (such as ‘convenience’, ‘value’, and ‘habit’) have very different meanings according to different household circumstances. To understand these meanings requires us to relate consumers’ at-store behaviour to the domestic context in which their consumption choices are embedded. Bringing theories of practice to bear on the nature of consumer choice, our research demonstrates that consumer choice between stores can be understood in terms of accessibility and convenience, whereas choice within stores involves notions of value, price, and quality. We also demonstrate that choice between and within stores is strongly mediated by consumers’ household contexts, reflecting the extent to which shopping practices are embedded within consumers’ domestic routines and complex everyday lives. The paper concludes with a summary of the overall findings of the project, and with a discussion of the practical and theoretical implications of the study.
Abstract:
Purpose: To determine the most appropriate analysis technique for the differentiation of multifocal intraocular lens (MIOL) designs using defocus curve assessment of visual capability. Methods: Four groups of fifteen subjects were implanted bilaterally with either monofocal intraocular lenses, refractive MIOLs, diffractive MIOLs, or a combination of refractive and diffractive MIOLs. Defocus curves between -5.0 D and +1.5 D were evaluated using an absolute and a relative depth-of-focus method, the direct comparison method, and a new 'Area-of-focus' metric. The results were correlated with the subjective perception of near and intermediate vision. Results: Neither depth-of-focus method of analysis was sensitive enough to differentiate between the MIOL groups (p>0.05). The direct comparison method indicated that the refractive MIOL group performed better at +1.00, -1.00 and -1.50 D and worse at -3.00, -3.50, -4.00 and -5.00 D compared to the diffractive MIOL group (p
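The paper's exact 'Area-of-focus' definition is its own; as an illustration of the general idea, the sketch below integrates the region between a defocus curve and a fixed acuity ceiling using the trapezoidal rule (the ceiling value and the sign convention are assumptions):

import numpy as np

def area_of_focus(defocus_d, va_logmar, ceiling=0.3):
    # defocus_d: defocus levels in dioptres, ascending (e.g. -5.0 ... +1.5)
    # va_logmar: visual acuity at each level (lower logMAR = better vision)
    gain = np.clip(ceiling - np.asarray(va_logmar, dtype=float), 0.0, None)
    return np.trapz(gain, defocus_d)  # units: dioptres x logMAR

# e.g. area_of_focus(np.arange(-5.0, 1.51, 0.5), measured_acuities)

A single scalar of this kind avoids running many pairwise tests across defocus levels, which is the attraction of an area metric relative to the direct comparison method.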
Abstract:
This article examines the negotiation of face in post-observation feedback conferences on an initial teacher training programme. The conferences were held in groups with one trainer and up to four trainees and followed a set of generic norms. These norms include the right to offer advice and to criticise, speech acts which are often considered face-threatening in more everyday contexts. However, as the data analysis shows, participants also interact in ways that challenge the generic norms, some of which might be considered more conventionally face-attacking. The article argues that face should be analysed at the level of interaction (Haugh and Bargiela-Chiappini, 2010) and that situated and contextual detail is relevant to its analysis. It suggests that linguistic ethnography, which 'marries' (Wetherell, 2007) linguistics and ethnography, provides a useful theoretical framework for doing so. To this end the study draws on real-life talk-in-interaction (from transcribed recordings), the participants' perspectives (from focus groups and interviews) and situated detail (from fieldnotes) to produce a contextualised and nuanced analysis. © 2011 Elsevier B.V.
Abstract:
The incipient phase of presbyopia represents a loss in accommodative amplitude of approximately 3 dioptres between the ages of 35 and 45 and is the prelude to the need for a reading addition. The need to maintain single binocular vision during this period requires re-calibration of the correspondence between the accommodation and vergence responses. No previous study has specifically attempted to correlate changes in accommodative status with the profile of oculomotor responses occurring within the incipient phase of presbyopia. Measurements were made of the amplitude of accommodation, stimulus and response AC/A ratios, the CA/C ratio, tonic accommodation, tonic vergence, proximal vergence, vergence adaptation and accommodative adaptation in 38 subjects. Twenty subjects were aged 35 to 45 years and 10 subjects were aged 20 to 30 years at the commencement of the study. The measurements were repeated at four-monthly intervals for a total of two years. The results of this study fail to support the Hess-Gullstrand theory of presbyopia, with evidence that the effort required to produce a unit change in accommodation increases with age. The data obtained have enabled analysis of how each individual oculomotor function varies with the decline in amplitude of accommodation. MATLAB/SIMULINK software has been used to assist in the analysis and to allow existing models to be amended to represent the ageing oculomotor system accurately. This study proposes that with the decline in the amplitude of accommodation there is an increase in the accommodative convergence response per unit of accommodative response. To compensate for this increase, evidence has been found of a decrease in tonic vergence with age. If this decline in tonic vergence is not sufficient to counteract the increase in accommodative convergence, it is proposed that the near-vision response is limited to the maximum vergence response that can be tolerated, with the resulting lower accommodative response being compensated for by an increase in the subjective depth-of-focus. When the blur due to the decreased accommodative response can no longer be tolerated, the first reading addition will be required.
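For reference, the AC/A ratios mentioned above are conventionally measured with the gradient method (a standard clinical definition, not a formula specific to this study):

\mathrm{AC/A} = \frac{\Delta H}{\Delta A} \quad (\Delta/\mathrm{D})

where \Delta H is the lens-induced change in heterophoria (prism dioptres, eso positive) and \Delta A is the corresponding change in accommodative stimulus in dioptres (equal to minus the power of the added lens). In these terms, the thesis's proposal amounts to the response AC/A (computed against measured accommodation) rising with age even where the stimulus AC/A (computed against accommodative demand) appears comparatively stable.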