979 results for Spatial travel pattern
Abstract:
Remote sensing data are routinely used in ecology to investigate the relationship between landscape pattern, as characterised by land use and land cover maps, and ecological processes. Multiple factors related to the representation of geographic phenomena have been shown to affect the characterisation of landscape pattern, resulting in spatial uncertainty. This study statistically investigated the interaction between landscape spatial pattern and geospatial processing methods, unlike most papers, which consider the effect of each factor only in isolation. This matters because data used to calculate landscape metrics typically undergo a series of data abstraction processing tasks, which are rarely performed in isolation. The geospatial processing methods tested were the aggregation method and the choice of pixel size used to aggregate data. These were compared against two components of landscape pattern: spatial heterogeneity and the proportion of land cover class area. The interactions and their effect on the final land cover map were described using landscape metrics to measure landscape pattern and classification accuracy (the response variables). All landscape metrics and classification accuracy were shown to be affected both by landscape pattern and by processing methods. Large variability in the response variables and interactions between the explanatory variables were observed. However, even though interactions occurred, they affected only the magnitude of the difference in landscape metric values. Thus, provided that the same processing methods are used, landscapes should retain their ranking when their landscape metrics are compared. For example, highly fragmented landscapes will always have larger values for the landscape metric "number of patches" than less fragmented landscapes. But the magnitude of the difference between landscapes may change, and therefore absolute values of landscape metrics may need to be interpreted with caution.
The explanatory variables with the largest effects were spatial heterogeneity and pixel size; these tended to produce both large main effects and large interactions. The high variability in the response variables and the interactions among the explanatory variables indicate that it would be difficult to generalise about the impact of processing on landscape pattern, as only two processing methods were tested and untested processing methods may result in even greater spatial uncertainty. © 2013 Elsevier B.V.
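The aggregation step the study varies can be illustrated with a toy example. This is a hypothetical sketch only: the grid values, class labels, and block size are invented, and a real workflow would use a GIS raster library rather than nested lists.

```python
def majority_aggregate(grid, factor):
    """Aggregate a square land-cover grid by `factor`, assigning each
    coarse pixel the most frequent class among its fine pixels."""
    n = len(grid)
    out = []
    for i in range(0, n, factor):
        row = []
        for j in range(0, n, factor):
            block = [grid[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(max(set(block), key=block.count))  # majority rule
        out.append(row)
    return out

def class_proportion(grid, cls):
    """Proportion of cells belonging to class `cls`."""
    cells = [c for row in grid for c in row]
    return cells.count(cls) / len(cells)

# Invented 4x4 map with two classes, aggregated to 2x2 pixels.
fine = [
    [1, 1, 2, 2],
    [1, 2, 2, 2],
    [2, 2, 1, 1],
    [2, 2, 1, 2],
]
coarse = majority_aggregate(fine, 2)
p_fine = class_proportion(fine, 1)      # 6/16 = 0.375
p_coarse = class_proportion(coarse, 1)  # 2/4 = 0.5
```

Even in this tiny case the proportion of class-1 area shifts under aggregation (0.375 to 0.5), which is one source of the spatial uncertainty the abstract describes.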
Abstract:
The processing conducted by the visual system requires the combination of signals that are detected at different locations in the visual field. The processes by which these signals are combined are explored here using psychophysical experiments and computer modelling. Most of the work presented in this thesis is concerned with the summation of contrast over space at detection threshold. Previous investigations of this sort have been confounded by the inhomogeneity in contrast sensitivity across the visual field. Experiments performed in this thesis find that the decline in log contrast sensitivity with eccentricity is bilinear, with an initial steep fall-off followed by a shallower decline. This decline is scale-invariant for spatial frequencies of 0.7 to 4 c/deg. A detailed map of the inhomogeneity is developed, and applied to area summation experiments both by incorporating it into models of the visual system and by using it to compensate stimuli in order to factor out the effects of the inhomogeneity. The results of these area summation experiments show that the summation of contrast over area is spatially extensive (occurring over 33 stimulus carrier cycles), and that summation behaviour is the same in the fovea, parafovea, and periphery. Summation occurs according to a fourth-root summation rule, consistent with a “noisy energy” model. This work is extended to investigate the visual deficit in amblyopia, finding that area summation is normal in amblyopic observers. Finally, the methods used to study the summation of threshold contrast over area are adapted to investigate the integration of coherent orientation signals in a texture. The results of this study are described by a two-stage model, with a mandatory local combination stage followed by flexible global pooling of these local outputs. In each study, the results suggest a more extensive combination of signals in vision than has been previously understood.
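The fourth-root summation rule mentioned above has a simple closed form: detection threshold falls as the fourth root of stimulus area. A minimal numerical sketch, with an arbitrary illustrative baseline threshold:

```python
def threshold(area, c1=1.0):
    """Threshold contrast under fourth-root summation:
    c(A) = c1 * A**(-1/4), with c1 an arbitrary baseline."""
    return c1 * area ** -0.25

# A sixteen-fold increase in area halves the threshold under this rule.
ratio = threshold(16.0) / threshold(1.0)  # 16**(-1/4) = 0.5
```

On log-log axes this rule is a straight line of slope -1/4, which is how such summation data are typically plotted.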
Abstract:
We summarize the various strands of research on peripheral vision and relate them to theories of form perception. After a historical overview, we describe quantifications of the cortical magnification hypothesis, including an extension of Schwartz's cortical mapping function. The merits of this concept are considered across a wide range of psychophysical tasks, followed by a discussion of its limitations and the need for non-spatial scaling. We also review the eccentricity dependence of other low-level functions including reaction time, temporal resolution, and spatial summation, as well as perimetric methods. A central topic is then the recognition of characters in peripheral vision, both at low and high levels of contrast, and the impact of surrounding contours known as crowding. We demonstrate how Bouma's law, specifying the critical distance for the onset of crowding, can be stated in terms of the retinocortical mapping. The recognition of more complex stimuli, like textures, faces, and scenes, reveals a substantial impact of mid-level vision and cognitive factors. We further consider eccentricity-dependent limitations of learning, both at the level of perceptual learning and pattern category learning. Generic limitations of extrafoveal vision are observed for the latter in categorization tasks involving multiple stimulus classes. Finally, models of peripheral form vision are discussed. We report that peripheral vision is limited with regard to pattern categorization by a distinctly lower representational complexity and processing speed. Taken together, the limitations of cognitive processing in peripheral vision appear to be as significant as those imposed on low-level functions and by way of crowding.
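Bouma's law, referenced above, is commonly stated as a critical spacing that grows linearly with eccentricity, with a proportionality constant near 0.5. The sketch below uses that textbook form and omits the small foveal offset:

```python
def critical_spacing(eccentricity_deg, bouma_constant=0.5):
    """Bouma's rule: flankers closer than roughly half the target's
    eccentricity produce crowding (centre-to-centre, in degrees)."""
    return bouma_constant * eccentricity_deg

# At 8 deg eccentricity, flankers within ~4 deg are expected to crowd.
s = critical_spacing(8.0)
```

The abstract's point is that this critical distance can equivalently be stated as a roughly constant separation on the cortical surface via the retinocortical mapping.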
Abstract:
How are the image statistics of global image contrast computed? We answered this by using a contrast-matching task for checkerboard configurations of ‘battenberg’ micro-patterns where the contrasts and spatial spreads of interdigitated pairs of micro-patterns were adjusted independently. Test stimuli were 20 × 20 arrays with various sized cluster widths, matched to standard patterns of uniform contrast. When one of the test patterns contained a pattern with much higher contrast than the other, that determined global pattern contrast, as in a max() operation. Crucially, however, the full matching functions had a curious intermediate region where low contrast additions for one pattern to intermediate contrasts of the other caused a paradoxical reduction in perceived global contrast. None of the following models predicted this: RMS, energy, linear sum, max, Legge and Foley. However, a gain control model incorporating wide-field integration and suppression of nonlinear contrast responses predicted the results with no free parameters. This model was derived from experiments on summation of contrast at threshold, and masking and summation effects in dipper functions. Those experiments were also inconsistent with the failed models above. Thus, we conclude that our contrast gain control model (Meese & Summers, 2007) describes a fundamental operation in human contrast vision.
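The candidate statistics that failed to predict the matching data (RMS, energy, linear sum, max) can each be written as a simple pooling rule over local contrasts. A sketch with arbitrary example contrasts; note that all four rules are non-decreasing in each input, so none can produce the paradoxical reduction in perceived global contrast described above:

```python
import math

def pool(contrasts, rule):
    """Pool a list of local contrasts into one global statistic."""
    if rule == "max":
        return max(contrasts)
    if rule == "linear":
        return sum(contrasts)
    if rule == "energy":
        return sum(c ** 2 for c in contrasts)
    if rule == "rms":
        return math.sqrt(sum(c ** 2 for c in contrasts) / len(contrasts))
    raise ValueError(rule)

# Invented contrasts for interdigitated low- and high-contrast patches.
local = [0.05, 0.20]
summary = {r: pool(local, r) for r in ("max", "linear", "energy", "rms")}
```

A gain-control model differs precisely in having a divisive suppression term, so adding low-contrast patches can reduce the pooled response.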
Abstract:
The tauopathies are a major molecular group of neurodegenerative disorders characterised by the deposition of abnormal cellular aggregates of the microtubule-associated protein (MAP) tau in the form of neuronal cytoplasmic inclusions (NCI). Recent research suggests that cell-to-cell propagation of pathogenic tau may be involved in the neurodegeneration of these disorders. If pathogenic tau spreads along anatomical pathways, it may give rise to specific spatial patterns of the NCI in brain tissue. To test this hypothesis, the spatial patterns of NCI in cerebral cortical regions were compared in tissue sections taken from five major tauopathies: (1) argyrophilic grain disease (AGD), (2) Alzheimer's disease (AD), (3) corticobasal degeneration (CBD), (4) Pick's disease (PiD), and (5) progressive supranuclear palsy (PSP). In the cerebral cortex of these disorders, NCI were frequently aggregated into clusters, and the clusters were regularly distributed parallel to the pia mater. In a significant proportion of regions, the mean size of the regularly distributed clusters of NCI was in the range 400–800 μm, measured parallel to the pia mater, approximating the dimension of the cell columns associated with the cortico-cortical anatomical pathways. Hence, the data suggest that cortical NCI in the tauopathies exhibit a spatial pattern in the cortex which could result from the spread of pathogenic tau along anatomical pathways. Treatments designed to protect the cortex from tau propagation may therefore be applicable across several different disorders within this molecular group.
Abstract:
An Automatic Vehicle Location (AVL) system is a computer-based vehicle tracking system capable of determining a vehicle's location in real time. As a major technology of the Advanced Public Transportation System (APTS), AVL systems have been widely deployed by transit agencies for purposes such as real-time operation monitoring, computer-aided dispatching, and arrival time prediction. AVL systems make available a large amount of transit performance data that are valuable for transit performance management and planning purposes. However, the difficulty of extracting useful information from the huge spatial-temporal database has hindered off-line applications of the AVL data.
In this study, a data mining process, including data integration, cluster analysis, and multiple regression, is proposed. The AVL-generated data are first integrated into a Geographic Information System (GIS) platform. A model-based cluster method is employed to investigate the spatial and temporal patterns of transit travel speeds, which may be easily translated into travel times. Transit speed variations along route segments are identified. Transit service periods such as morning peak, mid-day, afternoon peak, and evening are determined based on analyses of transit travel speed variations for different times of day. Seasonal patterns of transit performance are investigated using analysis of variance (ANOVA). Travel speed models based on the clustered time-of-day intervals are developed using factors identified as having significant effects on speed for different time-of-day periods.
It was found that transit performance varied across seasons and time-of-day periods. The geographic location of a transit route segment also plays a role in the variation of transit performance.
The results of this research indicate that advanced data mining techniques have good potential for providing automated techniques to assist transit agencies in service planning, scheduling, and operations control.
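As a much-simplified, hypothetical stand-in for the study's model-based clustering of AVL speeds, one can group speed records by hour of day and flag below-average hours as candidate peak periods. All speed values below are invented:

```python
# Invented AVL speed records (mph), keyed by hour of day.
speeds_by_hour = {
    7: [18.0, 17.5, 19.0],   # morning peak (slow)
    11: [26.0, 27.5],        # mid-day
    17: [16.0, 15.5, 17.0],  # afternoon peak (slow)
    21: [29.0, 30.0],        # evening
}

# Mean speed per hour and overall mean across all records.
hour_means = {h: sum(v) / len(v) for h, v in speeds_by_hour.items()}
n_records = sum(len(v) for v in speeds_by_hour.values())
overall = sum(sum(v) for v in speeds_by_hour.values()) / n_records

# Hours slower than the overall mean are candidate peak periods.
peak_hours = sorted(h for h, m in hour_means.items() if m < overall)
```

The actual study fits a statistical mixture model to the speed data rather than thresholding against a mean, but the goal is the same: partitioning the day into homogeneous service periods.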
Abstract:
This dissertation aimed to improve travel time estimation for transportation planning by developing a travel time estimation method that incorporates the effects of signal timing plans, which are difficult to consider in planning models. For this purpose, an analytical model was developed. The model parameters were calibrated using data from CORSIM microscopic simulation, with signal timing plans optimized using the TRANSYT-7F software. The independent variables in the model are link length, free-flow speed, and traffic volumes from the competing turning movements. The developed model has three advantages over traditional link-based or node-based models. First, it considers the influence of signal timing plans for a variety of traffic volume combinations without requiring signal timing information as input. Second, it describes the non-uniform spatial distribution of delay along a link, making it possible to estimate the impacts of queues at different locations upstream of an intersection and to attribute delays to a subject link and its upstream link. Third, it shows promise of improving the accuracy of travel time prediction. The mean absolute percentage error (MAPE) of the model is 13% for a set of field data from the Minnesota Department of Transportation (MDOT); this is close to the MAPE of the uniform delay in the HCM 2000 method (11%). The HCM is the industry-accepted analytical model in the existing literature, but it requires signal timing information as input for calculating delays. The developed model also outperforms the HCM 2000 method on a set of Miami-Dade County data representing congested traffic conditions, with a MAPE of 29%, compared to 31% for the HCM 2000 method. These advantages make the proposed model feasible for application to a large network without the burden of signal timing input, while improving the accuracy of travel time estimation.
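The MAPE statistic used above to compare the model against field data is straightforward to compute; the observed and estimated travel times below are invented illustrations:

```python
def mape(observed, estimated):
    """Mean absolute percentage error, in percent.
    Assumes all observed values are nonzero."""
    return 100.0 * sum(abs(o - e) / o
                       for o, e in zip(observed, estimated)) / len(observed)

# Invented link travel times in seconds: field-measured vs. model output.
obs = [100.0, 120.0, 80.0]
est = [110.0, 114.0, 88.0]
err = mape(obs, est)  # (10/100 + 6/120 + 8/80) / 3 * 100 ≈ 8.33%
```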
An assignment model with the developed travel time estimation method has been implemented in a South Florida planning model, which improved assignment results.
Abstract:
Annual Average Daily Traffic (AADT) is a critical input to many transportation analyses. By definition, AADT is the average 24-hour volume at a highway location over a full year. Traditionally, AADT is estimated using a mix of permanent and temporary traffic counts. Because field collection of traffic counts is expensive, it is usually done only for major roads, leaving most local roads without any AADT information. However, AADTs are needed for local roads for many applications. For example, AADTs are used by state Departments of Transportation (DOTs) to calculate the crash rates of all local roads in order to identify the top five percent of hazardous locations for annual reporting to the U.S. DOT.
This dissertation develops a new method for estimating AADTs for local roads using travel demand modeling. A major component of the new method is a parcel-level trip generation model that estimates the trips generated by each parcel. The model uses the tax parcel data together with the trip generation rates and equations provided by the ITE Trip Generation Report. The generated trips are then distributed to existing traffic count sites using a parcel-level trip distribution gravity model. The all-or-nothing assignment method is then used to assign the trips onto the roadway network to estimate the final AADTs. The entire process was implemented in the Cube demand modeling system with extensive spatial data processing in ArcGIS.
To evaluate the performance of the new method, data from several study areas in Broward County, Florida were used. The estimated AADTs were compared with those from two existing methods, using actual traffic counts as the ground truth. The results show that the new method performs better than both existing methods. One limitation of the new method is that it relies on Cube, which limits the number of zones to 32,000; accordingly, a study area exceeding this limit must be partitioned into smaller areas. Because AADT estimates for roads near the boundary areas were found to be less accurate, further research could examine the best way to partition a study area to minimize this impact.
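The trip distribution step above uses a gravity model. A minimal sketch, assuming a simple inverse-distance decay function; the production, attraction, and distance values are invented:

```python
def gravity_distribute(productions, attractions, distances, beta=1.0):
    """Gravity-model trip distribution:
    T_ij = P_i * A_j * d_ij**(-beta) / sum_k(A_k * d_ik**(-beta))."""
    trips = []
    for i, p in enumerate(productions):
        weights = [a * distances[i][j] ** -beta
                   for j, a in enumerate(attractions)]
        total = sum(weights)
        trips.append([p * w / total for w in weights])
    return trips

# One parcel producing 100 trips, two count sites with equal attraction;
# the nearer site (distance 1 vs. 4) receives 4x the trips at beta = 1.
T = gravity_distribute(productions=[100.0],
                       attractions=[50.0, 50.0],
                       distances=[[1.0, 4.0]])
```

Each row of `T` sums to the parcel's productions, so trips are conserved in the distribution step.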
Abstract:
We address the relative importance of nutrient availability, in relation to other physical and biological factors, in determining plant community assemblages around Everglades tree islands (Everglades National Park, Florida, USA). We carried out a one-time survey of elevation, soil, water level, and vegetation structure and composition at 138 plots located along transects in three tree islands in the Park's major drainage basin. We used an RDA variance partitioning technique to assess the relative importance of nutrient availability (soil N and P) and other factors in explaining herb and tree assemblages of tree island tails and the surrounding marshes. The upland areas of the tree islands accumulate P and show low N concentration, producing a strong island-wide gradient in soil N:P ratio. While the soil N:P ratio plays a significant role in determining herb-layer and tree-layer community assemblage in tree island tails, part of its variance is shared with hydrology, and the total species variance explained by the predictors is very low. We define a strong gradient in nutrient availability (soil N:P ratio) closely related to hydrology. Hydrology and nutrient availability both influence community assemblages around tree islands, but they appear to act together through a complex mechanism. Future research should focus on separating these two factors in order to determine whether nutrient leaching from tree islands determines community assemblages and local landscape pattern in the Everglades, and how this process might be affected by water management.
Abstract:
In the current managed Everglades system, the pre-drainage, patterned mosaic of sawgrass ridges, sloughs, and tree islands has been substantially altered or reduced, largely as a result of human alterations to the historic ecological and hydrological processes that sustained landscape patterns. The pre-compartmentalization ridge and slough landscape was a mosaic of sloughs, elongated sawgrass ridges (50–200 m wide), and tree islands. The ridges, sloughs, and tree islands were elongated in the direction of the water flow, with roughly equal areas of ridge and slough. Over the past decades, the ridge-slough topographic relief and spatial patterning have degraded in many areas of the Everglades. Nutrient-enriched areas have become dominated by Typha with little topographic relief; areas of reduced flow have lost the elongated ridge-slough topography; and ponded areas with excessively long hydroperiods have experienced a decline in ridge prevalence and shape, and in the number of tree islands (Sklar et al. 2004, Ogden 2005).
Abstract:
Hydrogeologic variables controlling groundwater exchange with inflow and flow-through lakes were simulated using a three-dimensional numerical model (MODFLOW) to investigate and quantify spatial patterns of lake bed seepage and hydraulic head distributions in the porous medium surrounding the lakes. The total annual inflow and outflow were also calculated as a percentage of lake volume for flow-through lake simulations. The general exponential decline of seepage rates with distance offshore was best demonstrated at lower anisotropy ratios (i.e., Kh/Kv = 1, 10), with increasing deviation from the exponential pattern as anisotropy was increased to 100 and 1000. 2-D vertical section models constructed for comparison with the 3-D models showed that groundwater heads and seepages were higher in the 3-D simulations. Adding low-conductivity lake sediments decreased seepage rates nearshore and increased seepage rates offshore in inflow lakes, and increased the area of groundwater inseepage on the beds of flow-through lakes. Introducing heterogeneity into the medium decreased the water table and seepage rates nearshore, and increased seepage rates offshore in inflow lakes. A laterally restricted aquifer located on the downgradient side of the flow-through lake increased the area of outseepage. Recharge rate, lake depth, and lake bed slope had relatively little effect on the spatial patterns of seepage rates and groundwater exchange with lakes.
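The exponential decline of seepage with distance offshore noted above has the idealised form q(x) = q0 · exp(-x/L). A sketch with invented parameter values; as the abstract notes, real profiles deviate from this form as the anisotropy ratio Kh/Kv increases:

```python
import math

def seepage(x, q0=1.0, decay_length=10.0):
    """Idealised lake-bed seepage flux at distance x offshore:
    q(x) = q0 * exp(-x / L). q0 and L are invented values."""
    return q0 * math.exp(-x / decay_length)

nearshore = seepage(0.0)    # maximum flux at the shoreline
offshore = seepage(10.0)    # one decay length out: falls to q0 / e
```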
Abstract:
El Niño and the Southern Oscillation (ENSO) is a cycle that is initiated in the equatorial Pacific Ocean and is recognized on interannual timescales by oscillating patterns in tropical Pacific sea surface temperatures (SSTs) and atmospheric circulations. Using correlation and regression analysis of datasets that include SSTs and other interdependent variables, including precipitation, surface winds, and sea level pressure, this research seeks to quantify recent changes in ENSO behavior. Specifically, the amplitude, frequency of occurrence, and spatial characteristics (i.e., events with maximum amplitude in the Central Pacific versus the Eastern Pacific) are investigated. The research is based on the question: "Are the statistics of ENSO changing due to increasing greenhouse gas concentrations?" Our hypothesis is that the present-day changes in the amplitude, frequency, and spatial characteristics of ENSO are determined by the natural variability of the ocean-atmosphere climate system, not by the observed changes in radiative forcing due to changing greenhouse gas concentrations. Statistical analysis, including correlation and regression analysis, is performed on observational ocean and atmospheric datasets available from the National Oceanic and Atmospheric Administration (NOAA) and the National Center for Atmospheric Research (NCAR), and on coupled model simulations from the Coupled Model Intercomparison Project (phase 5, CMIP5). Datasets are analyzed with a particular focus on ENSO over the last thirty years. Understanding the observed changes in the ENSO phenomenon over recent decades has worldwide significance. ENSO is the largest climate signal on timescales of 2-7 years and affects billions of people via atmospheric teleconnections that originate in the tropical Pacific. These teleconnections explain why changes in ENSO can lead to climate variations in areas including North and South America, Asia, and Australia.
For the United States, El Niño events are linked to a decreased number of hurricanes in the Atlantic basin, reduced precipitation in the Pacific Northwest, and increased precipitation throughout the southern United States during winter months. Understanding variability in the amplitude, frequency, and spatial characteristics of ENSO is crucial for decision makers who must adapt where regional ecology and agriculture are affected by ENSO.
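The correlation analysis described above can be sketched as a pure-Python Pearson correlation between an ENSO index and a teleconnected variable; both series below are invented illustrations, not observational data:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Invented yearly ENSO index values and winter precipitation totals
# (cm) for a hypothetically teleconnected region.
enso_index = [1.5, -0.5, 0.8, -1.2, 2.0]
precip = [32.0, 18.0, 27.0, 15.0, 40.0]
r = pearson(enso_index, precip)  # strongly positive for these values
```

In practice such correlations are computed on gridded fields and tested for significance, but the pairwise statistic is the same.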
Abstract:
The nonlinear interaction between light and atoms is an extensive field of study with a broad range of applications in quantum information science and condensed matter physics. Nonlinear optical phenomena occurring in cold atoms are particularly interesting because such slowly moving atoms can spatially organize into density gratings, which allows for studies involving optical interactions with structured materials. In this thesis, I describe a novel nonlinear optical effect that arises when cold atoms spatially bunch in an optical lattice. I show that employing this spatial atomic bunching provides access to a unique physical regime with reduced thresholds for nonlinear optical processes and enhanced material properties. Using this method, I observe the nonlinear optical phenomenon of transverse optical pattern formation at record-low powers. These transverse optical patterns are generated by a wave-mixing process that is mediated by the cold atomic vapor. The optical patterns are highly multimode and induce rich non-equilibrium atomic dynamics. In particular, I find that there exists a synergistic interplay between the generated optical patterns and the atoms, wherein the scattered fields help the atoms to self-organize into new, multimode structures that are not externally imposed on the atomic sample. These self-organized structures in turn enhance the power in the optical patterns. I provide the first detailed investigation of the motional dynamics of atoms that have self-organized in a multimode geometry. I also show that the transverse optical patterns induce Sisyphus cooling in all three spatial dimensions, which is the first observation of spontaneous three-dimensional cooling. My experiment represents a unique means by which to study nonlinear optics and non-equilibrium dynamics at ultra-low required powers.