911 results for Traffic Pattern Analysis
Abstract:
Discrete, microscopic lesions develop in the brain in a number of neurodegenerative diseases. These lesions may not be randomly distributed in the tissue but instead exhibit a spatial pattern, i.e., a departure from randomness towards regularity or clustering. The spatial pattern of a lesion may reflect its development in relation to other brain lesions or to neuroanatomical structures. Hence, a study of spatial pattern may help to elucidate the pathogenesis of a lesion. A number of statistical methods can be used to study the spatial patterns of brain lesions. They range from simple tests of whether the distribution of a lesion departs from randomness to more complex methods which can detect clustering and the size, distribution, and spacing of clusters. This paper reviews the uses and limitations of these methods as applied to neurodegenerative disorders, and in particular to senile plaque formation in Alzheimer's disease.
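As an illustration of the simplest of these methods, a departure from randomness can be tested by counting lesions in contiguous sample fields (quadrats) and computing the variance-to-mean ratio: under complete spatial randomness the counts are Poisson and the ratio is near 1, with larger values suggesting clustering and smaller values regularity. A minimal sketch in Python; the field counts are hypothetical, not data from the paper:

    import numpy as np
    from scipy import stats

    def dispersion_test(counts):
        """Index-of-dispersion test for spatial randomness.

        counts: lesion counts per quadrat (sample field). Under complete
        spatial randomness the counts are Poisson, so (n-1)*variance/mean
        follows a chi-square distribution with n-1 degrees of freedom.
        Ratio > 1 suggests clustering, < 1 suggests regularity.
        """
        counts = np.asarray(counts, dtype=float)
        n = counts.size
        ratio = counts.var(ddof=1) / counts.mean()
        chi2 = (n - 1) * ratio
        # two-sided p-value: both clustering and regularity are departures
        p = 2 * min(stats.chi2.cdf(chi2, n - 1), stats.chi2.sf(chi2, n - 1))
        return ratio, p

    # hypothetical plaque counts in 10 contiguous sample fields
    ratio, p = dispersion_test([3, 7, 0, 9, 1, 8, 0, 6, 2, 10])
    print(f"variance/mean = {ratio:.2f}, p = {p:.3f}")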
Abstract:
Analyzing geographical patterns by collocating events, objects or their attributes has a long history in surveillance and monitoring, and is particularly applied in environmental contexts, such as ecology or epidemiology. The identification of patterns or structures at some scales can be addressed using spatial statistics, particularly marked point process methodologies. Classification and regression trees are also related to this goal of finding "patterns" by deducing the hierarchy of influence of variables on a dependent outcome. Such variable selection methods have been applied to spatial data, but often without explicitly acknowledging the spatial dependence. Many methods routinely used in exploratory point pattern analysis are second-order statistics, used in a univariate context, though there is also a wide literature on modelling methods for multivariate point processes. This paper proposes an exploratory approach for multivariate spatial data using higher-order statistics built from co-occurrences of events or marks given by the point processes. A spatial entropy measure, derived from these multinomial distributions of co-occurrences at a given order, constitutes the basis of the proposed exploratory methods.
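A rough sketch of the underlying quantity: collect the marks of each event together with those of its nearest neighbours into co-occurrence tuples, estimate their multinomial distribution, and take the Shannon entropy. The neighbourhood definition and the data below are illustrative assumptions, not the paper's exact estimator:

    import numpy as np
    from collections import Counter
    from scipy.spatial import cKDTree

    def spatial_entropy(coords, marks, order=2):
        """Shannon entropy of mark co-occurrences at a given order (>= 2).

        For each event, form the multiset of marks of the event and its
        (order-1) nearest neighbours, then compute the entropy of the
        resulting multinomial distribution of co-occurrence tuples.
        """
        tree = cKDTree(coords)
        # the nearest of the k queried points is the event itself
        _, idx = tree.query(coords, k=order)
        tuples = [tuple(sorted(marks[i] for i in row)) for row in idx]
        freqs = np.array(list(Counter(tuples).values()), dtype=float)
        p = freqs / freqs.sum()
        return -(p * np.log(p)).sum()

    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 1, size=(200, 2))       # event locations
    marks = rng.choice(["A", "B", "C"], size=200)   # categorical marks
    print(f"order-2 spatial entropy: {spatial_entropy(coords, marks):.3f}")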
Abstract:
To determine the factors influencing the distribution of β-amyloid (Aβ) deposits in Alzheimer's disease (AD), the spatial patterns of the diffuse, primitive, and classic Aβ deposits were studied from the superior temporal gyrus (STG) to sector CA4 of the hippocampus in six sporadic cases of the disease. In cortical gyri and in the CA sectors of the hippocampus, the Aβ deposits were distributed either in clusters 200-6400 μm in diameter that were regularly distributed parallel to the tissue boundary or in larger clusters greater than 6400 μm in diameter. In some regions, smaller clusters of Aβ deposits were aggregated into larger 'superclusters'. In many cortical gyri, the density of Aβ deposits was positively correlated with distance below the gyral crest. In the majority of regions, clusters of the diffuse, primitive, and classic deposits were not spatially correlated with each other. In two cases, double immunolabelled to reveal the Aβ deposits and blood vessels, the classic Aβ deposits were clustered around the larger diameter vessels. These results suggest a complex pattern of Aβ deposition in the temporal lobe in sporadic AD. A regular distribution of Aβ deposit clusters may reflect the degeneration of specific cortico-cortical and cortico-hippocampal pathways and the influence of the cerebral blood vessels. Large-scale clustering may reflect the aggregation of deposits in the depths of the sulci and the coalescence of smaller clusters.
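Cluster sizes in this range are commonly estimated by counting deposits in contiguous fields along a strip of cortex and examining a clustering statistic at successively doubled block sizes; a peak in the variance-to-mean ratio at block size b suggests clusters roughly b fields wide. A minimal sketch of one plausible block-size analysis, not necessarily the authors' exact procedure (the 200 μm field width matches the abstract; the counts are simulated):

    import numpy as np

    def block_variance_profile(counts, max_power=5):
        """Variance/mean ratio of block totals at doubling block sizes.

        counts: deposit counts in contiguous equal-width fields along a
        cortical strip. A peak at block size b suggests clusters roughly
        b fields wide.
        """
        counts = np.asarray(counts, dtype=float)
        profile = {}
        for power in range(max_power + 1):
            size = 2 ** power
            n_blocks = counts.size // size
            if n_blocks < 2:
                break
            blocks = counts[: n_blocks * size].reshape(n_blocks, size).sum(axis=1)
            profile[size] = blocks.var(ddof=1) / blocks.mean()
        return profile

    # simulated counts in 64 fields of 200 micrometres each, with
    # cluster structure injected at a scale of 8 fields
    rng = np.random.default_rng(1)
    counts = rng.poisson(3, 64) + np.repeat(rng.poisson(4, 8), 8)
    for size, vm in block_variance_profile(counts).items():
        print(f"block of {size} fields (~{size * 200} um): V/M = {vm:.2f}")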
Abstract:
Our aim was to approach an important and readily investigable phenomenon – connected to a relatively simple but real field situation – in such a way that the results of field observations would be directly comparable with the predictions of a simulation model-system built on a simple mathematical apparatus, and simultaneously to obtain a hypothesis-system that creates the theoretical opportunity for a later series of experimental studies. As the phenomenon to study, we chose the seasonal coenological changes of an aquatic and semiaquatic Heteroptera community. Based on the observed data, we developed an ecological model-system that is suitable for generating realistic patterns closely resembling the observed temporal patterns, with whose help predictions can be made for alternative climatic circumstances not experienced before (e.g. climate change), and which, furthermore, can simulate experimental circumstances. The stable coenological state-plane, constructed on the principle of indirect ordination, is suitable for the unified handling of monitoring and simulation data series, and also for their comparison. On the state-plane, deviations between empirical and model-generated data can be observed and analysed that might otherwise remain hidden.
Abstract:
With advances in science and technology, computing and business intelligence (BI) systems are steadily becoming more complex, with an increasing variety of heterogeneous software and hardware components. They are thus becoming progressively more difficult to monitor, manage, and maintain. Traditional approaches to system management have largely relied on domain experts through a knowledge acquisition process that translates domain knowledge into operating rules and policies. This is widely acknowledged to be a cumbersome, labor-intensive, and error-prone process that, in addition, struggles to keep up with rapidly changing environments. Moreover, many traditional business systems deliver primarily pre-defined historic metrics for long-term strategic or mid-term tactical analysis, and lack the flexibility to support evolving metrics or data collection for real-time operational analysis. There is thus a pressing need for automatic and efficient approaches to monitoring and managing complex computing and BI systems. To realize the goal of autonomic management and enable self-management capabilities, we propose to mine the historical log data generated by computing and BI systems and automatically extract actionable patterns from it. This dissertation focuses on the development of data mining techniques to extract actionable patterns from various types of log data in computing and BI systems. Four key problems are studied: log data categorization and event summarization, leading-indicator identification, pattern prioritization by exploring link structures, and a tensor model for three-way log data. Case studies and comprehensive experiments on real application scenarios and datasets demonstrate the effectiveness of the proposed approaches.
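To give a flavour of the first of these problems, log categorization often starts by collapsing raw log lines into event templates, masking variable fields such as numbers and identifiers, and counting the resulting event types. A minimal sketch assuming a simple masking heuristic, not the dissertation's actual techniques:

    import re
    from collections import Counter

    def to_template(line: str) -> str:
        """Collapse a raw log line into an event template by masking
        variable fields (IP-like tokens, hex ids, numbers)."""
        line = re.sub(r"\b\d{1,3}(\.\d{1,3}){3}\b", "<IP>", line)
        line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)
        line = re.sub(r"\b\d+\b", "<NUM>", line)
        return line

    logs = [
        "conn from 10.0.0.5 port 443 ok",
        "conn from 10.0.0.9 port 8080 ok",
        "worker 17 restarted after error 0x1f",
        "worker 3 restarted after error 0x2a",
    ]
    counts = Counter(to_template(l) for l in logs)
    for template, n in counts.most_common():
        print(f"{n}x  {template}")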
Abstract:
Protecting confidential information from improper disclosure is a fundamental security goal. While encryption and access control are important tools for ensuring confidentiality, they cannot prevent an authorized system from leaking confidential information to its publicly observable outputs, whether inadvertently or maliciously. Hence, secure information flow aims to provide end-to-end control of information flow. Unfortunately, the traditionally-adopted policy of noninterference, which forbids all improper leakage, is often too restrictive. Theories of quantitative information flow address this issue by quantifying the amount of confidential information leaked by a system, with the goal of showing that it is intuitively "small" enough to be tolerated. Given such a theory, it is crucial to develop automated techniques for calculating the leakage in a system. This dissertation is concerned with program analysis for calculating the maximum leakage, or capacity, of confidential information in the context of deterministic systems and under three proposed entropy measures of information leakage: Shannon entropy leakage, min-entropy leakage, and g-leakage. In this context, it turns out that calculating the maximum leakage of a program reduces to counting the number of possible outputs that it can produce. The new approach introduced in this dissertation is to determine two-bit patterns, the relationships among pairs of bits in the output; for instance, we might determine that two bits must be unequal. By counting the number of solutions to the two-bit patterns, we obtain an upper bound on the number of possible outputs. Hence, the maximum leakage can be bounded. We first describe a straightforward computation of the two-bit patterns using an automated prover. We then show a more efficient implementation that uses an implication graph to represent the two-bit patterns. It efficiently constructs the graph through the use of an automated prover, random executions, STP counterexamples, and deductive closure. The effectiveness of our techniques, both in terms of efficiency and accuracy, is shown through a number of case studies found in recent literature.
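The reduction mentioned in the abstract admits a compact worked form: for a deterministic program, the maximum leakage equals log2 of the number of feasible outputs. A toy sketch for a secret space small enough to enumerate (the example program is hypothetical):

    import math

    def max_leakage_bits(program, secrets):
        """For a deterministic program, the maximum leakage (capacity)
        equals log2 of the number of feasible outputs."""
        outputs = {program(s) for s in secrets}
        return math.log2(len(outputs))

    # toy deterministic program over a 16-bit secret: reveals the top 4 bits
    def check(secret: int) -> int:
        return secret >> 12

    print(f"capacity: {max_leakage_bits(check, range(2 ** 16)):.1f} bits")  # 4.0

In the dissertation's setting the output set is far too large to enumerate directly, which is why the count is instead over-approximated by the number of solutions to the two-bit patterns.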
Abstract:
Persistent daily congestion has been increasing in recent years, particularly along major corridors during selected morning and evening periods. Certain segments of these roadways are often at or near capacity. However, a conventional predefined control strategy cannot accommodate demands that change over time, which motivates the various dynamic lane management strategies discussed in this thesis. These strategies include hard shoulder running, reversible HOV lanes, dynamic tolls, and variable speed limits. A mesoscopic agent-based DTA model is used to simulate the different strategies and scenarios. The analyses show that all strategies mitigate congestion in terms of average speed and average density. The largest improvements come from hard shoulder running and reversible HOV lanes, while the other two strategies provide more stable traffic flow. In terms of average speed and travel time, hard shoulder running is the most effective strategy for relieving traffic pressure on the congested I-270 corridor.
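Comparisons of this kind reduce to aggregating simulated detector output into corridor-level average speed and density per scenario. A trivial sketch of that aggregation; the scenario names mirror the abstract, but all numbers are hypothetical:

    import statistics as st

    # hypothetical per-scenario detector records: (speed mph, density veh/mi/lane)
    scenarios = {
        "baseline":              [(34, 62), (28, 71), (41, 55)],
        "hard_shoulder_running": [(48, 44), (45, 47), (52, 40)],
        "reversible_hov":        [(46, 46), (44, 49), (50, 42)],
    }
    for name, obs in scenarios.items():
        speed = st.mean(s for s, _ in obs)
        dens = st.mean(d for _, d in obs)
        print(f"{name:24s} avg speed {speed:5.1f} mph, avg density {dens:5.1f}")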
Abstract:
Code patterns, including programming patterns and design patterns, are good references for programming language feature improvement and software re-engineering. However, to our knowledge, no existing research has attempted to detect code patterns based on code clone detection technology. In this study, we build upon previous work and propose to detect and analyze code patterns in a collection of open-source projects using NiPAT technology. Because design patterns are most closely associated with object-oriented languages, we chose Java and Python projects for our study. The tool we use for detecting patterns is NiPAT, a pattern detection tool originally developed for the TXL programming language and based on the NiCad clone detector. We extend NiPAT to the Java and Python programming languages. We then identify all the patterns in the pattern report and classify them into several categories. At the end of the study, we analyze all the patterns and compare the differences between Java and Python patterns.
Abstract:
Nowadays, World Heritage Sites (WHS) face new challenges, partly due to changing tourism consumption patterns. As highlighted in a considerable number of studies, visits to these sites are largely justified by this prestigious classification, and motivations are closely associated with their cultural aspects and the quality of the overall environment (among others, Marujo et al., 2012). However, a diversity of tourist profiles has been underlined in the literature. Starting from the results of a previous study on cultural tourists' profiles, conducted during 2009 in the city of Évora, Portugal, we intend to compare those results with a recent survey of visitors to the same city. The recognition of Évora by UNESCO in 1986 as "World Heritage" has fostered not only the preservation of heritage but also the tourist promotion of the town. This study compares and examines tourist profiles with regard to tourist expenditure patterns in Évora. A total of 450 surveys were distributed in 2009 and, more recently, in 2015, the same number of surveys was collected. Chi-squared Automatic Interaction Detection (CHAID) was applied to model the consumption patterns of domestic and international visitors, based on socio-demographic characteristics, trip characteristics, length of stay, and the degree of satisfaction with pull factors. CHAID allowed a classification of the population into groups able to describe the dependent variable, average daily tourist expenditure. Results revealed different patterns of average daily expenditure between 2009 and 2015, even though preliminary results did not reveal significant variations in the socio-demographic and trip characteristics of the visitors' core profile. Local authorities should be aware of this changing expenditure behaviour of cultural visitors and should formulate strategies accordingly. Policy and managerial recommendations are discussed.
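For readers unfamiliar with the method, CHAID grows a tree by chi-squared tests on categorical predictors, splitting visitors into segments with distinct average daily expenditure. The sketch below uses a CART regression tree from scikit-learn as a stand-in to illustrate the segmentation idea; all variables and figures are hypothetical:

    import pandas as pd
    from sklearn.tree import DecisionTreeRegressor, export_text

    # hypothetical visitor records; daily_spend is the dependent variable
    visitors = pd.DataFrame({
        "international": [0, 1, 1, 0, 1, 0, 0, 1],
        "nights":        [1, 3, 2, 1, 4, 2, 1, 3],
        "satisfaction":  [3, 5, 4, 2, 5, 3, 4, 4],
        "daily_spend":   [35, 90, 70, 30, 110, 45, 40, 85],
    })
    X, y = visitors.drop(columns="daily_spend"), visitors["daily_spend"]
    tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=2).fit(X, y)
    print(export_text(tree, feature_names=list(X.columns)))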
Abstract:
This study investigates tourists' expenditure patterns in the city of Évora, a World Heritage Site (WHS) classified by UNESCO. Chi-squared automatic interaction detection (CHAID) was chosen, allowing the identification of distinct segments based on expenditure patterns. Visitors' expenditure patterns have proven to be a pertinent element for a broader understanding of visitor behaviour at cultural destinations, and were revealed to be increasing over the years studied.
Abstract:
Ship tracking systems allow maritime organizations concerned with safety at sea to obtain information on the current location and route of merchant vessels. Thanks to space technology, the geographical coverage of ship tracking platforms has increased significantly in recent years, from radar-based near-shore traffic monitoring towards a worldwide picture of the maritime traffic situation. The long-range tracking systems currently in operation allow the storage of ship position data over many years: a valuable source of knowledge about the shipping routes between different ocean regions. The outcome of this Master's project is a software prototype for estimating the most operated shipping route between any two geographical locations. The analysis is based on historical ship positions acquired with long-range tracking systems. The proposed approach applies a Genetic Algorithm to a training set of relevant ship positions extracted from the long-term tracking database of the European Maritime Safety Agency (EMSA). The analysis of some representative shipping routes is presented, and the quality of the results and their operational applications are assessed by a maritime safety expert.
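The overall approach can be caricatured in a few lines: candidate routes are waypoint sequences evolved by a genetic algorithm whose fitness rewards passing near many historical ship positions while penalizing route length. The sketch below is a toy illustration under those assumptions, not EMSA's actual prototype; the position "history", weights, and GA parameters are all invented:

    import math
    import random

    random.seed(42)
    # fake historical track density along a corridor between two ports
    HISTORY = [(random.gauss(i, 0.8), random.gauss(i * 0.5, 0.8))
               for i in range(11) for _ in range(30)]
    START, END = (0.0, 0.0), (10.0, 5.0)

    def fitness(waypoints):
        """Reward proximity to historical positions, penalize length."""
        route = [START] + waypoints + [END]
        near = sum(1 for p in HISTORY
                   if min(math.dist(p, w) for w in route) < 1.0)
        length = sum(math.dist(a, b) for a, b in zip(route, route[1:]))
        return near - 5.0 * length

    def mutate(waypoints):
        return [(x + random.gauss(0, 0.3), y + random.gauss(0, 0.3))
                for x, y in waypoints]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    pop = [[(random.uniform(0, 10), random.uniform(0, 5)) for _ in range(4)]
           for _ in range(40)]
    for _ in range(100):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(30)]
    print("best route:", [START] + max(pop, key=fitness) + [END])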
Abstract:
The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To efficiently use multi-core platforms for real-time systems, it is hence essential to tightly bound the interference when accessing shared resources. Although there has been much recent work in this area, a remaining key problem is to address the diversity of memory arbiters in the analysis to make it applicable to a wide range of systems. This work handles diverse arbiters by proposing a general framework to compute the maximum interference caused by the shared memory bus and its impact on the execution time of the tasks running on the cores, considering different bus arbiters. Our novel approach clearly demarcates the arbiter-dependent and independent stages in the analysis of these upper bounds. The arbiter-dependent phase takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Then, based on the availability of the bus, the arbiter-independent phase determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to the contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared by a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparison with a state-of-the-art TDM analysis approach and consistently showing a considerable reduction in maximum interference.
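As a concrete instance of the arbiter-dependent stage, a TDM arbiter yields a particularly simple availability model: in the worst case a memory request waits one full TDM round for its core's slot to come up again. A minimal sketch of the resulting coarse interference bound; the slot lengths and request counts are illustrative assumptions, not values from the paper:

    def tdm_worst_case_wait(n_cores: int, slot_len: int) -> int:
        """Worst case under TDM: the request arrives just after the start
        of the core's own slot, too late to be served in it, and must
        wait a full TDM round before its slot comes up again."""
        return n_cores * slot_len

    def interference_bound(n_requests: int, n_cores: int, slot_len: int) -> int:
        """Coarse bound on extra execution time from bus contention,
        assuming every memory request suffers the worst-case wait."""
        return n_requests * tdm_worst_case_wait(n_cores, slot_len)

    # e.g. 4 cores, 10-cycle slots, a task issuing 2000 memory requests
    print(interference_bound(2000, 4, 10), "cycles of worst-case interference")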
Abstract:
Texas State Department of Highways and Public Transportation, Austin
Abstract:
In China in particular, large planned special events (e.g., the Olympic Games) are viewed as great opportunities for economic development. Large numbers of visitors from other countries and provinces may be expected to attend such events, bringing in significant tourism dollars. However, as a direct result of such events, the transportation system is likely to face great challenges as travel demand increases beyond its original design capacity. Special events in central business districts (CBD) in particular further exacerbate traffic congestion on surrounding freeway segments near event locations. To manage the transportation system, it is necessary to plan and prepare for such special events, which requires predicting traffic conditions during the events. This dissertation presents a set of novel prototype models to forecast traffic volumes along freeway segments during special events. Almost all research to date has focused solely on traffic management techniques under special event conditions; these studies at most provided qualitative analyses, and easy-to-implement methods for quantitative analysis have been lacking. This dissertation presents a systematic approach, based separately on a univariate time series model with intervention analysis and a multivariate time series model with intervention analysis, for forecasting traffic volumes on freeway segments near an event location. A case study was carried out that involved analyzing and modeling historical time series data collected from loop-detector traffic monitoring stations on the Second and Third Ring Roads near the Beijing Workers' Stadium. The proposed time series models with expected interventions are found to efficiently provide reasonably accurate forecasts of traffic pattern changes. They may be used to support transportation planning and management for special events.
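A univariate version of this kind of intervention analysis can be sketched as an ARIMA model with an exogenous dummy marking the event window, as supported by statsmodels' SARIMAX. The series below is simulated, not the Beijing detector data, and the model order is an assumption:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(7)
    n = 200
    volume = 1000 + np.cumsum(rng.normal(0, 5, n))  # baseline traffic volumes
    pulse = np.zeros(n)
    pulse[150:160] = 1.0                            # dummy for the event window
    volume[150:160] += 300                          # simulated event-induced surge

    endog = pd.Series(volume, name="volume")
    exog = pd.DataFrame({"event": pulse})
    res = SARIMAX(endog, exog=exog, order=(1, 1, 1)).fit(disp=False)
    print(f"estimated event effect: {res.params['event']:.0f} vehicles")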