925 results for stochastic particle system


Abstract:

Surveillance networks are typically monitored by a few people viewing several monitors displaying the camera feeds, making it very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies: to recognise an event, people and objects must be tracked, and tracking also enhances the performance of tasks such as crowd analysis or human identification.

Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or because they are unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, making a separate detection stage unnecessary apart from an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects.

This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system.

A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, which in turn benefits from the particle filter's improved performance under the uncertainty arising from occlusion and noise. The system is evaluated using the ETISEO database.

The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. A middle fusion scheme is shown to yield the best results, demonstrating a significant improvement in performance compared to a system using either modality individually. Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore to improving security in areas under surveillance.
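The SCF itself is not specified in the abstract; purely for orientation, a minimal bootstrap (condensation-style) particle filter step for a single object could look like the sketch below, where the random-walk motion model and the likelihood callable are illustrative assumptions rather than the thesis's design.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, likelihood, motion_std=5.0):
    """One predict-weight-resample cycle of a bootstrap particle filter.

    particles : (N, 2) array of candidate object positions (x, y)
    weights   : (N,) normalised importance weights
    likelihood: callable mapping an (x, y) position to p(observation | state)
    """
    n = len(particles)
    # Predict: random-walk motion model (an assumption, not the SCF's model).
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: re-weight each particle by the measurement likelihood.
    weights = weights * np.array([likelihood(p) for p in particles])
    weights = weights / (weights.sum() + 1e-300)
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# The state estimate each frame is the weighted mean: weights @ particles.
```

A multi-object tracker such as the SCF described above would additionally need mode creation/deletion and identity maintenance, which the abstract delegates to the surrounding tracking system.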

Abstract:

Objective: The emergency medical system (EMS) can be defined as a comprehensive, coordinated and integrated system of care for patients suffering acute illness and injury. The aim of the present paper is to describe the evolution of the Queensland Emergency Medical System (QEMS) and to recommend a strategic national approach to EMS development. Methods: Following the formation of the Queensland Ambulance Service in 1991, a state EMS committee was formed. This committee led the development and approval of the cross-portfolio QEMS policy framework, which has resulted in dynamic policy development, system monitoring and evaluation. This framework is led by the Queensland Emergency Medical Services Advisory Committee. Results: There has been considerable progress in the development of all aspects of the EMS in Queensland. These developments have derived from the improved coordination and leadership that QEMS provides and have resulted in widespread satisfaction among both patients and stakeholders. Conclusions: The strategic approach outlined in the present paper offers a model for EMS arrangements throughout Australia. We propose that the Council of Australian Governments should require each state and territory to maintain an EMS committee. These state EMS committees should have a broad portfolio of responsibilities: they should provide leadership and direction to the development of the EMS and ensure coordination and quality of outcomes. A national EMS committee with broad representation and broad scope should be established to coordinate the national development of Australia's EMS.

Abstract:

This study considers the solution of a class of linear systems related to the fractional Poisson equation (FPE) $(-\nabla^2)^{\alpha/2}\phi = g(x,y)$ with nonhomogeneous boundary conditions on a bounded domain. A numerical approximation to the FPE is derived using a matrix representation of the Laplacian to generate a linear system of equations whose matrix $A$ is raised to the fractional power $\alpha/2$. The solution of the linear system then requires the action of the matrix function $f(A) = A^{-\alpha/2}$ on a vector $b$. For large, sparse, symmetric positive definite matrices, the Lanczos approximation generates $f(A)b \approx \beta_0 V_m f(T_m) e_1$. This method works well when both the analytic grade of $A$ with respect to $b$ and the residual for the linear system are sufficiently small. Memory constraints often require restarting the Lanczos decomposition; however, this is not straightforward in the context of matrix function approximation. In this paper, we use the ideas of thick restart and adaptive preconditioning for solving linear systems to improve the convergence of the Lanczos approximation. We give an error bound for the new method and illustrate its role in solving the FPE. Numerical results are provided to gauge the performance of the proposed method relative to exact analytic solutions.
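For orientation only, a minimal (unrestarted) Lanczos approximation of $f(A)b \approx \beta_0 V_m f(T_m) e_1$ with $f(A) = A^{-\alpha/2}$ might look as follows for symmetric positive definite $A$; it omits the thick-restart and adaptive preconditioning that the paper contributes, and the function name is an assumption.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos_matrix_function(A, b, m, alpha):
    """Approximate f(A) b, f(A) = A^(-alpha/2), via the m-step Lanczos
    relation f(A) b ~= beta0 * V_m f(T_m) e_1 (A symmetric positive definite)."""
    n = b.shape[0]
    V = np.zeros((n, m))
    diag = np.zeros(m)        # diagonal of T_m
    off = np.zeros(m - 1)     # off-diagonal of T_m
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= off[j - 1] * V[:, j - 1]
        diag[j] = V[:, j] @ w
        w -= diag[j] * V[:, j]
        if j < m - 1:
            off[j] = np.linalg.norm(w)
            if off[j] < 1e-12:                       # lucky breakdown
                V, diag, off = V[:, :j + 1], diag[:j + 1], off[:j]
                break
            V[:, j + 1] = w / off[j]
    # Evaluate f(T_m) e_1 via the eigendecomposition of the small tridiagonal T_m.
    evals, evecs = eigh_tridiagonal(diag, off)
    f_e1 = evecs @ (evals ** (-alpha / 2) * evecs[0, :])
    return beta0 * (V @ f_e1)
```

When memory limits $m$, this basic recurrence must be restarted, which is exactly the difficulty the thick-restart strategy above addresses.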

Abstract:

Particle emissions, volatility, and the concentration of reactive oxygen species (ROS) were investigated for a pre-Euro I compression ignition engine to study the potential health impacts of employing ethanol fumigation technology. Engine testing was performed in two separate experimental campaigns, with most testing performed at intermediate speed with four different load settings and various ethanol substitutions. A scanning mobility particle sizer (SMPS) was used to determine particle size distributions, a volatilization tandem differential mobility analyzer (V-TDMA) was used to explore particle volatility, and a new profluorescent nitroxide probe, BPEAnit, was used to investigate the potential toxicity of particles. The greatest particulate mass reduction was achieved with ethanol fumigation at full load, although ethanol fumigation also contributed to the formation of a nucleation mode. Ethanol fumigation increased the volatility of particles, either by coating the particles with organic material or by making extra organic material available as an external mixture. In addition, the particle-related ROS concentrations increased with ethanol fumigation and were associated with the formation of a nucleation mode. The smaller particles, the increased volatility, and the increase in potential particle toxicity with ethanol fumigation may present a substantial barrier to the uptake of fumigation technology using ethanol as a supplementary fuel.

Abstract:

The broad definition of sustainable development at the early stage of its introduction caused confusion and hesitation among local authorities and planning professionals, the main difficulty being experienced in employing loosely-defined principles of sustainable development when setting policies and goals. How this theory/rhetoric-practice gap could be filled is the theme of this study. Triple bottom line, one of the sustainability accounting approaches widely employed by governmental organisations, will be examined for its applicability to sustainable urban development policies. By incorporating triple bottom line considerations into environmental impact assessment techniques, a framework for a GIS-based decision support system will be introduced that helps decision-makers select policy options according to their economic, environmental and social impacts. In order to embrace sustainable urban development policy considerations, the relationship between urban form, travel patterns and socio-economic attributes must be clarified. This clarification, together with other inputs to the decision support system, will picture the holistic state of the urban setting in terms of sustainability. In this study, a grid-based indexing methodology will be employed to visualise the degree of compatibility of selected scenarios with the designated sustainable urban future. In addition, this tool will provide valuable knowledge about the spatial dimension of sustainable development, and fine detail about the possible impacts of urban development proposals, by employing disaggregated spatial data analysis (e.g. land-use, transportation, urban services, population density, pollution, etc.). The visualisation capacity of this tool will help decision-makers and other stakeholders compare and select among alternative future urban developments.
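No implementation is given in the abstract; as a hedged illustration of what grid-based indexing can mean, the sketch below scores each grid cell by a weighted combination of normalized indicator layers, with all indicator names, weights and data entirely made up.

```python
import numpy as np

def composite_index(layers, weights):
    """Combine per-cell indicator grids into one compatibility index.

    layers  : dict name -> 2D array over the same spatial grid
    weights : dict name -> relative importance (illustrative values)
    Each layer is min-max normalized to [0, 1] before weighting.
    """
    total = sum(weights.values())
    index = np.zeros(next(iter(layers.values())).shape)
    for name, grid in layers.items():
        lo, hi = grid.min(), grid.max()
        norm = (grid - lo) / (hi - lo) if hi > lo else np.zeros_like(grid)
        index += (weights[name] / total) * norm
    return index  # 0 = least, 1 = most compatible with the target urban future

# Illustrative indicator layers on a 100 x 100 grid (random stand-ins):
rng = np.random.default_rng(1)
layers = {k: rng.random((100, 100))
          for k in ("land_use_mix", "transit_access", "density")}
index = composite_index(layers,
                        {"land_use_mix": 1.0, "transit_access": 2.0, "density": 1.0})
```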

Abstract:

The selection criteria for contractor pre-qualification are characterized by the co-existence of both quantitative and qualitative data. The qualitative data are non-linear, uncertain and imprecise. An ideal decision support system for contractor pre-qualification should have the ability to handle both quantitative and qualitative data, and to map the complicated non-linear relationships among the selection criteria, such that rational and consistent decisions can be made. In this research paper, an artificial neural network model was developed to assist public clients in identifying suitable contractors for tendering. The pre-qualification criteria (variables) were identified for the model. One hundred and twelve real pre-qualification cases were collected from civil engineering projects in Hong Kong, and eighty-eight hypothetical pre-qualification cases were also generated according to the "if-then" rules used by professionals in the pre-qualification process. Each pre-qualification case consisted of input ratings for candidate contractors' attributes and their corresponding pre-qualification decisions. The training of the neural network model was accomplished using the developed program, in which a conjugate gradient descent algorithm was incorporated to improve the learning performance of the network. Cross-validation was applied to estimate the generalization errors based on "re-sampling" of training pairs. The results of the analysis comply fully with current practice among public developers in Hong Kong. The case studies show that the artificial neural network model is suitable for mapping the complicated non-linear relationship between contractors' attributes and their corresponding pre-qualification (or disqualification) decisions, and the model can be concluded to be an ideal alternative for performing the contractor pre-qualification task.
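The abstract names the ingredients (attribute ratings in, pre-qualify/disqualify decision out, conjugate-gradient training, cross-validation) without code; a minimal modern sketch, assuming illustrative toy data and scikit-learn in place of the authors' purpose-built trainer, could be:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in data: ratings (1-5) for contractor attributes such as financial
# standing, experience, past performance, resources and safety record.
X = rng.integers(1, 6, size=(200, 5)).astype(float)
y = (X.mean(axis=1) > 3.0).astype(int)  # 1 = pre-qualify (toy rule, not the paper's)

# A small feed-forward network; scikit-learn offers L-BFGS rather than the
# conjugate-gradient algorithm incorporated in the paper's custom program.
model = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                      max_iter=2000, random_state=0)

# Cross-validation to estimate generalization error, as the paper does.
scores = cross_val_score(model, X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```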

Abstract:

Network-based Intrusion Detection Systems (NIDSs) analyse network traffic to detect instances of malicious activity. Typically, this is only possible when the network traffic is accessible for analysis. With the growing use of Virtual Private Networks (VPNs) that encrypt network traffic, the NIDS can no longer access this crucial audit data. In this paper, we present an implementation and evaluation of the approach we proposed in Goh et al. (2009). It is based on Shamir's secret-sharing scheme and allows a NIDS to function normally in a VPN without any modifications and without compromising the confidentiality afforded by the VPN.
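The paper's protocol is not reproduced in the abstract; as background on the primitive it builds on, a minimal Shamir (k, n) threshold scheme over a prime field, splitting an integer secret into shares and reconstructing it by Lagrange interpolation at zero, can be sketched as follows (all parameters illustrative):

```python
import secrets

P = 2**127 - 1  # a Mersenne prime large enough for a 16-byte secret

def split(secret: int, n: int, k: int):
    """Split secret into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):  # evaluate the random degree-(k-1) polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(secret=1234567890, n=5, k=3)
assert reconstruct(shares[:3]) == 1234567890
```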

Abstract:

The two longitudinal case studies that make up this dissertation sought to explain and predict the relationship between usability and clinician acceptance of a health information system. The overall aim of the research was to determine what role usability plays in the acceptance or rejection of systems used by clinicians in a healthcare context. The focus was on the end users (the clinicians), rather than on the views of the system designers, the managers responsible for implementation, or the clients of the clinicians. A mixed-methods approach was adopted that drew on both qualitative and quantitative research methods. The study followed the implementation of a community health information system from its early beginnings to established practice. Users were drawn from different health service departments with distinctly different organisational cultures and attitudes to information and communication technology used in this context. The study provided evidence that a usability analysis in this context would not necessarily be valid when the users have prior reservations about acceptance. The initial training and post-implementation support were investigated, together with the nature of the clinicians themselves, to determine factors that may influence their attitudes. This research identified that acceptance of a system is not necessarily a measure of its quality, capability and usability; rather, it is influenced by the user's attitude, which is shaped by outside factors and by the nature and quality of training. The need to recognise the limitations of current methodologies for analysing usability and acceptance was explored to lay the foundations for further research.

Abstract:

This paper presents a reliability-based reconfiguration methodology for power distribution systems. Probabilistic reliability models of the system components are considered, and the Monte Carlo method is used to evaluate the reliability of the distribution system. The reconfiguration is aimed at maximizing the reliability of the power supplied to the customers. A binary particle swarm optimization (BPSO) algorithm is used as a tool to determine the optimal configuration of the sectionalizing and tie switches in the system. The proposed methodology is applied to a modified IEEE 13-bus distribution system.
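As a generic illustration only (the abstract gives no algorithmic detail), a textbook binary PSO with the sigmoid update rule might be sketched as below; the bit encoding of switch states is an assumption, and the toy fitness function stands in for the Monte Carlo reliability evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bpso(fitness, n_bits, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Generic binary PSO maximizing `fitness` over bit-strings.

    Each bit could encode the open/closed state of a sectionalizing or tie
    switch; `fitness` would then be the Monte Carlo reliability estimate
    for that configuration (not implemented here).
    """
    x = rng.integers(0, 2, size=(n_particles, n_bits))
    v = np.zeros((n_particles, n_bits))
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(v.shape), rng.random(v.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Sigmoid rule: each bit is set with probability sigmoid(velocity).
        x = (rng.random(v.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest

# Toy fitness: prefer configurations with exactly 4 closed switches.
best = bpso(lambda bits: -abs(bits.sum() - 4), n_bits=10)
```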

Abstract:

The World Wide Web has become a medium for people to share information. People use Web-based collaborative tools such as question answering (QA) portals, blogs/forums, email and instant messaging to acquire information and to form online communities. In an online QA portal, a user asks a question and other users can provide answers based on their knowledge, with a question usually being answered by many users. It can become overwhelming and time- and resource-consuming for a user to read all of the answers provided for a given question, so a mechanism is needed to rank the answers so that users can focus on reading only the good-quality ones. The majority of online QA systems use user feedback to rank answers: the user who asked the question can decide on the best answer, and other users who did not participate in answering the question can also vote to determine it. However, ranking the best answer via this collaborative method is time-consuming and requires the ongoing involvement of users to provide the needed feedback.

The objective of this research is to discover a way to automatically recommend the best answer, as part of a ranked list of answers for a posted question, without the need for user feedback. The proposed approach combines a non-content-based reputation method and a content-based method to solve the problem of recommending the best answer to the user who posted the question. The non-content-based method assigns each user two types of reputation score reflecting their standing in the QA portal: a local reputation score and a global reputation score. The local reputation score plays an important role in deciding the reputation level of a user within the category in which the question is asked, while the global reputation score indicates the prestige of a user across all of the categories in the QA system. Because users can cheat, for example by awarding the best answer to a friend regardless of the answer's quality, a content-based method for determining the quality of a given answer is proposed alongside the reputation method: answers for a question from different users are compared with an ideal (or expert) answer using traditional Information Retrieval and Natural Language Processing techniques, and each answer is assigned a content score according to how well it matches the ideal answer.

To evaluate the performance of the proposed methods, each recommended best answer is compared with the best answer determined by one of the most popular link analysis methods, Hyperlink-Induced Topic Search (HITS). The proposed methods yield high accuracy, as shown by Kendall and Spearman correlation scores. The reputation method outperforms the HITS method in recommending the best answer, and combining the reputation score with the content score improves overall performance, as measured by Top-n match scores.
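As a sketch of the content-based half only (the reputation scores are not reproduced here), comparing candidate answers against an ideal answer with TF-IDF cosine similarity, one common instance of the traditional IR techniques mentioned, might look like this with toy data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ideal = "Restart the router, then renew the DHCP lease from the OS network settings."
answers = [
    "Just restart your router and renew the DHCP lease afterwards.",
    "Buy a new computer.",
    "Renewing the DHCP lease in your network settings usually fixes this.",
]

# Fit TF-IDF on the ideal answer plus all candidates, then rank candidates
# by cosine similarity to the ideal (expert) answer.
vectors = TfidfVectorizer(stop_words="english").fit_transform([ideal] + answers)
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()
for score, answer in sorted(zip(scores, answers), reverse=True):
    print(f"{score:.2f}  {answer}")
```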

Abstract:

Financial processes may possess long memory, and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects jumps in the sample paths. The presence of long memory, on the other hand, contradicts the efficient market hypothesis and is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected in stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of five states of Australia are also found to possess long memory; for these series, heavy tails are also pronounced in their probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel of the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method, and are applied to the exchange rates and electricity prices of Part I with the aim of confirming the long-range dependence established by MF-DFA.

The third part of the thesis applies the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in two- and three-dimensional spaces of data sets, and then use cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes that may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia, and comparisons with the results obtained from R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second order, appear to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
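MF-DFA is not spelled out in the abstract; a minimal standard DFA, the q = 2 special case it generalises (MF-DFA replaces the second-moment average below with q-th-order moments and varies q), could be sketched as:

```python
import numpy as np

def dfa(series, scales):
    """Standard DFA (q = 2): returns the fluctuation F(s) for each scale s.

    A slope of log F(s) vs log s near 0.5 suggests no memory;
    a slope above 0.5 suggests long-range dependence.
    """
    profile = np.cumsum(series - np.mean(series))  # integrated, mean-centered
    F = []
    for s in scales:
        n_seg = len(profile) // s
        sq_res = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # first-order detrend
            sq_res.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq_res)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.normal(size=4096)                        # white noise: expect slope ~ 0.5
scales = np.unique(np.logspace(2, 9, 12, base=2).astype(int))
H = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print("estimated scaling exponent:", round(H, 2))
```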

Abstract:

Motor vehicles are a major source of gaseous and particulate matter pollution in urban areas, particularly of ultrafine particles (diameters < 0.1 µm). Exposure to particulate matter has been found to be associated with serious health effects, including respiratory and cardiovascular disease and mortality. Particle emissions generated by motor vehicles span a very broad size range (from around 0.003 to 10 µm) and are measured as different subsets of particle mass concentration or as particle number count. However, scientific challenges remain in analysing and interpreting the large data sets on motor vehicle emission factors, and no understanding is available of the application of different particle metrics as a basis for air quality regulation. To date, a comprehensive inventory covering the broad size range of particles emitted by motor vehicles, and which includes particle number, does not exist anywhere in the world.

This thesis covers research related to four important and interrelated aspects of particulate matter generated by motor vehicle fleets: the derivation of suitable particle emission factors for use in transport modelling and health impact assessments; the quantification of motor vehicle particle emission inventories; the investigation of modality within particle size distributions as a potential basis for developing air quality regulation; and the review and synthesis of current knowledge on ultrafine particles as it relates to motor vehicles. These aspects are then applied to the quantification, control and management of motor vehicle particle emissions.

In order to quantify emissions in terms of a comprehensive inventory covering the full size range of particles emitted by motor vehicle fleets, it was necessary to derive a suitable set of particle emission factors for different vehicle and road type combinations for particle number, particle volume, PM1, PM2.5 and PM10 (the mass concentrations of particles with aerodynamic diameters < 1 µm, < 2.5 µm and < 10 µm, respectively). The very large data set of emission factors analysed in this study was sourced from measurement studies conducted in developed countries, and hence the derived set of emission factors is suitable for preparing inventories in other urban regions of the developed world. These emission factors are particularly useful for regions that lack the measurement data needed to derive emission factors, or where experimental data are available but of insufficient scope.

The comprehensive particle emissions inventory presented in this thesis is the first published inventory of tailpipe particle emissions prepared for a motor vehicle fleet that quantifies, based on measurement data, the full size range of particles emitted by vehicles, in terms of both particle number and different particle mass size fractions. It was developed for the urban South-East Queensland fleet in Australia, and included testing the particle emission implications of future scenarios for different passenger and freight travel demand.

The thesis also presents evidence of the usefulness of examining modality within particle size distributions as a basis for developing air quality regulations, and finds evidence to support the relevance of introducing a new PM1 mass ambient air quality standard for the majority of environments worldwide. The study found that a combination of PM1 and PM10 standards is likely to be a more discerning and suitable set of ambient air quality standards for controlling particles emitted from combustion and mechanically-generated sources, such as motor vehicles, than the current mass standards of PM2.5 and PM10.

The study also reviewed and synthesized existing knowledge on ultrafine particles, with a specific focus on those originating from motor vehicles. It found that motor vehicles are significant contributors to both air pollution and ultrafine particles in urban areas, and that a standardized measurement procedure is not currently available for ultrafine particles. The review found that discrepancies exist between the outcomes of different instruments used to measure ultrafine particles; that few data are available on ultrafine particle chemistry and composition, long-term monitoring, or the characterization of their spatial and temporal distribution in urban areas; and that no inventories for particle number are available for motor vehicle fleets. This knowledge is critical for epidemiological studies and exposure-response assessment. The review concluded with the recommendation that ultrafine particles in populated urban areas be considered a likely target for future air quality regulation based on particle number, due to their potential impacts on the environment.

The research in this PhD thesis successfully integrated the elements needed to quantify and manage motor vehicle fleet emissions, and its novelty lies in combining expertise from two distinct disciplines: aerosol science and transport modelling. The new knowledge and concepts developed in this research provide previously unavailable data and methods that can be used to develop comprehensive, size-resolved inventories of motor vehicle particle emissions, and air quality regulations to control particle emissions to protect the health and well-being of current and future generations.
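The inventory itself rests on measured emission factors, but its core arithmetic, emissions = emission factor × vehicle kilometres travelled, summed over vehicle/road combinations, can be shown in a few lines; every number below is a placeholder, not a derived factor from the thesis:

```python
# Illustrative only: particle-number emission factors (particles per km) and
# annual vehicle kilometres travelled (VKT), keyed by (vehicle type, road type).
ef_particles_per_km = {
    ("passenger", "urban"):    2.0e14,
    ("passenger", "highway"):  1.2e14,
    ("heavy_duty", "urban"):   9.0e14,
    ("heavy_duty", "highway"): 6.5e14,
}
vkt_per_year = {
    ("passenger", "urban"):    1.5e10,
    ("passenger", "highway"):  0.8e10,
    ("heavy_duty", "urban"):   0.2e10,
    ("heavy_duty", "highway"): 0.3e10,
}

# Fleet inventory: sum EF x activity over all vehicle/road combinations.
total = sum(ef_particles_per_km[k] * vkt_per_year[k] for k in ef_particles_per_km)
print(f"annual fleet particle number emissions: {total:.2e} particles")
```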

Abstract:

Credentials are a salient form of cultural capital, and if a student's learning and productions are not assessed, they are invisible in current social systems of education and employment. In this field, invisible equals non-existent. This paper arises from the context of an alternative education institution where conventional educational assessment techniques currently fail to recognise the creativity and skills of a cohort of marginalised young people. To facilitate a new assessment model, an electronic portfolio system (EPS) is being developed and trialled to capture evidence of students' learning and their productions. In so doing, a dynamic system of arranging, exhibiting, exploiting and disseminating assessment data in the form of coherent, meaningful and valuable reports will be maintained. The paper investigates the notion of assessing the development of creative thinking and skills by means of a computerised system that operates in an area described as the efield. A model of the efield is delineated and explained as a zone existing within the internet where free users exploit the cloud and cultivate social and cultural capital. Drawing largely on sociocultural theory and Bourdieu's concepts of field, habitus and capitals, the article positions the efield as a potentially productive instrument in assessment-for-learning practices. An important aspect of the dynamics of this instrument is the recognition of teachers as learners, seen as an integral factor in the sociocultural approach to assessment for learning that will be deployed with the EPS. What actually takes place is argued to be assessment for learning as a field of exchange. The model produced in this research aims to deliver visibility and recognition through an engaging instrument that will enhance the prospects of marginalised young people and shift the paradigm for assessment in a creative world.