188 results for Multi-dimensional scaling
Abstract:
Sustainable urban development, a major issue at the global scale, will become even more relevant given population growth predictions in developed and developing countries. Societal and international recognition of sustainability concerns led to the development of specific tools and procedures, known as sustainability assessments/appraisals (SA). Their effectiveness, however, has been questioned, considering that global quality-of-life indicators have worsened since their introduction, prompting a re-thinking of SA instruments. More precisely, Strategic Environmental Assessment (SEA), a tool introduced in the European context to evaluate policies, plans, and programmes (PPPs), is being reconsidered because of several features that seem to limit its effectiveness. Over time, SEA has evolved in response to external and internal factors dealing with technical, procedural, planning and governance systems, involving a paradigm shift from EIA-based SEAs (first-generation protocols) towards more integrated approaches (second-generation ones). Changes affecting SEA are formalised through legislation in each Member State, to guide institutions at the regional and local level. Defining SEA effectiveness is quite difficult: its capacity-building process appears far from its conclusion, if indeed any definitive version can be conceptualized. In this paper, we consider some European nations with different planning systems and SA traditions. After identifying a set of analytical criteria, a multi-dimensional cluster analysis is developed on selected case studies to outline current weaknesses.
Abstract:
Video game play is a popular entertainment choice, yet we have a limited understanding of the potential wellbeing benefits associated with recreational play. An online survey (final sample n = 297) addresses this by investigating how the player experience related to wellbeing. The impact of amount of play, game genre, mode of play (social or solitary) and the psychological experience of play (flow and need satisfaction) on a multi-dimensional measure of wellbeing (emotional, psychological and social) was examined via hierarchical regression. Age, gender, the play of casual games compared to shooters, and in-game experiences of flow, autonomy and relatedness were associated with increases in dimensions of wellbeing.
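For illustration only, the sketch below runs a hierarchical (blockwise) regression on synthetic data, entering predictor blocks in steps and reporting the change in R² at each step; the variable names, blocks and data are assumptions, not the survey's materials.

```python
# Illustrative only: hierarchical (blockwise) regression, entering predictor
# blocks in steps and reporting the change in R^2. Variables and data are
# made up, not the survey's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 297
age = rng.integers(18, 60, n)
hours_play = rng.gamma(2.0, 4.0, n)
flow = rng.normal(0, 1, n)
autonomy = rng.normal(0, 1, n)
wellbeing = 0.02 * age + 0.3 * flow + 0.25 * autonomy + rng.normal(0, 1, n)

blocks = [
    ("demographics",          np.column_stack([age])),
    ("+ amount of play",      np.column_stack([age, hours_play])),
    ("+ in-game experience",  np.column_stack([age, hours_play, flow, autonomy])),
]
prev_r2 = 0.0
for name, X in blocks:
    r2 = sm.OLS(wellbeing, sm.add_constant(X)).fit().rsquared
    print(f"{name:22s} R^2 = {r2:.3f}  (delta R^2 = {r2 - prev_r2:.3f})")
    prev_r2 = r2
```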
Abstract:
Background: Project archives are becoming increasingly large and complex. On construction projects in particular, the increasing amount of information and the increasing complexity of its structure make searching and exploring information in the project archive challenging and time-consuming. Methods: This research investigates a query-driven approach that represents new forms of contextual information to help users understand the set of documents resulting from queries of construction project archives. Specifically, this research extends query-driven interface research by representing three types of contextual information: (1) the temporal context is represented in the form of a timeline to show when each document was created; (2) the search-relevance context shows exactly which of the entered keywords matched each document; and (3) the usage context shows which project participants have accessed or modified a file. Results: We implemented and tested these ideas within a prototype query-driven interface we call VisArchive. VisArchive employs a combination of multi-scale and multi-dimensional timelines, color-coded stacked bar charts, additional supporting visual cues and filters to support searching and exploring historical project archives. The timeline-based interface integrates three interactive timelines as focus + context visualizations. Conclusions: The feasibility of using these visual design principles is tested in two types of project archives: searching construction project archives of an educational building project and tracking of software defects in the Mozilla Thunderbird project. These case studies demonstrate the applicability, usefulness and generality of the design principles implemented.
Abstract:
Mobile applications are being increasingly deployed on a massive scale in various mobile sensor grid database systems. Given the limited resources of mobile devices, processing the huge number of queries from mobile users against distributed sensor grid databases becomes a critical problem for such mobile systems. While the fundamental semantic cache technique has been investigated for query optimization in sensor grid database systems, the problem remains difficult because more realistic multi-dimensional constraints have not been considered in existing methods. To address this, a new semantic cache scheme is presented in this paper for location-dependent data queries in distributed sensor grid database systems. It considers multi-dimensional constraints or factors in a unified cost model architecture, determines the parameters of the cost model by using the concept of Nash equilibrium from game theory, and makes semantic cache decisions from the established cost model. Scenarios involving the three factors of semantics, time and location are investigated as special cases, which improve on existing methods. Experiments are conducted to demonstrate the proposed semantic cache scheme for distributed sensor grid database systems.
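The paper's cost model is not given in the abstract; purely as an illustration of the idea, the sketch below scores a cached result against a new location-dependent query with a weighted sum of semantic, temporal and spatial distances. The weights, normalisation constants and function names are hypothetical.

```python
# Illustrative sketch only: a weighted-sum cost for deciding whether a cached
# result can answer a new location-dependent query. Weights and distance
# measures are hypothetical, not from the paper.
from dataclasses import dataclass
import math

@dataclass
class CacheEntry:
    keywords: set        # semantic description of the cached result
    timestamp: float     # when the result was cached (seconds)
    location: tuple      # (x, y) position the result was computed for

def reuse_cost(entry: CacheEntry, query_keywords: set, now: float,
               query_location: tuple, w_sem=0.5, w_time=0.3, w_loc=0.2):
    """Lower cost means the cached entry is a better answer for the query."""
    # Semantic distance: 1 - Jaccard overlap of keyword sets.
    union = max(len(entry.keywords | query_keywords), 1)
    d_sem = 1.0 - len(entry.keywords & query_keywords) / union
    # Temporal distance: staleness normalised by a 10-minute horizon (assumed).
    d_time = min((now - entry.timestamp) / 600.0, 1.0)
    # Spatial distance: Euclidean distance normalised by a 1 km radius (assumed).
    d_loc = min(math.dist(entry.location, query_location) / 1000.0, 1.0)
    return w_sem * d_sem + w_time * d_time + w_loc * d_loc

# A cache decision could compare the best reuse_cost against a threshold;
# in the paper the model parameters come from a Nash-equilibrium analysis.
```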
Abstract:
This paper presents Multi-Step A* (MSA*), a search algorithm based on A* for multi-objective 4D vehicle motion planning (three spatial dimensions and one time dimension). The research is principally motivated by the need for offline and online motion planning for autonomous Unmanned Aerial Vehicles (UAVs). For UAVs operating in large, dynamic and uncertain 4D environments, the motion plan consists of a sequence of connected linear tracks (or trajectory segments). The track angle and velocity are important parameters that are often restricted by assumptions and grid geometry in conventional motion planners. Many existing planners also fail to incorporate multiple decision criteria and constraints such as wind, fuel, dynamic obstacles and the rules of the air. It is shown that MSA* finds a cost-optimal solution using variable-length, angle and velocity trajectory segments. These segments are approximated with a grid-based cell sequence that provides an inherent tolerance to uncertainty. Computational efficiency is achieved by using variable successor operators to create a multi-resolution, memory-efficient lattice sampling structure. Simulation studies on the UAV flight planning problem show that MSA* meets the time constraints of online replanning and finds paths of equivalent cost, but in a quarter of the time (on average) of vector neighbourhood-based A*.
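As background only, the sketch below shows a generic A* search with a pluggable successor operator, the kind of primitive MSA* generalises with variable-length, variable-angle successors; it is not MSA* itself, and the 2D grid example merely stands in for the 4D lattice.

```python
# Minimal A* sketch with a pluggable (variable) successor operator.
# This illustrates the generic search MSA* builds on; it is not MSA* itself.
import heapq

def a_star(start, goal, successors, heuristic):
    """successors(state) yields (next_state, step_cost); heuristic(s, goal) >= 0."""
    open_heap = [(heuristic(start, goal), 0.0, start)]
    best_g = {start: 0.0}
    parent = {start: None}
    while open_heap:
        f, g, state = heapq.heappop(open_heap)
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path)), g
        if g > best_g.get(state, float("inf")):
            continue  # stale heap entry
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                parent[nxt] = state
                heapq.heappush(open_heap, (g2 + heuristic(nxt, goal), g2, nxt))
    return None, float("inf")

# Example on a 2D grid with 8-connected moves (a stand-in for a 4D lattice):
def grid_successors(s):
    x, y = s
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                yield (x + dx, y + dy), (dx * dx + dy * dy) ** 0.5

path, cost = a_star((0, 0), (5, 3), grid_successors,
                    lambda s, g: ((s[0] - g[0]) ** 2 + (s[1] - g[1]) ** 2) ** 0.5)
```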
Abstract:
Finding an appropriate linking method to connect element types of different dimensions in a single finite element model is a key issue in multi-scale modeling. This paper presents a mixed-dimensional coupling method that uses multi-point constraint equations, derived by equating the work done on either side of the interface connecting beam elements and shell elements, to construct a multi-scale finite element model. A typical steel truss frame structure is selected as a case example; a reduced-scale specimen of this truss section is studied in the laboratory to measure its dynamic and static behavior in the global truss and in local welded details, while different analytical models are developed for numerical simulation. Comparison of the calculated dynamic and static responses among the different numerical models, together with their good agreement with experimental results, indicates that the proposed multi-scale model is efficient and accurate.
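The constraint equations themselves are not given in the abstract; purely as a schematic illustration (the coupling matrices and nodal quantities below are assumed notation, not the paper's), the work-equivalence idea can be written as:

```latex
% Schematic only: equate the work of the beam end resultants with the work of
% the shell interface nodal forces, which yields multi-point constraints tying
% the beam node DOFs to the shell interface DOFs (notation assumed here).
W_{\mathrm{beam}}
  = \mathbf{F}_b\cdot\mathbf{u}_b + \mathbf{M}_b\cdot\boldsymbol{\theta}_b
  = \sum_{i=1}^{n}\mathbf{f}_i\cdot\mathbf{u}_i
  = W_{\mathrm{shell}}
\quad\Longrightarrow\quad
\mathbf{u}_b = \sum_{i=1}^{n}\mathbf{C}_i\,\mathbf{u}_i,
\qquad
\boldsymbol{\theta}_b = \sum_{i=1}^{n}\mathbf{D}_i\,\mathbf{u}_i .
```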
Abstract:
The purpose of this paper is to introduce the concept of hydraulic damage and its numerical integration. Unlike common phenomenological continuum damage mechanics approaches, the procedure introduced in this paper relies on mature concepts of homogenization, linear fracture mechanics, and thermodynamics. The model is applied to the problem of fault reactivation within resource reservoirs. The results show that the propagation of weaknesses is strongly driven by contrasts of properties in porous media; in particular, it is affected by the fracture toughness of the host rocks. Hydraulic damage is diffuse when it takes place within extended geological units and localized at interfaces and faults.
Abstract:
The identification of attractors is one of the key tasks in studies of neurobiological coordination from a dynamical systems perspective, with a considerable body of literature resulting from this task. However, with regard to the movement models typically investigated, the overwhelming majority of actions studied previously belong to the class of continuous, rhythmical movements. In contrast, very few studies have investigated coordination of discrete movements, particularly multi-articular discrete movements. In the present study, we investigated phase transition behavior in a basketball throwing task in which participants were instructed to shoot at the basket from different distances. Adopting the ubiquitous scaling paradigm, throwing distance was manipulated as a candidate control parameter. Using a cluster analysis approach, clear phase transitions between different movement patterns were observed in the performance of only two of eight participants. The remaining participants used a single movement pattern and varied it according to throwing distance, thereby exhibiting hysteresis effects. Results suggested that, in movement models involving many biomechanical degrees of freedom in degenerate systems, greater movement variation across individuals is available for exploitation. This observation stands in contrast to the movement variation typically observed in studies using more constrained bi-manual movement models. This degenerate system behavior provides new insights and poses fresh challenges to the dynamical systems theoretical approach, requiring further research beyond conventional movement models.
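For illustration only, the sketch below clusters per-throw kinematic feature vectors and inspects how cluster membership changes with throwing distance, the kind of analysis that can expose a phase transition; the features, synthetic data and use of k-means are assumptions, not the study's exact procedure.

```python
# Illustrative only: cluster per-throw kinematic features and look for a switch
# in cluster membership as throwing distance increases (a candidate phase
# transition). Features, data and the use of k-means are assumptions.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
distances = np.repeat(np.arange(2.0, 7.0, 0.5), 5)   # throwing distances (m), 5 throws each
# Synthetic feature vectors (e.g. release angle, elbow ROM, trunk lean) per throw.
features = np.column_stack([
    50 + 5 * (distances > 4.5) + rng.normal(0, 1, distances.size),
    90 - 10 * (distances > 4.5) + rng.normal(0, 2, distances.size),
    10 + 3 * (distances > 4.5) + rng.normal(0, 1, distances.size),
])
# Standardise, then split throws into two candidate movement patterns.
z = (features - features.mean(0)) / features.std(0)
_, labels = kmeans2(z, 2, minit='++', seed=0)
# A phase transition shows up as cluster membership flipping beyond some distance.
for d in np.unique(distances):
    print(f"{d:.1f} m -> pattern {np.bincount(labels[distances == d]).argmax()}")
```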
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory; for these series, heavy tails are also pronounced in their probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the AMEX stock prices, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA.

The third part of the thesis applies the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on second-order moments, seem to underestimate the long-memory dynamics of the process. This highlights the need for and usefulness of fractal methods in modelling non-Gaussian financial processes with long memory.
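As a minimal illustration of the detection technique named in Part I, the sketch below implements a simplified MF-DFA and reads off the generalised Hurst exponent h(2); the window sizes, detrending order and test series are choices made here, not the thesis's settings.

```python
# Minimal MF-DFA sketch (q-th-order detrended fluctuation analysis).
# Illustrative implementation only; scales, detrending order and the example
# series are assumptions, not the thesis's exact configuration.
import numpy as np

def mfdfa(x, scales, q_values, poly_order=1):
    """Return F_q(s): generalised fluctuation functions of the profile of x."""
    profile = np.cumsum(x - np.mean(x))          # integrated (profile) series
    F = np.zeros((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        rms = []
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, poly_order), t)
            rms.append(np.mean((seg - trend) ** 2))   # detrended variance
        rms = np.asarray(rms)
        for i, q in enumerate(q_values):
            if q == 0:
                F[i, j] = np.exp(0.5 * np.mean(np.log(rms)))
            else:
                F[i, j] = np.mean(rms ** (q / 2.0)) ** (1.0 / q)
    return F

# The generalised Hurst exponent h(q) is the slope of log F_q(s) vs log s;
# q = 2 recovers standard DFA, and h(2) > 0.5 indicates long memory.
x = np.random.default_rng(1).standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256, 512])
F = mfdfa(x, scales, q_values=[-2, 0, 2])
h2 = np.polyfit(np.log(scales), np.log(F[2]), 1)[0]
print(f"h(2) ~ {h2:.2f}")
```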
Abstract:
Unmanned Aerial Vehicles (UAVs) are emerging as an ideal platform for a wide range of civil applications such as disaster monitoring, atmospheric observation and outback delivery. However, the operation of UAVs is currently restricted to specially segregated regions of airspace outside of the National Airspace System (NAS). Mission Flight Planning (MFP) is an integral part of UAV operation that addresses some of the requirements (such as safety and the rules of the air) of integrating UAVs in the NAS. Automated MFP is a key enabler for a number of UAV operating scenarios as it aids in increasing the level of onboard autonomy. For example, onboard MFP is required to ensure continued conformance with the NAS integration requirements when there is an outage in the communications link.

MFP is a motion planning task concerned with finding a path between a designated start waypoint and goal waypoint. This path is described with a sequence of 4-Dimensional (4D) waypoints (three spatial and one time dimension) or, equivalently, with a sequence of trajectory segments (or tracks). It is necessary to consider the time dimension as the UAV operates in a dynamic environment. Existing methods for generic motion planning, UAV motion planning and general vehicle motion planning cannot adequately address the requirements of MFP. The flight plan needs to optimise for multiple decision objectives, including mission safety objectives, the rules of the air and mission efficiency objectives. Online (in-flight) replanning capability is needed as the UAV operates in a large, dynamic and uncertain outdoor environment.

This thesis derives a multi-objective 4D search algorithm entitled Multi-Step A* (MSA*), based on the seminal A* search algorithm. MSA* is proven to find the optimal (least-cost) path given a variable successor operator (which enables arbitrary track angle and track velocity resolution). Furthermore, it is shown to be of comparable complexity to multi-objective, vector neighbourhood based A* (Vector A*, an extension of A*). A variable successor operator enables the imposition of a multi-resolution lattice structure on the search space (which results in fewer search nodes). Unlike cell decomposition based methods, soundness is guaranteed with multi-resolution MSA*. MSA* is demonstrated through Monte Carlo simulations to be computationally efficient. It is shown that multi-resolution, lattice-based MSA* finds paths of equivalent cost (less than 0.5% difference) to Vector A* (the benchmark) in a third of the computation time (on average). This is the first contribution of the research.

The second contribution is the discovery of the additive consistency property for planning with multiple decision objectives. Additive consistency ensures that the planner is not biased (which would result in a suboptimal path) by ensuring that the cost of traversing a track using one step equals that of traversing the same track using multiple steps (see the schematic statement below). MSA* mitigates uncertainty through online replanning, Multi-Criteria Decision Making (MCDM) and tolerance. Each trajectory segment is modeled with a cell sequence that completely encloses the trajectory segment. The tolerance, measured as the minimum distance between the track and cell boundaries, is the third major contribution. Even though MSA* is demonstrated for UAV MFP, it is extensible to other 4D vehicle motion planning applications. Finally, the research proposes a self-scheduling replanning architecture for MFP.
This architecture replicates the decision strategies of human experts to meet the time constraints of online replanning. Based on a feedback loop, the proposed architecture switches between fast, near-optimal planning and optimal planning to minimise the need for hold manoeuvres. The derived MFP framework is original and shown, through extensive verification and validation, to satisfy the requirements of UAV MFP. As MFP is an enabling factor for operation of UAVs in the NAS, the presented work is both original and significant.
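As a schematic restatement (the notation is assumed here, not taken from the thesis), additive consistency for each objective k requires that splitting a track from a to c at any intermediate point b leaves its cost unchanged:

```latex
% Additive consistency (schematic, assumed notation): for every objective k and
% any intermediate point b on the track from a to c,
c_k(a \rightarrow c) \;=\; c_k(a \rightarrow b) \;+\; c_k(b \rightarrow c).
```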
Abstract:
This paper presents the application of advanced optimization techniques to an unmanned aerial system mission path planning system (MPPS) using multi-objective evolutionary algorithms (MOEAs). Two types of multi-objective optimizers are compared: the MOEA non-dominated sorting genetic algorithm II (NSGA-II) and a hybrid-game strategy are implemented to produce a set of optimal collision-free trajectories in a three-dimensional environment. The resulting trajectories on a three-dimensional terrain are collision-free and are represented using Bézier spline curves from the start position to the target, and then from the target back to the start position or to different positions, with altitude constraints. The efficiency of the two optimization methods is compared in terms of computational cost and design quality. Numerical results show the benefits of adding a hybrid-game strategy to an MOEA for an MPPS.
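As a small illustration of the trajectory representation (not the authors' implementation), the sketch below evaluates a 3D Bézier segment from a set of made-up control waypoints:

```python
# Illustrative only: evaluating a cubic Bézier segment in 3D from four control
# points, the kind of primitive used to represent smooth collision-free tracks.
# The control points here are made-up waypoints, not from the paper.
import numpy as np
from math import comb

def bezier(control_points, n_samples=50):
    """Evaluate a Bézier curve of arbitrary degree via its Bernstein form."""
    P = np.asarray(control_points, dtype=float)       # shape (degree+1, 3)
    n = P.shape[0] - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    curve = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * P[i] for i in range(n + 1))
    return curve                                      # shape (n_samples, 3)

# Start -> target segment with two interior control points shaping the altitude profile.
segment = bezier([(0, 0, 100), (200, 50, 180), (400, 150, 180), (600, 200, 120)])
print(segment[0], segment[-1])   # endpoints coincide with the first/last control points
```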
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission and storage costs. This decomposition structure is based on analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision.

To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented.

The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
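For illustration only, the sketch below performs a 2-D wavelet decomposition with PyWavelets and fits a generalized Gaussian shape parameter to one detail subband by simple moment matching; the wavelet, decomposition level and estimator are assumptions, not the thesis's exact design.

```python
# Illustrative only: 2-D wavelet decomposition of an image and a crude
# generalized-Gaussian shape estimate for one detail subband via moment
# matching. Wavelet choice, level and estimator are assumptions.
import numpy as np
import pywt
from scipy.special import gamma
from scipy.optimize import brentq

img = np.random.default_rng(0).normal(128, 30, (256, 256))   # stand-in for a fingerprint image
coeffs = pywt.wavedec2(img, wavelet='db4', level=3)           # [cA3, (cH3, cV3, cD3), ...]
subband = coeffs[1][0].ravel()                                # one coarse detail subband

def gg_shape(x):
    """Estimate the generalized Gaussian shape parameter by matching E|x| / sqrt(E x^2)."""
    r = np.mean(np.abs(x)) / np.sqrt(np.mean(x ** 2))
    ratio = lambda b: gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b)) - r
    return brentq(ratio, 0.1, 5.0)   # beta = 2 is Gaussian, beta = 1 is Laplacian

print(f"estimated shape parameter beta ~ {gg_shape(subband):.2f}")
```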