28 results for Vector analysis.


Relevance: 30.00%

Abstract:

This paper proposes an alternative algorithm to solve the median shortest path problem (MSPP) in the planning and design of urban transportation networks. The proposed vector labeling algorithm labels each node with a vector of multiple, conflicting objectives and deletes cyclic, infeasible and extreme-dominated paths in the criteria space by imposing a cyclic break (CB), a path cost constraint (PCC) and an access cost parameter (ACP), respectively. The output of the algorithm is a set of Pareto optimal paths (POP), each with an objective vector, from predetermined origin to destination nodes. The paper thus formulates an algorithm that identifies a non-inferior solution set of POP based on a non-dominated set of objective vectors, leaving the ultimate decision to decision-makers. A numerical experiment on an artificial transportation network is conducted to validate and compare results. Sensitivity analysis shows that the proposed algorithm is more efficient than existing solutions in terms of execution time and memory usage.
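
The non-dominated filtering at the core of such an algorithm can be sketched as follows. This is an illustrative Pareto filter over hypothetical two-criterion path labels, not the paper's full vector labeling procedure (the CB, PCC and ACP pruning rules are omitted):

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse in every criterion, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(labels):
    """Keep only non-dominated objective vectors: the Pareto optimal set."""
    return [a for a in labels
            if not any(dominates(b, a) for b in labels if b != a)]

# Hypothetical path labels as (travel cost, access cost) pairs.
labels = [(4, 9), (6, 5), (5, 8), (7, 4), (8, 8)]
pop = pareto_filter(labels)   # (8, 8) is dominated and drops out
```

Each surviving pair is one Pareto optimal path; the final choice among them is left to the decision-maker, as in the paper.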

Relevance: 30.00%

Abstract:

Urban sustainability expresses how well a city is conserved while it is inhabited and its urban resources are consumed. Measuring it depends on which indicators of conservation are considered important, besides the permitted levels of consumption under the adopted criteria. These criteria should include common factors shared by all cities being evaluated, in this case Abu Dhabi, as well as specific factors tied to geography, community and culture; that is, measures of urban sustainability specific to a Middle Eastern climate, community and culture. The role and added value of GIS vector and raster analysis in measuring or grading urban sustainability are considered herein. Scenarios were tested using various GIS data types to replicate the urban history (a ten-year period), current status and expected future of Abu Dhabi City, with factors set for climate, community needs and culture. The vector and raster GIS datasets relevant to each scenario were selected and analysed in terms of how, and how much, they can benefit urban sustainability ranking in quantity and quality tests. This included assessing the suitable data nature, type and format; the important topology rules to consider; the useful attributes to add; the relationships to maintain between data types in a geodatabase; specifying their usage in each scenario test; and then weighting each data type representing elements of a phenomenon related to an urban sustainability factor. Assessing the role of GIS analysis yielded data collection specifications, such as the accuracy measures reliable for a given type of GIS functional analysis used in urban sustainability ranking scenario tests.
This paper reflects the initial results of research conducted to test a multidisciplinary evaluation of urban sustainability using different indicator metrics, implementing vector and raster GIS analysis as basic tools to support the evaluation and increase its reliability, as well as to assess and decompose it. A hypothetical implementation of the chosen evaluation model, represented by various scenarios, was then applied to the planned urban sustainability factors over a set period to appraise the expected future grade of urban sustainability and to produce scenario-based advice for filling gaps and assuring relatively high future urban sustainability. The results presented here concentrate on the elements of vector and raster GIS analysis that support proper urban sustainability grading within the chosen model, on the reliability of the spatial data collected, and on the analyses selected and the resulting spatial information. The model is built from selected key indicators covering regional culture, climate and community needs; one example is energy demand and consumption for cooling systems. This factor is climate-related and region-specific, as temperatures range around 30-45 degrees Celsius in city areas. 3D GIS building polygons were used to analyse building volumes; the 'building height' attribute was used to estimate the number of floors; energy demand and consumption per unit volume were then calculated and compared, in scenarios, with possible sustainable energy supply or with different environmentally friendly cooling systems. This was followed by calculating cooling system effects on a selected area unit of 1 sq. km, combined with the level of greenery and open space as represented by park, tree, empty area, pedestrian and road surface polygons.
Initial measures showed that cooling system consumption can be reduced by around 15-20% with well-planned building distribution, proper spacing, and environmentally friendly products and building materials. Temperature levels, extracted from the thermal bands of satellite images three times during the assessment period, were also incorporated into the scenario. Other examples of assessing the contribution of GIS analysis to urban sustainability included waste productivity and some effects of greenhouse gases, measured by the intensity of road polygons and their closeness to dwelling and industrial areas as defined from land use/land cover thematic maps produced from classified satellite images; vectors were then created to define their role within the scenarios. City noise and light intensity were also investigated, as the region is developing rapidly and noise is magnified by construction activity and the closeness of airports and highways; the assessment examined the measures taken by urban planners to reduce or properly manage this degradation. In conclusion, tables were presented reflecting the scenario results in combination with GIS data types, analysis types, and the level of GIS data reliability for measuring a city's sustainability level relative to cultural and regional demands.
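
The floor-count and cooling-demand estimation described for the energy indicator can be illustrated with a minimal sketch; the storey height and per-volume cooling intensity below are assumed values for illustration, not figures from the study:

```python
FLOOR_HEIGHT_M = 3.0  # assumed average storey height

def estimate_floors(height_m):
    """Estimate floor count from the 'building height' attribute of a 3D polygon."""
    return max(1, round(height_m / FLOOR_HEIGHT_M))

def cooling_demand_kwh(volume_m3, intensity_kwh_per_m3=30.0):
    """Cooling demand for a building volume (assumed per-m3 intensity)."""
    return volume_m3 * intensity_kwh_per_m3

# A hypothetical tower: 500 m2 footprint, 30 m tall -> 10 floors, 15,000 m3.
floors = estimate_floors(30)
demand = cooling_demand_kwh(500 * 30)
savings_low, savings_high = demand * 0.15, demand * 0.20  # the reported 15-20% band
```

In the study this calculation would be driven by the geodatabase attributes of each building polygon rather than hand-entered dimensions.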

Relevance: 30.00%

Abstract:

Ordinal data are omnipresent in almost all multiuser-generated feedback such as questionnaires and preference ratings. This paper investigates modelling of ordinal data with Gaussian restricted Boltzmann machines (RBMs). In particular, we present the model architecture, learning and inference procedures for both vector-variate and matrix-variate ordinal data. We show that our model is able to capture latent opinion profiles of citizens around the world, and is competitive against state-of-the-art collaborative filtering techniques on large-scale public datasets. The model thus has the potential to extend the application of RBMs to diverse domains such as recommendation systems, product reviews and expert assessments.
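
A minimal sketch of the Gaussian-visible RBM underlying such a model, assuming unit visible variance and random illustrative weights (not the paper's learned parameters), shows how a vector of ordinal responses activates binary hidden units:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gaussian-visible RBM: real-valued v (ordinal ratings mapped to reals),
# binary hidden units h; W couples them, b_h is the hidden bias.
D, K = 6, 4
W = rng.normal(scale=0.1, size=(D, K))
b_h = np.zeros(K)

def hidden_probs(v, sigma=1.0):
    """p(h_k = 1 | v) for a Gaussian-visible RBM with visible std sigma."""
    return sigmoid(b_h + (v / sigma**2) @ W)

v = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 1.0])  # one respondent's ordinal answers
p = hidden_probs(v)  # K probabilities, one per hidden unit
```

The vector `p` plays the role of the latent opinion profile; training (e.g. contrastive divergence) and the ordinal output distribution are beyond this sketch.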

Relevance: 30.00%

Abstract:

The purpose is to explore the inherent complexity of Kurt Lewin's force field theory through applied analysis of organizational case examples and related methods. The methodology draws on a range of tools from the consultancy research domain, including force field analysis of complex organizational scenarios, and applies bricolage and corroboration to discoveries emerging from semi-structured interviews, author experience, critical reflection and a literature survey. The findings are that linear representation of internal and external forces in organizational applications of field theory does not fully explain the paradox of inverse vectors in the forces of change. The force field is not an impermeable thing; instead, it morphs. Examples of the inverse principle and its effects are detailed and extended in this analysis. The implication of the research is that force field analysis and the related change processes promoted in the organizational change literature risk missing key complexities. Including the inverse principle can provide an enhanced, holistic understanding of the prevailing forces for change. The augmentation of Kurt Lewin's early work, and the extension of previous analyses of his legacy in the Journal of Change Management and elsewhere, provide in this article change analysis insights that align well with current organizational environments.

Relevance: 30.00%

Abstract:

With rising burdens of obesity and chronic disease, the role of diet as a modifiable risk factor is of increasing public health interest. There is a growing body of evidence that low consumption of dairy products is associated with elevated risk of chronic metabolic and cardiovascular disorders. Surveys also suggest that dairy product consumption falls well below recommended targets for much of the population in many countries, including the USA, UK, and Australia. We reviewed the scientific literature on the health effects of dairy product consumption (both positive and negative) and used the best available evidence to estimate the direct healthcare expenditure and burden of disease [disability-adjusted life years (DALY)] attributable to low consumption of dairy products in Australia. We implemented a novel technique for estimating population attributable risk developed for application in nutrition and other areas in which exposure to risk is a continuous variable. We found that in the 2010-2011 financial year, AUD$2.0 billion (USD$2.1 billion, €1.6 billion, or ∼1.7% of direct healthcare expenditure) and the loss of 75,012 DALY were attributable to low dairy product consumption. In sensitivity analyses, varying core assumptions yielded corresponding estimates of AUD$1.1-3.8 billion (0.9-3.3%) and 38,299-151,061 DALY lost. The estimated healthcare cost attributable to low dairy product consumption is comparable with total spending on public health in Australia (AUD$2.0 billion in 2009-2010). These findings justify the development and evaluation of cost-effective interventions that use dairy products as a vector for reducing the costs of diet-related disease.
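
A population-attributable-risk calculation of this kind can be sketched in discretised form: compare the mean relative risk under the observed intake distribution with that under a counterfactual distribution meeting the recommended target. All shares, risks and the burden figure below are hypothetical, not the study's inputs:

```python
# Discretised intake bands (low -> high). "observed" is the current population
# share per band; "counterfactual" is the share if intake met the target.
observed = [0.5, 0.3, 0.2]
counterfactual = [0.1, 0.3, 0.6]
rr = [1.4, 1.2, 1.0]   # assumed relative risk of disease per band

def mean_rr(shares, risks):
    """Population-average relative risk under a given intake distribution."""
    return sum(p * r for p, r in zip(shares, risks))

def paf(observed, counterfactual, risks):
    """Population attributable fraction of the observed vs counterfactual intake."""
    return (mean_rr(observed, risks) - mean_rr(counterfactual, risks)) / mean_rr(observed, risks)

burden_daly = 500_000                       # hypothetical total disease burden
attributable = paf(observed, counterfactual, rr) * burden_daly
```

The study's novelty is handling exposure as a continuous variable; the discretised version above conveys the same attribution logic in its simplest form.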

Relevance: 30.00%

Abstract:

Reliable forecasting of the level of aggregate demand for construction is of vital importance to developers, builders and policymakers. Previous construction demand forecasting studies mainly focused on temporal estimation using national aggregate data. The construction market is better represented by a group of interconnected regions or local markets than by a national aggregate, yet regional forecasting techniques have rarely been applied. Furthermore, limited research has applied regional variations in construction markets to construction demand modelling and forecasting. A new comprehensive method, a panel vector error correction approach, is used to forecast regional construction demand using Australia's state-level data. The links between regional construction demand and general economic indicators are investigated by panel cointegration and causality analysis. The empirical results suggest that both long-run and causal links exist between regional construction demand and construction price, state income, population, unemployment rates and interest rates. The panel vector error correction model provides reliable and robust forecasts, with a mean absolute percentage error of less than 10% for the medium-term trend of regional construction demand, and outperforms the conventional forecasting models (panel multiple regression and time series multiple regression). The key macroeconomic factors behind construction demand variations across Australian regions are also presented. The findings and the robust econometric techniques used are valuable to construction economists examining future construction markets at a regional level.
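
The mean absolute percentage error used as the forecasting benchmark can be computed as follows; the actual and forecast series are hypothetical:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, as used for the <10% benchmark."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical quarterly construction demand vs model forecasts.
actual = [100.0, 110.0, 120.0, 130.0]
forecast = [95.0, 112.0, 118.0, 135.0]
err = mape(actual, forecast)   # about 3.08%, well under the 10% threshold
```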

Relevance: 30.00%

Abstract:

Texture classification is one of the most important tasks in the computer vision field and has been extensively investigated over the last several decades. Previous texture classification methods mainly used template matching based methods such as the Support Vector Machine and k-Nearest-Neighbour classifiers. Given enough training images, state-of-the-art texture classification methods can achieve very high classification accuracies on some benchmark databases. However, when the number of training images is limited, as usually happens in real-world applications because of the high cost of obtaining labelled data, the accuracy of those state-of-the-art methods deteriorates due to overfitting. In this paper we aim to develop a novel framework that correctly classifies textural images with only a small number of training images. Taking into account the repetition and sparsity properties of textures, we propose a sparse representation based multi-manifold analysis framework for texture classification from few training images. A set of new training samples is generated from each training image by a scale and spatial pyramid, and the training samples belonging to each class are then modelled by a manifold based on sparse representation. We learn a sparse representation dictionary and a projection matrix for each class and classify test images based on their projected reconstruction errors. The framework provides a more compact model than template matching based texture classification methods and mitigates overfitting. Experimental results show that the proposed method achieves reasonably high generalization capability with as few as 3 training images, and significantly outperforms state-of-the-art texture classification approaches on three benchmark datasets.
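
The classification rule, assigning a test image to the class whose dictionary reconstructs it with the smallest error, can be sketched as follows. For brevity a least-squares fit stands in for the paper's sparse coding and learned projection, and the dictionaries are random illustrative matrices:

```python
import numpy as np

def classify_by_reconstruction(x, dictionaries):
    """Assign x to the class whose dictionary reconstructs it with least error."""
    errors = []
    for D in dictionaries:
        coeffs, *_ = np.linalg.lstsq(D, x, rcond=None)  # stand-in for sparse coding
        errors.append(np.linalg.norm(x - D @ coeffs))
    return int(np.argmin(errors))

rng = np.random.default_rng(1)
D0 = rng.normal(size=(8, 3))   # class-0 "dictionary" (random, illustrative)
D1 = rng.normal(size=(8, 3))   # class-1 "dictionary"
x = D0 @ np.array([1.0, -0.5, 2.0])   # a sample lying in the class-0 subspace
pred = classify_by_reconstruction(x, [D0, D1])
```

Because `x` lies exactly in the span of `D0`, its class-0 reconstruction error is essentially zero, so the rule picks class 0.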

Relevance: 30.00%

Abstract:

This article examines the short- and long-run causal relationship between energy consumption and GDP in six emerging economies of Asia. Based on cointegration and vector error correction modelling, the empirical results show unidirectional short- and long-run causality running from energy consumption to GDP for China, unidirectional short-run causality from output to energy consumption for India, and bi-directional short-run causality for Thailand. Neutrality between energy consumption and income is found for Indonesia, Malaysia and the Philippines. Both the generalized variance decompositions and the impulse response functions confirm the direction of causality. These findings have important policy implications for the countries concerned. The results suggest that while India may directly initiate energy conservation measures, China and Thailand may opt for a balanced combination of alternative policies.
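
The error-correction mechanism underlying such models can be sketched for a bivariate case; the coefficients below are illustrative, not estimates from the study:

```python
def ecm_step(y_prev, x_prev, dx, alpha=-0.3, beta=1.2, gamma=0.5):
    """One step of a bivariate error-correction model:
        dy_t = alpha * (y_{t-1} - beta * x_{t-1}) + gamma * dx_t
    A negative alpha pulls y back toward the long-run relation y = beta * x."""
    ect = y_prev - beta * x_prev   # deviation from the long-run equilibrium
    return alpha * ect + gamma * dx

# y sits slightly above its long-run level, so the correction term drags growth down.
dy = ecm_step(y_prev=12.5, x_prev=10.0, dx=0.4)
```

In the paper's setting, a significant `alpha` on the error-correction term is what establishes long-run causality, while significance of the short-run terms (here `gamma`) establishes short-run causality.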

Relevance: 30.00%

Abstract:

An approach to EEG signal classification for brain-computer interface (BCI) applications using a fuzzy standard additive model is introduced in this paper. The Wilcoxon test is employed to rank wavelet coefficients, and the top-ranking wavelets form a feature set that serves as input to the fuzzy classifiers. Experiments are carried out using two benchmark datasets, Ia and Ib, downloaded from BCI Competition II. Prevalent classifiers, including a feedforward neural network, support vector machine, k-nearest neighbours, the AdaBoost ensemble and an adaptive neuro-fuzzy inference system, are also implemented for comparison. Experimental results show the dominance of the proposed method over the competing approaches.
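
The rank-based feature selection step can be sketched as follows; the statistic is a simple tie-free rank sum whose deviation from its null expectation scores how well each hypothetical wavelet feature separates the two classes:

```python
def rank_sum(a, b):
    """Wilcoxon rank-sum statistic of sample a within the pooled sample (no ties)."""
    ranks = {v: i + 1 for i, v in enumerate(sorted(a + b))}
    return sum(ranks[v] for v in a)

# Hypothetical per-class values of two wavelet features.
class_a = {"w1": [0.1, 0.2, 0.3], "w2": [0.9, 1.1, 1.0]}
class_b = {"w1": [0.15, 0.25, 0.35], "w2": [0.2, 0.1, 0.3]}

def score(feature):
    a, b = class_a[feature], class_b[feature]
    expected = len(a) * (len(a) + len(b) + 1) / 2  # null mean of the rank sum
    return abs(rank_sum(a, b) - expected)

top = max(class_a, key=score)  # the feature that separates the classes best
```

Here `w2` wins because class-a values all outrank class-b values; the paper ranks all wavelet coefficients this way and keeps the top ones as classifier inputs.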

Relevance: 30.00%

Abstract:

The recent upsurge in microbial genome data has revealed that hemoglobin-like (HbL) proteins may be widely distributed among bacteria and that some organisms may carry more than one HbL-encoding gene. However, the discovery of HbL proteins has so far been limited to a small number of bacteria. This study describes the prediction of HbL proteins and their domain classification using a machine learning approach. Support vector machine (SVM) models were developed for predicting HbL proteins based upon amino acid composition (AC), dipeptide composition (DC), a hybrid method (AC + DC), and position-specific scoring matrices (PSSM). In addition, we introduce for the first time a new prediction method based on max-to-min amino acid residue (MM) profiles. The average accuracy, standard deviation (SD), false positive rate (FPR), confusion matrix, and receiver operating characteristic (ROC) were analyzed. We also compared the performance of our proposed models on homology detection databases. The performance of the different approaches was estimated using fivefold cross-validation, and prediction accuracy was further investigated through confusion matrix and ROC curve analysis. All experimental results indicate that the proposed BacHbpred can be a prospective predictor for the determination of HbL-related proteins. BacHbpred, a web tool, has been developed for HbL prediction.
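
The amino acid composition (AC) feature used by the SVM models is simply the fraction of each of the 20 standard residues in a sequence; a minimal sketch with a made-up sequence:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """20-dimensional amino acid composition: fraction of each residue."""
    seq = seq.upper()
    return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

vec = aa_composition("MKVLAAGLLK")  # hypothetical 10-residue sequence
```

Each protein thus becomes a fixed-length 20-dimensional vector regardless of sequence length, which is what makes it suitable SVM input; DC extends the same idea to the 400 dipeptide pairs.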

Relevance: 30.00%

Abstract:

An accurate estimation of the pressure drop due to vehicles inside an urban tunnel plays a pivotal role in tunnel ventilation design. The main aim of the present study is to utilize computational intelligence techniques for predicting the pressure drop caused by cars in traffic congestion in urban tunnels. A supervised feedforward backpropagation neural network is utilized to estimate this pressure drop. The performance of the proposed network structure is examined on a dataset obtained from Computational Fluid Dynamics (CFD) simulation. The input data comprise two variables, tunnel velocity and tunnel length, which are fed to each algorithm to predict the pressure drop. A 10-fold cross-validation technique is utilized for three data mining methods, namely the multi-layer perceptron, support vector machine regression, and linear regression, and a comparison is made to identify the most accurate. Simulation results illustrate that the multi-layer perceptron is able to estimate the pressure drop accurately.
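
A single forward pass of a small feedforward network of this kind can be sketched as follows; the weights and scaled inputs are hypothetical, not the trained network from the study:

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer feedforward pass with tanh activation."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# Inputs are (tunnel velocity, tunnel length), already scaled; weights are made up.
W1 = [[0.5, 0.2], [-0.3, 0.8]]
b1 = [0.1, -0.1]
W2 = [1.2, 0.7]
b2 = 5.0
pressure_drop = mlp_forward([2.0, 1.5], W1, b1, W2, b2)
```

Backpropagation would fit `W1`, `b1`, `W2`, `b2` to the CFD dataset; under 10-fold cross-validation the data are split into ten parts, each serving once as the held-out test fold.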

Relevance: 30.00%

Abstract:

In this article we investigate the theoretical behaviour of finite-lag VAR(n) models fitted to time series that in truth come from an infinite-order VAR(∞) data generating mechanism. We show that the overall error can be broken down into two basic components: an estimation error that stems from the difference between the parameter estimates and their population ensemble VAR(n) counterparts, and an approximation error that stems from the difference between the VAR(n) and the true VAR(∞). The two sources of error are shown to be present in other performance indicators previously employed in the literature to characterize so-called truncation effects. Our theoretical analysis indicates that the magnitude of the estimation error exceeds that of the approximation error, but experimental results based upon a prototypical real business cycle model and a practical example indicate that the approximation error approaches its asymptotic position far more slowly than does the estimation error, their relative orders of magnitude notwithstanding. The experimental results suggest that, with sample sizes and lag lengths like those commonly employed in practice, VAR(n) models are likely to exhibit serious errors of both types when attempting to replicate the dynamics of the true underlying process, and that inferences based on VAR(n) models can be very untrustworthy.