96 results for least square minimization
Abstract:
Due to the health impacts of exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate prediction of pollutant concentrations. This paper focuses on an innovative method for daily air pollution prediction that combines a Support Vector Machine (SVM) as the predictor with Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. CO concentrations recorded at the Rey monitoring station in the south of Tehran from January 2007 to February 2011 were used to test the effectiveness of this method. Hourly CO concentrations were predicted using the SVM and the hybrid PLS–SVM models, and daily CO concentrations were predicted from the same four years of measured data. Results demonstrate that both models have good prediction ability; however, the hybrid PLS–SVM model is more accurate. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error, and mean absolute relative error were employed to compare the performance of the models. After size reduction, the errors decrease and the coefficient of determination increases from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. The hybrid PLS–SVM model also required less computational time than the SVM model, as expected, supporting its more accurate and faster prediction ability.
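A minimal sketch of the hybrid PLS-then-SVM pipeline this abstract describes, using scikit-learn; the data, the number of latent components, and the SVR settings are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                          # stand-in for raw input variables
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=500)   # stand-in for CO concentration

# Step 1: PLS as the data selection / size reduction tool.
pls = PLSRegression(n_components=3).fit(X, y)
X_reduced = pls.transform(X)            # latent scores replace the raw inputs

# Step 2: SVM regression on the reduced representation.
svm = SVR(kernel="rbf", C=10.0).fit(X_reduced, y)
y_hat = svm.predict(X_reduced)
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```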
Abstract:
Determining the key variables of transportation disadvantage remains a great challenge, as the variables are commonly selected using ad hoc techniques. To identify these variables, this research develops a transportation disadvantage framework by drawing on the capability approach. The developed framework is statistically analysed using partial least squares-based software to determine its fitness. The statistical analysis identifies mobility and socioeconomic variables that significantly influence transportation disadvantage. For the case of Brisbane, Australia, the research reveals the key socioeconomic variables for transportation disadvantage to be household structure, presence of a dependent family member, vehicle ownership, and driving licence possession.
Abstract:
The main contribution of this paper is the decomposition/separation of a composite induction motor load from measurements at a system bus. At power system transmission buses, the load is represented by static and dynamic components. Induction motors are considered the main dynamic loads, and in practice many and various induction motors contribute at major transmission buses; at an industrial bus in particular, most of the load is dynamic. Rather than trying to extract models of many individual machines, this paper identifies three groups of induction motors to represent the dynamic loads: small (4 kW to 11 kW), medium (15 kW to 180 kW), and large (above 630 kW). First, a composite load is formed from these groups, with a different percentage contribution from each group. Then, each group's percentage contribution is decomposed from the composite model using least squares algorithms. At commercial and residential buses, the static load percentage is higher than the dynamic load percentage; to apply this approach to such buses, it is good practice to represent the total load as a combination of composite motor loads, constant impedance loads, and constant power loads. To validate the approach, 24 hours of Sydney West data is decomposed according to the three groups of motor models.
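A hedged sketch of the decomposition step described above: given assumed response shapes for the three motor groups, the percentage contribution of each group is recovered from the composite signal by ordinary least squares. The group responses here are synthetic placeholders, not actual induction motor models.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
# Columns: assumed dynamic responses of the small, medium and large groups.
A = np.column_stack([np.exp(-t / tau) for tau in (0.05, 0.2, 0.8)])

true_fractions = np.array([0.2, 0.5, 0.3])            # sums to 1
composite = A @ true_fractions + 0.01 * rng.normal(size=t.size)

# Ordinary least squares solve for the group contributions.
fractions, *_ = np.linalg.lstsq(A, composite, rcond=None)
print("estimated contributions:", fractions.round(3))
```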
Abstract:
Speeding is recognized as a major contributing factor in traffic crashes. In order to reduce speed-related crashes, the city of Scottsdale, Arizona implemented the first fixed-camera photo speed enforcement program (SEP) on a limited-access freeway in the US. The 9-month demonstration program, spanning January 2006 to October 2006, was implemented on a 6.5-mile urban freeway segment of Arizona State Route 101 running through Scottsdale. This paper presents the results of a comprehensive analysis of the impact of the SEP on speeding behavior, crashes, and the economic impact of crashes. The impact on speeding behavior was estimated using generalized least squares estimation, in which observed speeds and speeding frequencies during the program period were compared to those during other periods. The impact of the SEP on crashes was estimated using three evaluation methods: a before-and-after (BA) analysis using a comparison group, a BA analysis with traffic flow correction, and an empirical Bayes BA analysis with time-variant safety. The analysis results reveal that speeding detection frequencies (speeds ≥ 76 mph) increased by a factor of 10.5 after the SEP was (temporarily) terminated. Average speeds in the enforcement zone were reduced by about 9 mph when the SEP was implemented, after accounting for the influence of traffic flow. All crash types were reduced except rear-end crashes, although the estimated magnitude of the impact varies across estimation methods (and their corresponding assumptions). When considering Arizona-specific crash-related injury costs, the SEP is estimated to yield about $17 million in annual safety benefits.
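An illustrative generalized least squares fit in the spirit of the speed analysis above, using statsmodels; the data and the enforcement indicator are synthetic stand-ins, and the error covariance is left at identity where the paper's analysis would model correlation structure in the observed speeds.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
enforcement = (np.arange(n) > 100).astype(float)   # 1 while the SEP is active
X = sm.add_constant(enforcement)
speed = 74 - 9 * enforcement + rng.normal(scale=3, size=n)

# GLS with an assumed (here: identity) error covariance.
model = sm.GLS(speed, X, sigma=np.eye(n)).fit()
print(model.params)   # intercept near 74, enforcement effect near -9 mph
```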
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, in which shared computing resources must be allocated among co-runner processes and threads. While efficient resource allocation yields a highly efficient and stable overall multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even on systems with abundant computing resources. This thesis proposes a cache-aware adaptive closed loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many different approaches to this problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the resource allocation problem are well considered. These approaches employ either static or dynamic optimization methods, or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of the approaches that do consider the dynamic nature of multiprocessor systems apply only a basic closed loop system and hence fail to take the time-varying and uncertain character of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed loop cache-aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls, and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time series statistics. For the identified cache resource dynamics, the framework enforces instruction fairness for the threads. Fairness, in the context of this research, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. The framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache-aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which reduces the co-runner cache impact on thread performance. The second is the development of the relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache-aware scheduling system, an adaptive self-tuning control framework is constructed to add an adaptive closed loop aspect to it. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR recursive least squares (RLS) algorithm is applied within the framework to estimate the highly uncertain and time-varying cache resource patterns of threads.
The second minor contribution is the design of the controller design module; an algebraic controller design algorithm, pole placement, is utilized to design the relevant controller, which provides the desired time-varying control action. The adaptive self-tuning control framework and the cache-aware scheduling system together constitute the final framework: the closed loop cache-aware adaptive scheduling framework. The third minor contribution is the validation of this framework's efficiency in overcoming co-runner cache dependency. Time-series statistical counters are developed for the M-Sim Multi-Core Simulator, and the theoretical findings and mathematical formulations are implemented as MATLAB m-file code. In this way, the overall framework is tested and the experimental outcomes are analyzed. The experiments show that the framework successfully drives the co-runner cache-dependent thread instruction count to the co-runner-independent instruction count with an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.
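For concreteness, a textbook covariance-form recursive least squares (RLS) update with a forgetting factor, as a simplified stand-in for the thesis's QR RLS estimator; the regressors and the true parameters are synthetic placeholders.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One RLS update: regressor phi, observation y, forgetting factor lam."""
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + k * (y - phi @ theta)    # parameter update
    P = (P - np.outer(k, phi @ P)) / lam     # covariance update
    return theta, P

theta = np.zeros(2)
P = 1e3 * np.eye(2)
rng = np.random.default_rng(3)
for _ in range(500):
    phi = rng.normal(size=2)                 # e.g. lagged cache miss counts
    y = phi @ np.array([1.5, -0.7]) + 0.05 * rng.normal()
    theta, P = rls_step(theta, P, phi, y)
print(theta)  # converges near [1.5, -0.7]
```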
Abstract:
Traffic-generated semi- and non-volatile organic compounds (SVOCs and NVOCs) pose a serious threat to human and ecosystem health when washed off into receiving water bodies by stormwater. Rainfall characteristics influenced by climate change make the estimation of these pollutants in stormwater quite complex. The research study discussed in this paper developed a prediction framework for such pollutants under the dynamic influence of climate change on rainfall characteristics. It was established through principal component analysis (PCA) that the intensity and duration of low to moderate rain events induced by climate change mainly affect the wash-off of SVOCs and NVOCs from urban roads. The study outcomes overcame the limitations of stringent laboratory preparation of calibration matrices by extracting uncorrelated underlying factors in the data matrices through systematic application of PCA and factor analysis (FA). Based on the initial findings from PCA and FA, the framework incorporated an orthogonal rotatable central composite experimental design to set up the calibration matrices, and partial least squares regression to identify significant variables for predicting the target SVOCs and NVOCs in four particulate fractions ranging from >300 μm to 1 μm and one dissolved fraction of <1 μm. For the particulate fractions, similar distributions of predicted and observed concentrations of the target compounds from the minimum to the 75th percentile were achieved, with inter-event coefficients of variation of 5% to 25%. The limited solubility of the target compounds in stormwater restricted the predictive capacity of the proposed method for the dissolved fraction of <1 μm.
Abstract:
This paper presents a solution to the problem of estimating the monotonous tendency of a slowly varying oscillating system. A recursive Prony Analysis (PA) scheme is developed, in which a dynamic model is obtained with parameters identified using the forgetting factor recursive least squares (FFRLS) method. A box threshold principle is proposed to separate the dominant components, resulting in an accurate estimation of the trend of oscillating systems. Performance of the proposed PA is evaluated using real-time measurements in the presence of random noise and vibration effects. Moreover, the proposed method is used to estimate the monotonous tendency of deck displacement to assist in the safe landing of an unmanned aerial vehicle (UAV). It is shown that the method estimates the instantaneous mean deck position satisfactorily, making it well suited for integration into ship-UAV approach and landing guidance systems.
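A rough sketch of Prony-style trend extraction in the spirit of the method above: fit a linear prediction model to an oscillating signal by batch least squares (where the paper uses FFRLS), take the roots of its characteristic polynomial as modes, and keep the slow ones via a frequency threshold. The model order, threshold, and signal are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0, 20, 0.05)
signal = 0.1 * t + np.sin(2 * np.pi * 0.4 * t) + 0.02 * rng.normal(size=t.size)

p = 8                                     # model order (assumed)
# Linear prediction: x[n] = a1*x[n-1] + ... + ap*x[n-p], solved by least squares.
A = np.column_stack([signal[p - k - 1:-k - 1] for k in range(p)])
b = signal[p:]
a, *_ = np.linalg.lstsq(A, b, rcond=None)

roots = np.roots(np.concatenate(([1.0], -a)))
slow = roots[np.abs(np.angle(roots)) < 0.05]   # "box threshold" on frequency
print("dominant (slow) modes:", slow)
```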
Abstract:
This paper outlines a feasible scheme for extracting the deck trend when a rotary-wing unmanned aerial vehicle (RUAV) approaches an oscillating deck. An extended Kalman filter (EKF) is developed to fuse measurements from multiple sensors for effective estimation of the unknown deck heave motion. A recursive Prony Analysis (PA) procedure is also proposed to implement online curve-fitting of the estimated heave motion. The proposed PA constructs an appropriate model with parameters identified using the forgetting factor recursive least squares (FFRLS) method. The deck trend is then extracted by separating the dominant modes. Performance of the proposed procedure is evaluated using real ship motion data, and simulation results confirm the suitability of the proposed method for safe landing of RUAVs operating in a maritime environment.
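A skeleton of a linear Kalman filter fusing noisy heave measurements, as a simplified stand-in for the EKF sensor fusion described above; the state model, noise covariances, and deck motion are assumed placeholders.

```python
import numpy as np

dt = 0.05
F = np.array([[1.0, dt], [0.0, 1.0]])     # state: [heave, heave rate]
H = np.array([[1.0, 0.0]])                # we observe heave only
Q = 1e-3 * np.eye(2)                      # assumed process noise
R = np.array([[0.05]])                    # assumed measurement noise

x = np.zeros(2)
P = np.eye(2)
rng = np.random.default_rng(5)
for k in range(400):
    z = np.sin(2 * np.pi * 0.1 * k * dt) + 0.2 * rng.normal()  # noisy deck heave
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.atleast_1d(z) - H @ x)
    P = (np.eye(2) - K @ H) @ P
print("final heave estimate:", x[0])
```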
Abstract:
Reliable ambiguity resolution (AR) is essential to Real-Time Kinematic (RTK) positioning and its applications, since incorrect ambiguity fixing can lead to largely biased positioning solutions. A partial ambiguity fixing technique is developed to improve the reliability of AR, involving partial ambiguity decorrelation (PAD) and partial ambiguity resolution (PAR). Decorrelation transformations can substantially amplify biases in the phase measurements; the purpose of PAD is to find the optimum trade-off between decorrelation and worst-case bias amplification. PAR refers to the case where only a subset of the ambiguities can be fixed correctly to their integers in the integer least-squares (ILS) estimation system at high success rates. RTK solutions can then be derived from these integer-fixed phase measurements, provided that the number of reliably resolved phase measurements is sufficient for least-squares estimation of the RTK solutions. Considering the GPS constellation alone, partially fixed measurements are often insufficient for positioning. AR reliability is usually characterised by the AR success rate. In this contribution, an AR validation decision matrix is first introduced to understand the impact of the success rate, and the AR risk probability is included for a more complete evaluation of AR reliability. We use 16 ambiguity variance-covariance matrices with different levels of success rate to analyse the relation between success rate and AR risk probability. Next, the paper examines how, during the PAD process, a bias in one measurement is propagated and amplified onto many others, leading to more than one wrong integer and affecting the success probability. Furthermore, the paper proposes a partial ambiguity fixing procedure with a predefined success rate criterion and a ratio test in the ambiguity validation process. Galileo constellation data is tested with simulated observations. Numerical results from our experiment clearly demonstrate that only when the computed success rate is very high can AR validation provide decisions about the correctness of AR that are close to reality, with both low AR risk and low false alarm probabilities. The results also indicate that the PAR procedure can automatically choose an adequate number of ambiguities to fix, at a given high success rate, from the multiple constellations instead of fixing all the ambiguities. This is a benefit that multiple GNSS constellations can offer.
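For reference, a widely used lower bound on the AR success rate is the integer bootstrapping success rate, a standard GNSS result sketched here; the paper's own success-rate computation may differ.

```latex
P_s \;=\; \prod_{i=1}^{n}\left( 2\,\Phi\!\left( \frac{1}{2\,\sigma_{\hat a_{i|I}}} \right) - 1 \right),
\qquad
\Phi(x) \;=\; \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-t^{2}/2}\, dt,
```

where the \(\sigma_{\hat a_{i|I}}\) are the conditional standard deviations of the (decorrelated) ambiguities obtained from the \(L^{T}DL\) factorisation of the ambiguity variance-covariance matrix.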
A multivariate approach to the identification of surrogate parameters for heavy metals in stormwater
Abstract:
Stormwater is a potential and readily available alternative source of potable water in urban areas. However, its direct use is severely constrained by the presence of toxic pollutants such as heavy metals (HMs). The presence of HMs in stormwater is of concern because of their chronic toxicity and persistent nature. In addition to human health impacts, metals can contribute to adverse ecosystem health impacts in receiving waters. Therefore, the ability to predict the levels of HMs in stormwater is crucial for monitoring stormwater quality and for the design of effective treatment systems. Unfortunately, current laboratory methods for determining HM concentrations are resource intensive and time consuming. In this paper, applications of multivariate data analysis techniques are presented to identify potential surrogate parameters that can be used to determine HM concentrations in stormwater. Accordingly, partial least squares was applied to identify a suite of physicochemical parameters that can serve as indicators of HMs. Datasets with varied characteristics, such as land use and particle size distribution of solids, were analyzed to validate the efficacy of the influencing parameters. Iron, manganese, total organic carbon, and inorganic carbon were identified as the predominant parameters that correlate with HM concentrations. The practical extension of the study outcomes to urban stormwater management is also discussed.
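An illustrative PLS regression used to rank candidate surrogate parameters, mirroring the approach above; the dataset is synthetic and the parameter names are placeholders for the measured physicochemical suite.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
names = ["Fe", "Mn", "TOC", "IC", "EC", "pH"]
X = rng.normal(size=(120, len(names)))
# Assume the metal correlates mainly with Fe, Mn, TOC and IC.
y = X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2] + 0.4 * X[:, 3] + 0.1 * rng.normal(size=120)

pls = PLSRegression(n_components=2).fit(X, y)
weights = np.abs(pls.coef_).ravel()
for name, w in sorted(zip(names, weights), key=lambda p: -p[1]):
    print(f"{name}: {w:.2f}")   # larger weight -> stronger surrogate candidate
```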
Abstract:
Dynamic capabilities are widely considered to incorporate those processes that enable organizations to sustain superior performance over time. In this paper, we argue theoretically and demonstrate empirically that these effects are contingent on organizational structure and the competitive intensity in the market. Results from partial least squares structural equation modeling (PLS-SEM) analyses indicate that organic organizational structures facilitate the impact of dynamic capabilities on organizational performance. Furthermore, we find that the performance effects of dynamic capabilities are contingent on the competitive intensity faced by firms. Our findings demonstrate the performance effects of internal alignment between organizational structure and dynamic capabilities, as well as the external fit of dynamic capabilities with competitive intensity. We outline the advantages of PLS-SEM for modeling latent constructs, such as dynamic capabilities, and conclude with managerial implications.
Abstract:
This paper proposes a practical prediction procedure for the vertical displacement of a Rotary-wing Unmanned Aerial Vehicle (RUAV) landing deck in the presence of stochastic sea state disturbances. A time series model capturing the characteristics of the dynamic relationship between an observer and the landing deck is constructed, with model orders determined by a novel principle based on the Bayes Information Criterion (BIC) and coefficients identified using the Forgetting Factor Recursive Least Squares (FFRLS) method. In addition, a fast-converging online multi-step predictor is developed, which can be implemented more rapidly than the Auto-Regressive (AR) predictor as it requires fewer memory allocations when updating coefficients. Simulation results demonstrate that the proposed approach exhibits satisfactory prediction performance, making it suitable for integration into ship-helicopter approach and landing guidance systems given the computational capacity of the flight computer.
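A sketch of AR order selection by an information criterion followed by multi-step prediction, in the spirit of the procedure above; the paper uses a BIC-based principle with FFRLS coefficient identification, whereas this sketch uses statsmodels' batch AutoReg fit. The data and the candidate order range are assumptions.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(7)
t = np.arange(0, 60, 0.1)
heave = np.sin(2 * np.pi * 0.12 * t) + 0.1 * rng.normal(size=t.size)

# Select the AR order that minimises BIC over a candidate range.
fits = {p: AutoReg(heave, lags=p).fit() for p in range(2, 12)}
best_p = min(fits, key=lambda p: fits[p].bic)

# Multi-step-ahead forecast from the selected model.
pred = fits[best_p].predict(start=len(heave), end=len(heave) + 19)
print(f"selected order {best_p}; first forecast values: {pred[:3]}")
```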
Abstract:
This paper presents a novel and practical procedure for estimating the mean deck height to assist automatic landing operations of a Rotorcraft Unmanned Aerial Vehicle (RUAV) in harsh sea environments. A modified Prony Analysis (PA) procedure is outlined to deal with real-time observations of deck displacement, involving an appropriate dynamic model that approximates real deck motion, with parameters identified through the Forgetting Factor Recursive Least Squares (FFRLS) method. The model order is specified using an order-selection criterion based on minimizing the summation of accumulated estimation errors. In addition, a feasible threshold criterion is proposed to separate the dominant components of deck displacement, which results in an accurate instantaneous estimation of the mean deck position. Simulation results demonstrate that the proposed recursive procedure exhibits satisfactory estimation performance when applied to real-time deck displacement measurements, making it well suited for integration into ship-RUAV approach and landing guidance systems.