84 results for least-square


Relevance:

60.00%

Publisher:

Abstract:

Bio-kinematic characterisation of human exercises involves parameters such as velocity, acceleration and joint angles. Most of these are measured directly with sensors ranging from RGB cameras to inertial sensors. However, because of limitations of these sensors, such as inherent noise, filters have to be applied to suppress the effect of the noise. When the two-component (trajectory shape and dynamics) bio-kinematic encoding model is established to represent an exercise, reducing the effect of noise embedded in the raw data is important, since the underlying model can be quite sensitive to noise. In this paper, we examine and compare some commonly used filters, namely the least-squares Gaussian filter, the Savitzky-Golay filter and the optimal Kalman filter, using four groups of real data collected from Microsoft Kinect, and conclude that the Savitzky-Golay filter is the best choice when establishing an underlying model for human exercise representation.
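As a rough illustration of the kind of smoothing compared here, the sketch below applies a Savitzky-Golay filter to a noisy joint-angle trajectory using SciPy; the window length and polynomial order are illustrative choices, not the settings used in the paper.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_joint_trajectory(joint_angles, window_length=11, polyorder=3):
    """Savitzky-Golay smoothing of a noisy joint-angle trajectory
    (e.g. one joint tracked by Kinect); the window length and polynomial
    order are illustrative choices."""
    joint_angles = np.asarray(joint_angles, dtype=float)
    smoothed = savgol_filter(joint_angles, window_length, polyorder)
    # The same local polynomial fit also yields a smoothed first derivative
    # (angular velocity per sample).
    velocity = savgol_filter(joint_angles, window_length, polyorder, deriv=1)
    return smoothed, velocity
```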

Relevance:

60.00%

Publisher:

Abstract:

To evaluate subjective ocular comfort across the day with three silicone hydrogel daily disposables (SHDDs) in a group of adapted lens wearers. Masked subjects (asymptomatic or symptomatic of end-of-day (EOD) dryness with habitual lenses) wore three SHDDs: DAILIES TOTAL1 (DT1), Clariti 1day (C1D), or 1-DAY ACUVUE TRUEYE (AVTE), each for 3 days. On day 2, wearing time (WT) and comfort ratings after insertion, at 4, 8, and 12 hours, and at EOD were recorded. Because not all subjects wore lenses for 12 hours, comfort was analyzed across the day (up to 8 hours, 8 to 12 hours), and a new variable (“cumulative comfort” [CC]) was calculated for EOD. One hundred four subjects completed the study (51 asymptomatic, 53 symptomatic). The two groups had different WTs (mean WT, 14.0 and 12.7 hours, respectively; p < 0.001). Ocular comfort was rated higher in the asymptomatic group throughout the day (p < 0.001). One hundred four subjects wore all three SHDDs for at least 8 hours, whereas 74 (45 asymptomatic, 29 symptomatic) subjects wore them for 12 hours or longer. Comfort ratings were higher with DT1 (least square means [LSM] = 91.0) than with C1D (LSM = 86.5; p < 0.001) and AVTE (LSM = 87.7; p = 0.011) for the first 8 hours and lower with C1D compared with DT1 (p = 0.012) from 8 to 12 hours. Mean EOD (± SD) comfort with the C1D lens was 72 ± 21, lower than both DT1 (mean, 79 ± 17; p = 0.001) and AVTE (mean, 78 ± 21; p = 0.010). Mean CC was higher in the asymptomatic group (mean, 1261 ± 59) compared with that in the symptomatic group (mean, 1009 ± 58; p < 0.001) and higher for DT1 (mean, 1184 ± 258) than C1D (mean, 1094 ± 318; p = 0.002) and AVTE (mean, 1122 ± 297; p = 0.046). All three SHDDs had average WTs of 12 hours or longer for 1 day. Comfort during the first 12 hours was highest with DT1 (similar to AVTE between 8 and 12 hours) and lowest with C1D. End-of-day comfort was lowest with C1D, and CC was highest for DT1. Cumulative comfort may be a valuable new metric to assess ocular comfort during the day.

Relevance:

60.00%

Publisher:

Abstract:

Business intelligence technologies have recently received much attention from both academics and practitioners. However, the impact of business intelligence (BI) on corporate performance management (CPM) has not yet been investigated. To address this gap, we conducted a large-scale survey collecting data from 337 senior managers. The partial least squares method was employed to analyse the survey data. The findings suggest that the more effective the BI implementation, the more effective the CPM-related planning and analytic practices. Interestingly, size and industry sector do not influence the relationship between BI effectiveness and CPM. This research offers a number of implications for theory and practice.

Relevance:

60.00%

Publisher:

Abstract:

A rapid analytical approach for discrimination and quantitative determination of polyunsaturated fatty acid (PUFA) contents, particularly eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), in a range of oils extracted from marine resources has been developed using attenuated total reflection Fourier transform infrared spectroscopy and multivariate data analysis. The spectral data were collected without any sample preparation; no chemical pretreatment was involved and the data were processed directly in the developed spectral analysis platform, making the approach fast, very cost effective, and suitable for routine use in biotechnological and food research and related industries. Unsupervised pattern recognition techniques, including principal component analysis and unsupervised hierarchical cluster analysis, discriminated the marine oils into groups by correlating similarities and differences in their fatty acid (FA) compositions, which corresponded well to the FA profiles obtained from traditional lipid analysis based on gas chromatography (GC). Furthermore, quantitative determination of unsaturated fatty acids, PUFAs, EPA and DHA by partial least squares regression, with calibration models optimized specifically for each targeted FA, was performed both on known marine oils and on entirely independent, unknown n-3 oil samples obtained from an actual commercial product, in order to provide prospective testing of the developed models towards real applications. The predicted FA contents showed good accuracy compared with their reference GC values, as evidenced by (1) low root mean square error of prediction, (2) coefficients of determination close to 1 (i.e., R² ≥ 0.96), and (3) residual predictive deviation values indicating good to high predictive power for all target FAs. © 2014 Springer Science+Business Media New York.
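The sketch below shows the general shape of such a PLS calibration, fitting one fatty-acid content from spectra against GC reference values with scikit-learn; the function name, the number of latent variables and the in-sample RMSEP are assumptions for illustration, not the paper's optimized models.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

def calibrate_pls(spectra, fa_reference, n_components=8):
    """Hypothetical PLS calibration of one fatty-acid content (e.g. EPA or DHA)
    from ATR-FTIR spectra against GC reference values. The number of latent
    variables is an assumption, not the paper's optimized setting."""
    pls = PLSRegression(n_components=n_components).fit(spectra, fa_reference)
    predicted = pls.predict(spectra).ravel()
    # In practice the RMSEP would be computed on independent validation samples.
    rmsep = np.sqrt(mean_squared_error(fa_reference, predicted))
    return pls, rmsep
```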

Relevance:

60.00%

Publisher:

Abstract:

The Empirical Mode Decomposition (EMD) method is commonly used for solving the single channel blind source separation (SCBSS) problem in signal processing. However, the SCBSS mixing vector, on which the EMD method relies, has not yet been effectively constructed. The mixing vector reflects the weights of the original signal sources that form the single channel blind signal source. In this paper, we propose a novel method to construct a mixing vector for a single channel blind signal source that approximates the actual mixing vector by preserving the ratios between the signal weights. The constructed mixing vector can be used to improve signal separation. Our method combines an adaptive filter, the least-squares method, the EMD method and signal source samples to construct the mixing vector. Experimental tests on audio signals showed that our method improves the similarity of the source energy ratios from 0.2644 to 0.8366. This kind of recognition is very important in weak signal detection.
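A minimal sketch of the central least-squares idea is given below, estimating mixing weights from source samples and the single-channel mixture with NumPy; it omits the adaptive-filter and EMD stages of the proposed method.

```python
import numpy as np

def estimate_mixing_vector(mixture, source_samples):
    """Least-squares estimate of the mixing weights of a single-channel
    mixture, given sample segments of the original sources. A simplified
    sketch; the proposed method also uses an adaptive filter and EMD."""
    S = np.column_stack(source_samples)              # one column per source
    weights, *_ = np.linalg.lstsq(S, np.asarray(mixture, float), rcond=None)
    weights = np.maximum(weights, 0.0)               # keep weights physically meaningful
    return weights / weights.sum()                   # preserve the ratios between weights
```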

Relevance:

60.00%

Publisher:

Abstract:

This article analyses the determinants of renewable energy consumption in six major emerging economies that are proactively accelerating the adoption of renewable energy. The long-run elasticities from the panel methods (fully modified ordinary least squares and dynamic ordinary least squares) and the time series method (autoregressive distributed lag) are broadly consistent. For Brazil, China, India and Indonesia, renewable energy consumption in the long run is significantly determined by income and pollutant emissions. For the Philippines and Turkey, however, income appears to be the main driver of renewable energy consumption. In the short run, bi-directional causality is found for Brazil and China between renewable energy and income and between renewable energy and pollutant emissions. This research justifies the efforts undertaken by emerging countries to reduce carbon intensity by increasing energy efficiency and substantially increasing the share of renewables in the overall energy mix.
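As a simplified illustration of long-run elasticity estimation (not the FMOLS/DOLS or ARDL estimators used in the article), a log-log OLS regression with statsmodels could look like the following; the variable names are assumptions.

```python
import numpy as np
import statsmodels.api as sm

def longrun_elasticities(renewables, income, emissions):
    """Log-log OLS sketch of long-run elasticities of renewable energy
    consumption with respect to income and pollutant emissions.
    Illustrative only; not the FMOLS/DOLS or ARDL estimators of the article."""
    X = sm.add_constant(np.column_stack([np.log(income), np.log(emissions)]))
    model = sm.OLS(np.log(renewables), X).fit()
    # With all variables in logs, the slope coefficients are elasticities.
    return model.params[1:]
```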

Relevance:

60.00%

Publisher:

Abstract:

Heterogeneous deformation developed during "static recrystallization (SRX) tests" poses serious questions about the validity of conventional methods for measuring the softening fraction. The challenges of measuring SRX and verifying a proposed SRX kinetic model are discussed, and a least-squares technique is used to quantify the error in a proposed SRX kinetic model. The technique relies on an existing computational-experimental multi-layer formulation to account for heterogeneity during post-interruption hot torsion deformation. The kinetics of static recrystallization for a type 304 austenitic stainless steel deformed at 900 °C and a strain rate of 0.01 s⁻¹ are characterized using this formulation. By minimizing the error between the measured and calculated torque-twist data, the parameters of the kinetic model and the flow behavior during the second hit are evaluated and compared with those obtained with a conventional technique. Typical static recrystallization distributions in the test sample are presented. The major differences between the conventional and presented techniques are found to result from heterogeneous recrystallization in the cylindrical core of the specimen, where the material is still only partially recrystallized at the onset of the second-hit deformation. For the investigated experimental conditions, this core is confined to the first two-thirds of the gauge radius when the holding time is shorter than 50 s and the maximum pre-strain is about 0.5.
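A generic sketch of least-squares fitting of recrystallization kinetics is shown below, assuming an Avrami (JMAK) form for the softening fraction; the paper instead fits its kinetic parameters against torque-twist data through the multi-layer formulation, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_srx_kinetics(holding_times, softening_fractions):
    """Least-squares fit of an assumed Avrami (JMAK) form
    X = 1 - exp(-0.693 * (t / t50)**n) to measured softening fractions.
    A generic sketch; the paper fits its kinetic parameters against
    torque-twist data through a multi-layer formulation."""
    t = np.asarray(holding_times, float)
    X = np.asarray(softening_fractions, float)

    def residuals(params):
        t50, n = params
        return 1.0 - np.exp(-0.693 * (t / t50) ** n) - X

    result = least_squares(residuals, x0=[10.0, 1.0],
                           bounds=([1e-6, 0.1], [1e6, 5.0]))
    t50, n = result.x
    return t50, n
```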

Relevance:

60.00%

Publisher:

Abstract:

Industrial producers face the task of optimizing production processes to achieve the desired quality, such as mechanical properties, with the lowest energy consumption. In industrial carbon fiber production, the fibers are processed in bundles (batches) containing several thousand filaments, so energy optimization is a stochastic process involving uncertainty, imprecision and randomness. This paper presents a stochastic optimization model to reduce energy consumption for a given range of desired mechanical properties. Several processing condition sets are developed, and for each set of conditions 50 fiber samples are analyzed for tensile strength and modulus. The energy consumed during production of the samples is carefully monitored on the processing equipment. Five standard distribution functions are then examined to determine which best describes the distribution of the mechanical properties of the filaments. The Kolmogorov-Smirnov test is used to verify the goodness of fit and correlation statistics. To estimate the parameters of the selected (Weibull) distribution, the maximum likelihood, least-squares and genetic algorithm methods are compared. Factors including the sample size, the confidence level and the relative error of the estimated parameters are used in evaluating the tensile strength and modulus properties. The energy consumption and N2 gas cost are modeled by the convex hull method. Finally, mixed integer linear programming is used to optimize carbon fiber production quality, energy consumption and total cost. The results show that, using the stochastic optimization models, we are able to predict production quality within a given range and minimize the energy consumption of the industrial process.
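The sketch below contrasts two of the Weibull parameter-estimation routes mentioned, maximum likelihood and a least-squares fit of the linearized CDF with median-rank probabilities, using SciPy and NumPy; the genetic algorithm route and the later optimization stages are omitted.

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_fits(strengths):
    """Estimate two-parameter Weibull shape and scale for filament strengths
    by (a) maximum likelihood and (b) least squares on the linearized CDF
    using median-rank probabilities. The genetic algorithm route is omitted."""
    x = np.sort(np.asarray(strengths, dtype=float))
    n = len(x)
    # (a) Maximum likelihood, location fixed at zero.
    shape_mle, _, scale_mle = weibull_min.fit(x, floc=0)
    # (b) Least squares: ln(-ln(1 - F)) = shape*ln(x) - shape*ln(scale).
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median-rank CDF estimate
    slope, intercept = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
    shape_ls, scale_ls = slope, np.exp(-intercept / slope)
    return (shape_mle, scale_mle), (shape_ls, scale_ls)
```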

Relevance:

60.00%

Publisher:

Abstract:

Extracellular data analysis has become a quintessential method for understanding neurophysiological responses to stimuli. It demands stringent techniques owing to the complicated nature of the recording environment. In this paper, we highlight the challenges in extracellular multi-electrode recording and data analysis, as well as the limitations of some currently employed methodologies. To address some of these challenges, we present a unified algorithm in the form of selective sorting. Selective sorting is modelled around a hypothesized generative model that addresses the natural phenomenon of spikes triggered by an intricate neuronal population. The algorithm incorporates Cepstrum of Bispectrum, ad hoc clustering algorithms, wavelet transforms, least squares and correlation concepts, strategically tailored to characterize spikes and form distinctive clusters. Additionally, we demonstrate the influence of noise-modelled wavelets in sorting overlapping spikes. The algorithm is evaluated using both raw and synthesized data sets with different levels of complexity, and the performances are tabulated for comparison using widely accepted qualitative and quantitative indicators.
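As a loose illustration of one stage of such a pipeline (not the Cepstrum-of-Bispectrum or selective-sorting algorithm itself), the sketch below extracts wavelet features from aligned spike waveforms and clusters them; the wavelet, decomposition level and cluster count are assumptions.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def cluster_spikes(spike_waveforms, n_clusters=3, wavelet="db4", level=3):
    """Cluster aligned, equal-length spike waveforms on wavelet-coefficient
    features. Only a loose illustration of one stage of a sorting pipeline;
    the wavelet, level and cluster count are assumptions."""
    features = []
    for w in spike_waveforms:
        coeffs = pywt.wavedec(np.asarray(w, float), wavelet, level=level)
        features.append(np.concatenate(coeffs))      # flatten multi-level DWT coefficients
    features = np.asarray(features)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    return labels
```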

Relevance:

60.00%

Publisher:

Abstract:

Health analysis often involves prediction of multiple outcomes of mixed type. Existing work is restricted either to a limited number of outcomes or to specific outcome types. We propose a framework for mixed-type multi-outcome prediction. The framework uses a cumulative loss function composed of a specific loss function for each outcome type, for example least squares (continuous outcome), hinge (binary outcome), Poisson (count outcome) and exponential (non-negative outcome). To model these outcomes jointly, we impose commonality across the prediction parameters through a common matrix-normal prior. The framework is formulated as iterative optimization problems and solved using an efficient block coordinate descent (BCD) method. We empirically demonstrate both scalability and convergence. We apply the proposed model to a synthetic dataset and then to two real-world cohorts: a cancer cohort and an acute myocardial infarction cohort collected over a two-year period. We predict multiple emergency-related outcomes, for example future emergency presentations (binary), emergency admissions (count), emergency length of stay in days (non-negative) and emergency time to next admission in days (non-negative). We show that the predictive performance of the proposed model is better than several state-of-the-art baselines.
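A minimal sketch of a cumulative mixed-type loss is given below; the specific functional forms, especially for the non-negative (exponential) outcome, are illustrative assumptions, and the matrix-normal prior and BCD solver are omitted.

```python
import numpy as np

# Illustrative per-outcome losses for a cumulative mixed-type objective.
def squared_loss(y, f):      # continuous outcome
    return (y - f) ** 2

def hinge_loss(y, f):        # binary outcome, labels in {-1, +1}
    return np.maximum(0.0, 1.0 - y * f)

def poisson_loss(y, f):      # count outcome, f is the log-rate
    return np.exp(f) - y * f

def exponential_loss(y, f):  # non-negative outcome, f is the log-mean (assumed form)
    return f + y * np.exp(-f)

def cumulative_loss(outcomes, predictions, kinds):
    """Sum type-specific losses over all outcomes; regularization and the
    matrix-normal prior coupling the parameters are omitted."""
    losses = {"continuous": squared_loss, "binary": hinge_loss,
              "count": poisson_loss, "non-negative": exponential_loss}
    return sum(losses[k](np.asarray(y, float), np.asarray(f, float)).sum()
               for y, f, k in zip(outcomes, predictions, kinds))
```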

Relevance:

60.00%

Publisher:

Abstract:

We examine the important roles of two forms of capital, human and social, in the accumulation of critical resources that enable firms to adopt sound environmental management practices which contribute to better firm performance. Drawing on human and social capital theories and the resource-based view of the firm, we tested this proposition using data from a survey of 141 small manufacturing firms drawn from a survey of business enterprises in a metropolitan city in the southern region of the Philippines. The results of our analysis using a structural equation modelling-partial least squares approach show that both human capital, such as the age, experience and education of the firm's managers, and social capital, such as external managerial ties and networks, make significant and positive contributions to the environmental management resources of firms, although the effects vary in magnitude. The accumulation of environmental management resources is not only positively linked to the adoption by firms of pro-environment practices but also fully mediates the effects of the two types of capital on the adoption of such practices. Pro-environment practices are positively linked to better performance outcomes. The findings underscore the need to account for the intangible and more tacit forms of capital, such as managerial talent, knowledge, skills, and social ties and networks, in the wider debate on how small manufacturing firms in developing countries can address the pressing need to integrate environmental sustainability in business.

Relevance:

30.00%

Publisher:

Abstract:

The use of commodity, currency and stock index futures to hedge risky exposures in the underlying assets is well documented in the financial literature. However, single stock futures are a relatively new addition to the futures family, and academic research on their use as a hedging tool is relatively thin. In this study we explore the efficacy of two methodological approaches for hedging a long position in the underlying stock with a single stock future. We use daily trading data covering 2002 to 2007 from the Indian market, where single stock futures have been thriving in terms of trading volume, to extract optimal hedge ratios using both static OLS and 30-day, 60-day and 90-day moving least squares. The method of moving least squares has been used by market practitioners for some time, primarily as a trend analysis and charting tool. Our results indicate that the moving least squares approach outperforms static OLS in terms of hedging efficiency, measured by the root mean square hedging error.
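A compact sketch of the two hedge-ratio estimators compared here, a static OLS slope and a rolling ("moving least squares") slope on daily returns, is shown below using pandas; the window length and variable names are illustrative.

```python
import pandas as pd

def hedge_ratios(spot: pd.Series, futures: pd.Series, window: int = 60):
    """Static OLS hedge ratio and a rolling ('moving least squares') hedge
    ratio, both computed on daily returns; the window length is illustrative."""
    rs = spot.pct_change().dropna()
    rf = futures.pct_change().dropna()
    static_h = rs.cov(rf) / rf.var()                          # full-sample OLS slope
    rolling_h = rs.rolling(window).cov(rf) / rf.rolling(window).var()
    return static_h, rolling_h
```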

Relevance:

30.00%

Publisher:

Abstract:

Adaptive filters are increasingly being studied for their suitability for complex and non-stationary signals. Many adaptive filters utilise a reference input that is used to form an estimate of the noise in the target signal. In this paper we discuss the application of adaptive filters to electroencephalography (EEG) data heavily contaminated by electromyography (EMG). We propose the use of multiple reference inputs instead of the traditional single input. These references are formed using multiple EMG sensors during an EEG experiment. Each reference input is processed and ordered by first determining Pearson's r-squared correlation coefficient, from which a weighting metric is derived and used to scale and order the reference channels according to the paradigm presented in this paper. This paper presents the use and application of the Adaptive-Multi-Reference (AMR) Least Mean Squares adaptive filter in the domain of electroencephalograph signal acquisition.
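A hypothetical simplification of the multi-reference idea is sketched below: EMG reference channels are ranked and weighted by their squared Pearson correlation with the contaminated EEG and cancelled in turn with a standard LMS filter; the actual AMR weighting paradigm may differ.

```python
import numpy as np

def multi_reference_lms(eeg, emg_refs, mu=0.01, order=8):
    """Hypothetical simplification of a multi-reference LMS canceller: rank
    the EMG references by squared Pearson correlation with the contaminated
    EEG, weight them accordingly, and cancel each with a standard LMS filter."""
    cleaned = np.asarray(eeg, dtype=float).copy()
    r2 = np.array([np.corrcoef(cleaned, r)[0, 1] ** 2 for r in emg_refs])
    for idx in np.argsort(r2)[::-1]:                 # strongest reference first
        ref = np.asarray(emg_refs[idx], float) * (r2[idx] / r2.sum())
        w = np.zeros(order)
        for k in range(order, len(cleaned)):
            u = ref[k - order:k][::-1]               # recent reference samples
            e = cleaned[k] - w @ u                   # EEG minus estimated EMG artefact
            w += mu * e * u                          # LMS weight update
            cleaned[k] = e                           # keep the cleaned sample
    return cleaned
```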

Relevance:

30.00%

Publisher:

Abstract:

Least-mean-square-type (LMS-type) algorithms are known as simple and effective adaptation algorithms. However, LMS-type algorithms involve a trade-off between convergence rate and steady-state performance. In this paper, we investigate a new variable step-size approach to achieve a fast convergence rate and low steady-state misadjustment. By approximating the optimal step-size that minimizes the mean-square deviation, we derive variable step-sizes for both the time-domain normalized LMS (NLMS) algorithm and the transform-domain LMS (TDLMS) algorithm. The proposed variable step-sizes are simple quotient forms of filtered versions of the quadratic error and are very effective for the NLMS and TDLMS algorithms. Computer simulations are presented in the framework of adaptive system modeling. Superior performance is obtained compared with existing popular variable step-size approaches for the NLMS and TDLMS algorithms. © 2014 Springer Science+Business Media New York.
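The sketch below shows a generic NLMS filter with a variable step-size; the step-size rule used here is a placeholder heuristic based on filtered error energy, not the quotient-form rule derived in the paper.

```python
import numpy as np

def vss_nlms(x, d, order=16, mu_min=0.01, mu_max=1.0, alpha=0.95,
             gamma=1e-2, eps=1e-8):
    """NLMS adaptive filter with a variable step-size. The step-size rule here
    is a placeholder heuristic driven by filtered error energy, not the
    quotient-form rule derived in the paper."""
    x = np.asarray(x, float)
    d = np.asarray(d, float)
    w = np.zeros(order)
    e = np.zeros(len(x))
    mu = mu_max
    for k in range(order, len(x)):
        u = x[k - order:k][::-1]                     # most recent input samples
        e[k] = d[k] - w @ u                          # a-priori error
        mu = np.clip(alpha * mu + gamma * e[k] ** 2, mu_min, mu_max)
        w += mu * e[k] * u / (u @ u + eps)           # normalized LMS update
    return w, e
```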

Relevance:

20.00%

Publisher:

Abstract:

The need for monotone approximation of scattered data often arises in regression problems where monotonicity is semantically important. One such domain is fuzzy set theory, where membership functions and aggregation operators are order preserving. Least-squares polynomial splines provide great flexibility when modeling non-linear functions, but may fail to be monotone. Linear restrictions on the spline coefficients provide necessary and sufficient conditions for spline monotonicity. The spline basis is selected in such a way that these restrictions take an especially simple form. The resulting non-negative least squares problem can be solved by a variety of standard, proven techniques. Additional interpolation requirements can also be imposed in the same framework. The method is applied to fuzzy systems, where membership functions and aggregation operators are constructed from empirical data.
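A small sketch of the underlying idea is given below: fit a monotone function by non-negative least squares over a basis chosen so that monotonicity reduces to sign constraints. Here a simple piecewise-linear ramp basis stands in for the paper's polynomial spline basis, and the intercept is handled crudely by shifting the data.

```python
import numpy as np
from scipy.optimize import nnls

def fit_monotone(x, y, knots):
    """Monotone (non-decreasing) least-squares fit on a piecewise-linear ramp
    basis max(x - knot, 0): non-negative coefficients guarantee monotonicity,
    so the sign restrictions take the simple form handled by NNLS."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    shift = y.min()                                  # crude handling of the intercept
    A = np.column_stack([np.maximum(x - k, 0.0) for k in knots])
    coef, _ = nnls(A, y - shift)                     # coefficients constrained >= 0
    def predict(t):
        T = np.column_stack([np.maximum(np.asarray(t, float) - k, 0.0) for k in knots])
        return shift + T @ coef
    return predict
```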