980 results for Autoregressive moving average (ARMA)


Relevance:

100.00%

Publisher:

Abstract:

This dissertation introduces the design of a multimodal, adaptive, real-time assistive system as an alternative human-computer interface for individuals with severe motor disabilities. The proposed design integrates a remote eye-gaze tracking system, voice recognition software, and a virtual keyboard. The methodology relies on a user profile that customizes eye-gaze tracking using neural networks. The user profiling feature supports universal access to computing resources for a wide range of applications such as web browsing, email, word processing, and editing. The study is significant in its integration of key algorithms to yield an adaptable, multimodal interface. The contributions of this dissertation stem from the following accomplishments: (a) establishment of the data transport mechanism between the eye-gaze system and the host computer, yielding a low failure rate of 0.9%; (b) accurate translation of eye data into cursor movement through successive steps that conclude with calibrated cursor coordinates from an improved conversion function, resulting in an average 70% reduction in the disparity between the point of gaze and the actual position of the mouse cursor compared with initial findings; (c) use of both a moving average and a trained neural network to minimize mouse-cursor jitter, yielding an average jitter reduction of 35%; (d) introduction of a new mathematical methodology to measure the degree of jitter in the mouse trajectory; (e) embedding of an onscreen keyboard to facilitate text entry, and a graphical interface used to generate user profiles for system adaptability. The adaptive nature of the interface is achieved through user profiles, which may contain the jitter and voice characteristics of a particular user as well as a customized list of the most commonly used words, ordered according to the user's preference in alphabetical or statistical order. This allows the system to provide effective interaction with a computer, and each time any of the subsystems is retrained, the accuracy of the interface response improves further.
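The abstract does not give the smoothing or jitter formulas; a minimal sketch of the two moving-average-related ideas it names — smoothing gaze samples and quantifying cursor jitter — might look as follows. The window size and the mean point-to-point displacement metric are illustrative assumptions, not the dissertation's definitions:

```python
import numpy as np

def smooth_gaze(points, window=5):
    """Smooth raw gaze coordinates with a simple moving average.

    points: (n, 2) array of raw (x, y) gaze samples.
    Returns an (n, 2) array of smoothed cursor coordinates.
    """
    kernel = np.ones(window) / window
    return np.column_stack([
        np.convolve(points[:, i], kernel, mode="same") for i in range(2)
    ])

def jitter(points):
    """Mean point-to-point displacement, a simple proxy for cursor jitter."""
    steps = np.diff(points, axis=0)
    return np.linalg.norm(steps, axis=1).mean()

# Example: jitter before vs. after smoothing (drifting gaze plus sensor noise)
raw = np.cumsum(np.random.normal(0, 1, (200, 2)), axis=0) \
      + np.random.normal(0, 3, (200, 2))
print(jitter(raw), jitter(smooth_gaze(raw)))
```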

Relevance:

100.00%

Publisher:

Abstract:

In 2009, South American military spending reached a total of $51.8 billion, a fifty percent increase over 2000 expenditures. The five-year moving average of arms transfers to South America was 150 percent higher for 2005 to 2009 than for 2000 to 2004.[1] These figures and others have led some observers to conclude that Latin America is engaged in an arms race. Other reasons, however, account for Latin America's large military expenditure. Among them: several countries have undertaken prolonged modernization efforts, recently made possible by six years of consistent regional growth;[2] a generational shift is at hand, as armed forces begin to shed the stigma of association with past dictatorial regimes;[3] and countries are pursuing specific individual strategies rather than reacting to purchases made by neighbors. For example, Brazil wants to attain greater control of its Amazon rainforests and offshore territories, Colombia's spending is a response to internal threats, and Chile is continuing a modernization process begun in the 1990s.[4] Concerns remain, however: Venezuela continues to demonstrate poor democratic governance and a lack of transparency; neighbor-state relations between Colombia and Venezuela, Peru and Chile, and Bolivia and Paraguay must all continue to be monitored; and Brazil's military purchases, although legitimate, will likely result in a large accumulation of equipment.[5] These concerns can best be addressed by strengthening, and garnering greater participation in, transparent procurement mechanisms.[6] The United States can do its part by supporting Latin American efforts to embrace the transparency process. _________________ [1] Bromley, Mark, "An Arms Race in Our Hemisphere? Discussing the Trends and Implications of Military Expenditures in South America," Brookings Institution Conference, Washington, D.C., June 3, 2010, transcript pp. 12, 13, and 16. [2] Robledo, Marcos, "The Rearmament Debate: A Chilean Perspective," PowerPoint presentation, slide 18, 2010 Western Hemisphere Security Colloquium, Miami, Florida, May 25-26, 2010. [3] Yopo, Boris, "¿Carrera Armamentista en la Región?" La Tercera, November 2, 2009, http://www.latercera.com/contenido/895_197084_9.shtml, accessed October 8, 2010. [4] Walser, Ray, "An Arms Race in Our Hemisphere? Discussing the Trends and Implications of Military Expenditures in South America," Brookings Institution Conference, Washington, D.C., June 3, 2010, transcript pp. 49, 50, 53, and 54. [5] Ibid., Guevara, Iñigo, p. 22. [6] Ibid., Bromley, Mark, pp. 18 and 19.

Relevance:

100.00%

Publisher:

Abstract:

The support vector machine classifier is used in several problems in various areas of knowledge. Basically, the method finds the hyperplane that maximizes the distance (margin) between the groups, in order to increase the generalization of the classifier. In this work, we treated some problems of binary classification of data obtained by electroencephalography (EEG) and electromyography (EMG) using the Support Vector Machine together with some complementary techniques: Principal Component Analysis to identify the active regions of the brain, the periodogram method, obtained by Fourier analysis, to help discriminate between groups, and the Simple Moving Average to eliminate some of the noise present in the data. Two functions were developed in the R software for carrying out the training and classification tasks. In addition, two weighting systems and a summary measure were proposed to support the classification decision. The application of these techniques, weights, and the summary measure in the classifier showed quite satisfactory results: the best results were an average rate of 95.31% for visual stimulus data, 100% correct classification for epilepsy data, and rates of 91.22% and 96.89% for object motion data for two subjects.
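The authors implemented their routines as two R functions; the following compact Python analogue only sketches the described pipeline — simple moving average denoising, periodogram features via Fourier analysis, PCA, and an SVM. All function names, parameters, and toy data are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def simple_moving_average(x, window=10):
    """Denoise a 1-D signal (one EEG/EMG channel) with a simple moving average."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def periodogram_features(x):
    """Periodogram via the FFT: squared magnitudes of the Fourier coefficients."""
    return np.abs(np.fft.rfft(x)) ** 2 / len(x)

# epochs: (n_trials, n_samples) single-channel recordings (toy data here)
rng = np.random.default_rng(0)
epochs = rng.normal(size=(40, 256))
labels = np.repeat([0, 1], 20)
features = np.array([periodogram_features(simple_moving_average(e))
                     for e in epochs])

# PCA reduces the spectral features before a linear-kernel SVM
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="linear"))
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```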

Relevance:

100.00%

Publisher:

Abstract:

This research explores Bayesian updating as a tool for estimating parameters probabilistically by dynamic analysis of data sequences. Two distinct Bayesian updating methodologies are assessed. The first approach focuses on Bayesian updating of failure rates for primary events in fault trees. A Poisson Exponentially Weighted Moving Average (PEWMA) model is implemented to carry out Bayesian updating of failure rates for individual primary events in the fault tree. To provide a basis for testing the PEWMA model, a fault tree is developed based on the Texas City Refinery incident of 2005. A qualitative fault tree analysis is then carried out to obtain a logical expression for the top event. A dynamic fault tree analysis is carried out by evaluating the top event probability at each Bayesian updating step by Monte Carlo sampling from the posterior failure rate distributions. It is demonstrated that PEWMA modeling is advantageous over conventional conjugate Poisson-Gamma updating techniques when failure data are collected over long time spans. The second approach focuses on Bayesian updating of parameters in non-linear forward models. Specifically, the technique is applied to the hydrocarbon material balance equation. In order to test the accuracy of the implemented Bayesian updating models, a synthetic data set is developed using the Eclipse reservoir simulator. Both structured-grid and MCMC sampling based solution techniques are implemented and are shown to model the synthetic data set with good accuracy. Furthermore, a graphical analysis shows that the implemented MCMC model displays good convergence properties. A case study demonstrates that the likelihood variance affects the rate at which the posterior assimilates information from the measured data sequence. Error in the measured data significantly affects the accuracy of the posterior parameter distributions. Increasing the likelihood variance mitigates random measurement errors, but causes the overall variance of the posterior to increase. Bayesian updating is shown to be advantageous over deterministic regression techniques as it allows for the incorporation of prior belief and full modeling of uncertainty over the parameter ranges. As such, the Bayesian approach to estimation of parameters in the material balance equation shows utility for incorporation into reservoir engineering workflows.
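As an illustration of the updating scheme the thesis contrasts with PEWMA, here is a minimal sketch of conjugate Poisson-Gamma updating of a failure rate, extended with a simple forgetting factor to mimic the PEWMA idea of down-weighting old failure data. The discount mechanism and all numbers are assumptions, not the thesis's exact model:

```python
import numpy as np

def update_failure_rate(alpha, beta, failures, exposure, discount=0.95):
    """One Bayesian update of a Gamma(alpha, beta) prior on a Poisson failure rate.

    failures: number of failures observed in this period
    exposure: observation time for this period
    discount: forgetting factor (< 1 down-weights older data, PEWMA-style);
              discount = 1 recovers standard conjugate Poisson-Gamma updating.
    """
    alpha = discount * alpha + failures
    beta = discount * beta + exposure
    return alpha, beta

# Sequential updating over yearly failure counts (toy data)
alpha, beta = 1.0, 1.0  # weakly informative Gamma prior
for failures, exposure in [(2, 1.0), (0, 1.0), (1, 1.0), (3, 1.0)]:
    alpha, beta = update_failure_rate(alpha, beta, failures, exposure)
    print(f"posterior mean failure rate: {alpha / beta:.3f} per year")

# Monte Carlo samples from the posterior, e.g. to propagate into a fault tree
samples = np.random.gamma(alpha, 1.0 / beta, size=10_000)
```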

Relevance:

100.00%

Publisher:

Abstract:

Paleoenvironmental proxy data for ocean properties, eolian sediment input, and continental rainfall based on high-resolution analyses of sediment cores from the southwestern Black Sea and the northernmost Gulf of Aqaba were used to infer hydroclimatic changes in northern Anatolia and the northern Red Sea region during the last ~7500 years. Pronounced and coherent multicentennial variations in these records reveal patterns that strongly resemble modern temperature and rainfall anomalies related to the Arctic Oscillation/North Atlantic Oscillation (AO/NAO). These patterns suggest a prominent role of AO/NAO-like atmospheric variability during the Holocene beyond interannual to interdecadal timescales, most likely originating from solar output changes.

Relevance:

100.00%

Publisher:

Abstract:

Costs related to inventory usually represent a significant share of a company's total assets. Despite this, companies in general pay little attention to it, even though the benefits of effective inventory management are obvious: less tied-up capital, increased customer satisfaction, and a better working environment. Permobil AB, Timrå is in an intense period in terms of revenue and growth. The production unit is aiming for a 30% increase in output over the next two years. To make this possible, the company has to improve the way it distributes and handles material. The purpose of the study is to provide useful information and concrete proposals for action, so that the company can build a strategy for an effective and sustainable inventory management solution. Alternative forecasting methods are suggested in order to reach a more nuanced view of the different articles and how they should be managed. The Analytic Hierarchy Process (AHP) was used to let specially selected persons decide the criteria by which articles should be valued. The criteria they agreed on were annual volume value, lead time, frequency rate, and purchase price. The other proposed method was a two-dimensional model in which annual volume value and frequency were the criteria that determined in which class an article should be placed. Both methods resulted in significant changes compared to the current solution. For the spare-part inventory, different forecasting methods were tested and compared with the current solution. It turned out that the current forecasting method performed worse than both the moving average and exponential smoothing with trend. The small sample of ten random articles is not big enough to reject the current solution, but the result is still reason enough for the company to review the quality of its forecasts.
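A minimal sketch of the two forecasting methods that outperformed the current one — a moving average and exponential smoothing with trend (here Holt's linear method, a standard formulation). The smoothing constants and demand figures are assumptions:

```python
def moving_average_forecast(demand, window=3):
    """Forecast the next period as the mean of the last `window` observations."""
    return sum(demand[-window:]) / window

def holt_forecast(demand, alpha=0.3, beta=0.1):
    """Exponential smoothing with trend (Holt's linear method).

    alpha smooths the level, beta smooths the trend; both values are
    assumptions here, typically tuned by minimizing historical forecast error.
    """
    level, trend = demand[0], demand[1] - demand[0]
    for d in demand[1:]:
        prev_level = level
        level = alpha * d + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend  # one-step-ahead forecast

spare_part_demand = [12, 15, 11, 14, 18, 16, 20, 19]  # toy monthly demand
print(moving_average_forecast(spare_part_demand))
print(holt_forecast(spare_part_demand))
```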

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the determination of a mean solar radiation year and of a typical meteorological year for the region of Funchal on Madeira Island, Portugal. The data set includes hourly mean and extreme values for air temperature, relative humidity, and wind speed, and hourly mean values for solar global and diffuse radiation, for the period 2004-2014, with a maximum data coverage of 99.7%. The determination of the mean solar radiation year consisted, first, of averaging all values for each hour/day pair and, second, of applying a five-day centred moving average to the hourly values. The determination of the typical meteorological year was based on Finkelstein-Schafer statistics, which make it possible to obtain a complete year of real measurements through the selection and combination of typical months, preserving the long-term averages while still allowing the analysis of short-term events. The typical meteorological year was validated by comparing its monthly averages with the long-term monthly averages. The values obtained were very close, so the typical meteorological year can accurately represent the long-term data series. The typical meteorological year can be used in the simulation of renewable energy systems, namely solar energy systems, and for predicting the energy performance of buildings.
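A minimal sketch of the Finkelstein-Schafer statistic used for typical-month selection, assuming a single variable and equal weighting (TMY procedures usually combine several weighted variables; the data here are synthetic):

```python
import numpy as np

def empirical_cdf(sample, x):
    """Empirical CDF of `sample` evaluated at the points `x`."""
    sample = np.sort(sample)
    return np.searchsorted(sample, x, side="right") / len(sample)

def finkelstein_schafer(candidate_month, long_term):
    """Finkelstein-Schafer statistic: mean absolute difference between a
    candidate month's empirical CDF and the long-term CDF for that calendar
    month. Smaller values mean the candidate month is more 'typical'."""
    x = np.sort(candidate_month)
    return np.mean(np.abs(empirical_cdf(candidate_month, x)
                          - empirical_cdf(long_term, x)))

# Pick the most typical January from an 11-year record (toy daily-mean data)
rng = np.random.default_rng(1)
years = [rng.normal(18, 2, 31) for _ in range(11)]  # daily means per January
long_term = np.concatenate(years)
best = min(range(11), key=lambda y: finkelstein_schafer(years[y], long_term))
print("most typical January comes from year index", best)
```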

Relevance:

100.00%

Publisher:

Abstract:

Before using basic precipitation data in any agroclimatic study to assess productivity, it is important to check the data series for homogeneity. For this purpose, data from 105 locations over northeast Brazil for the period 1912-1981 were used. The preliminary study indicates nonhomogeneity in the time series during the 1940s at a few locations. The amplitude of variation of the time series, taken as a 10-year moving average, differs considerably between regions, and it appears that this amplitude is to some extent related to the time of onset of effective rains. The fluctuations also present great regional diversity. Some of the data in the low latitudes indicate the presence of four cycles, namely 52, 26, 13, and 6.5 years. The 52-year cycle is also evident in the onset of the southwest monsoon over a low-latitude zone (the Kerala coast) in India. In the case of South Africa, the prominent cycles are 60, 30, 15, and 10 years; a similar situation appears to be present in the higher latitudes of northeast Brazil.
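A minimal sketch of the 10-year moving average screening described above, with the amplitude of variation taken as the range of the smoothed series (the amplitude definition and the data are assumptions):

```python
import numpy as np

def moving_average(x, window=10):
    """Moving average over `window` years of annual rainfall."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def amplitude(x):
    """Amplitude of variation of a smoothed series: max minus min."""
    return x.max() - x.min()

# Annual precipitation for one station, 1912-1981 (toy values, mm/year,
# with a synthetic 52-year cycle plus noise)
rng = np.random.default_rng(2)
years = np.arange(1912, 1982)
rain = 800 + 150 * np.sin(2 * np.pi * (years - 1912) / 52) \
       + rng.normal(0, 100, len(years))
smoothed = moving_average(rain)
print("amplitude of 10-year moving average:", round(amplitude(smoothed), 1))
```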

Relevance:

100.00%

Publisher:

Abstract:

We sought to determine the incidence of a VO2 plateau at VO2max in a cardiovascular-diseased (CVD) population using 4 different sampling intervals (15-breath moving average, 15 s, 30 s, and 60 s) and 3 different VO2 plateau criteria (≤50 mL·min⁻¹, ≤80 mL·min⁻¹, and ≤150 mL·min⁻¹). A total of 69 people (62 ± 10 yrs.) with recently diagnosed CVD performed a maximal exercise test (10:07 ± 2:24 min) on a treadmill. The test was classified as maximal (n = 57, 2430 ± 605 mL·min⁻¹) if self-terminated due to fatigue, or as symptom-limited (n = 12, 1683 ± 438 mL·min⁻¹) if symptoms presented. Chi-square analysis revealed a significant (p < 0.05) effect of sampling interval on the incidence of a VO2 plateau at VO2max across all 3 plateau criteria. The sampling interval had an increasingly stronger influence on the incidence of a VO2 plateau at VO2max with smaller criterion thresholds, as evidenced by the Cramer's V statistics: ≤50 mL·min⁻¹ (Cramer's V = 0.548, p < 0.05), ≤80 mL·min⁻¹ (Cramer's V = 0.489, p < 0.05), ≤150 mL·min⁻¹ (Cramer's V = 0.214, p < 0.05). The incidence of a VO2 plateau at VO2max in CVD individuals is significantly influenced by the sampling interval applied. Based on our findings, we recommend a 15-breath moving average and a VO2 plateau criterion of ≤50 mL·min⁻¹.
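A minimal sketch of plateau detection under the recommended settings — a 15-breath moving average with a ≤50 mL·min⁻¹ criterion. The block-comparison decision rule and the toy data are assumptions, not the study's exact procedure:

```python
import numpy as np

def plateau_at_vo2max(vo2_breaths, window=15, criterion=50.0):
    """Check for a VO2 plateau from breath-by-breath VO2 (mL/min).

    The series is reduced with a 15-breath moving average, and a plateau is
    flagged when the mean of the final `window` smoothed breaths exceeds the
    mean of the preceding `window` by no more than `criterion` mL/min.
    Comparing two consecutive end-of-test blocks is an illustrative choice.
    """
    kernel = np.ones(window) / window
    smoothed = np.convolve(vo2_breaths, kernel, mode="valid")
    last = smoothed[-window:].mean()
    prev = smoothed[-2 * window:-window].mean()
    return (last - prev) <= criterion

# Toy test: VO2 ramps toward VO2max, then levels off over the final breaths
rng = np.random.default_rng(0)
vo2 = np.concatenate([np.linspace(800, 2400, 300), np.full(60, 2400.0)])
vo2 += rng.normal(0, 40, vo2.size)  # breath-to-breath noise
print("plateau:", plateau_at_vo2max(vo2))
```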

Relevance:

100.00%

Publisher:

Abstract:

We consider the problem of navigating a flying robot to a specific sensor node within a wireless sensor network. This target sensor node periodically sends out beacons, and the robot is capable of sensing the received signal strength of each received beacon (RSSI measurements). Existing approaches to solving the sensor spotting problem with RSSI measurements do not deal with noisy channel conditions and/or depend heavily on additional hardware capabilities. In this work we reduce RSSI fluctuations due to noise by continuously sampling RSSI values and maintaining an exponential moving average (EMA). The EMA values enable us to detect a significant decrease in the received signal strength, in which case it is reasoned that the robot is moving away from the sensor. We present two basic variants for deciding on a new moving direction when the robot moves away from the sensor. Our simulations show that our approaches outperform competing algorithms in terms of success rate and flight time. In field experiments with real hardware, a flying robocopter successfully and quickly landed near a sensor placed in an outdoor test environment. Traces show robustness to additional environmental factors not accounted for in our simulations.
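A minimal sketch of the EMA-based decrease detection described above; the smoothing factor, drop threshold, and decision rule are illustrative assumptions rather than the paper's parameters:

```python
class RssiTracker:
    """Track beacon RSSI with an exponential moving average (EMA) and flag a
    significant drop, suggesting the robot is moving away from the sensor."""

    def __init__(self, alpha=0.3, drop_threshold=3.0):
        self.alpha = alpha                    # EMA smoothing factor (assumed)
        self.drop_threshold = drop_threshold  # dB below the best EMA seen
        self.ema = None
        self.best_ema = float("-inf")

    def update(self, rssi):
        """Feed one RSSI sample; return True if a significant decrease is seen."""
        if self.ema is None:
            self.ema = rssi
        else:
            self.ema = self.alpha * rssi + (1 - self.alpha) * self.ema
        self.best_ema = max(self.best_ema, self.ema)
        return (self.best_ema - self.ema) > self.drop_threshold

tracker = RssiTracker()
for sample in [-70, -69, -71, -72, -74, -76, -79, -81]:  # toy RSSI in dBm
    if tracker.update(sample):
        print("moving away from sensor; pick a new direction")
```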

Relevance:

100.00%

Publisher:

Abstract:

Physiological signals, which are controlled by the autonomic nervous system (ANS), can be used to detect the affective state of computer users and therefore find applications in medicine and engineering. The pupil diameter (PD) seems to provide a strong indication of the affective state, as found by previous research, but it has not yet been investigated fully. In this study, new approaches based on monitoring and processing the PD signal for off-line and on-line affective assessment ("relaxation" vs. "stress") are proposed. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw PD signal. Then three features (PDmean, PDmax, and PDWalsh) are extracted from the preprocessed PD signal for affective state classification. In order to select more relevant and reliable physiological data for further analysis, two types of data selection methods are applied, based on the paired t-test and subject self-evaluation, respectively. In addition, five different classifiers are implemented on the selected data, achieving average accuracies of up to 86.43% and 87.20%, respectively. Finally, the receiver operating characteristic (ROC) curve is used to investigate the discriminating potential of each individual feature by evaluating the area under the ROC curve, which reaches values above 0.90. For the on-line affective assessment, a hard threshold is first applied to remove eye blinks from the PD signal, and a moving average window is then used to obtain the representative value PDr for each one-second interval of PD. The on-line affective assessment algorithm has three main steps: preparation, feature-based decision voting, and affective determination. The final results show accuracies of 72.30% and 73.55% for the data subsets chosen using the two data selection methods (paired t-test and subject self-evaluation, respectively). To further analyze the efficiency of affective recognition through the PD signal, the galvanic skin response (GSR) was also monitored and processed. The highest affective assessment classification rate obtained from GSR processing was only 63.57% (based on the off-line processing algorithm). The overall results confirm that the PD signal should be considered one of the most powerful physiological signals for future automated real-time affective recognition systems, especially for detecting "relaxation" vs. "stress" states.
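A minimal sketch of the on-line preprocessing step described above — a hard threshold for blink removal followed by a one-second moving-average window producing PDr. The threshold value, the sample-hold strategy, and the sampling rate are assumptions:

```python
import numpy as np

def preprocess_pd(pd_signal, sample_rate=60, blink_threshold=1.5):
    """Blink removal and per-second averaging of a pupil diameter (PD) signal.

    Samples below `blink_threshold` (mm) are treated as blinks and replaced by
    the previous valid sample (a simple hold; not necessarily the study's
    exact choice). A moving-average window then yields one representative
    value PDr per one-second interval.
    """
    pd_signal = np.asarray(pd_signal, dtype=float)
    for i in range(1, len(pd_signal)):
        if pd_signal[i] < blink_threshold:
            pd_signal[i] = pd_signal[i - 1]
    n_seconds = len(pd_signal) // sample_rate
    pdr = pd_signal[: n_seconds * sample_rate].reshape(n_seconds, sample_rate)
    return pdr.mean(axis=1)  # PDr: one value per one-second interval

# Toy 3-second recording at 60 Hz with two blink dropouts
signal = np.full(180, 4.0) + np.random.normal(0, 0.05, 180)
signal[50:55] = 0.2   # blink artifact
signal[120:124] = 0.1
print(preprocess_pd(signal))
```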

Relevance:

100.00%

Publisher:

Abstract:

Organizations and their environments are complex systems. Such systems are difficult to understand and predict. Nevertheless, prediction is a fundamental task for business management and for decision making, which always involves risk. Classical prediction methods (among them linear regression, the autoregressive moving average, and exponential smoothing) rely on assumptions such as linearity and stability in order to remain mathematically and computationally tractable. The limitations of these methods, however, have been demonstrated by various means. In recent decades, new prediction methods have emerged that aim to embrace, rather than avoid, the complexity of organizational systems and their environments. Among them, the most promising are bio-inspired prediction methods (e.g., neural networks, genetic/evolutionary algorithms, and artificial immune systems). This article aims to establish the current state of actual and potential applications of bio-inspired prediction methods in management.

Relevance:

30.00%

Publisher:

Abstract:

Human half-lives of PentaBDE congeners have been estimated from the decline in serum concentrations measured over a 6-12 month period in a population of exchange students moving from North America to Australia. Australian serum PBDE concentrations are typically 5-10 times lower than in North America, so we can hypothesize that, if the biological half-life is sufficiently short, we would observe a decline in serum concentration with length of residence in Australia. Thirty students were recruited over a period of 3 years, from whom serum was archived every 2 months during their stay in Australia. Australian residents (n = 22) were also sampled longitudinally to estimate general population background levels. All serum samples were analyzed by gas chromatography high-resolution mass spectrometry. Key findings confirmed that BDE-47 concentrations declined relative to the Australians (median 2.3), with an average 22% decrease compared to 4% for the Australians. Estimated BDE-47 half-lives ranged from 0.5 to 2.3 years. The presentation will discuss the full results; this is the first study to provide in vivo estimates of half-life data for PentaBDE congeners.
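A minimal sketch of how a half-life can be estimated from such serial measurements, assuming first-order exponential decline (the fitting procedure and data are illustrative; the abstract does not describe the authors' exact method):

```python
import numpy as np

def estimate_half_life(times_yr, concentrations):
    """Estimate an elimination half-life from serial serum concentrations.

    Assumes first-order (exponential) decline, C(t) = C0 * exp(-k t), so a
    straight-line fit of ln C against t gives the elimination rate k and
    t_half = ln(2) / k. The first-order assumption is mine, not the paper's
    stated model.
    """
    slope, _ = np.polyfit(times_yr, np.log(concentrations), 1)
    return np.log(2) / -slope

# Toy bimonthly BDE-47 serum measurements over one year of residence
rng = np.random.default_rng(4)
t = np.array([0.0, 1/6, 2/6, 3/6, 4/6, 5/6, 1.0])
c = 12.0 * np.exp(-0.5 * t) * (1 + rng.normal(0, 0.03, t.size))
print(f"estimated half-life: {estimate_half_life(t, c):.2f} years")  # ~1.4 yr
```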

Relevance:

30.00%

Publisher:

Abstract:

Nanoparticle deposition behavior observed at the Darcy scale represents an average of the processes occurring at the pore scale. Hence, the effect of various pore-scale parameters on nanoparticle deposition can be understood by studying nanoparticle transport at the pore scale and upscaling the results to the Darcy scale. In this work, correlation equations for the deposition rate coefficients of nanoparticles in a cylindrical pore are developed as a function of nine pore-scale parameters: the pore radius, nanoparticle radius, mean flow velocity, solution ionic strength, viscosity, temperature, solution dielectric constant, and nanoparticle and collector surface potentials. Based on the dominant processes, the pore space is divided into three regions, namely the bulk, diffusion, and potential regions. Advection-diffusion equations for nanoparticle transport are prescribed for the bulk and diffusion regions, while the interaction between the diffusion and potential regions is included as a boundary condition, modeled as first-order reversible kinetic adsorption. Expressions for the mass transfer rate coefficients between the diffusion and potential regions are derived in terms of the interaction energy profile. Among other effects, we account for the influence of nanoparticle-collector interaction forces on nanoparticle deposition. The resulting equations are solved numerically for a range of values of the pore-scale parameters. The nanoparticle concentration profile obtained for the cylindrical pore is averaged over a moving averaging volume within the pore to obtain the 1-D concentration field. The latter is fitted to the 1-D advection-dispersion equation with an equilibrium or kinetic adsorption model to determine the values of the average deposition rate coefficients. In this study, pore-scale simulations are performed for three values of the Peclet number: Pe = 0.05, 5, and 50. We find that under unfavorable conditions, nanoparticle deposition at the pore scale is best described by an equilibrium model at low Peclet numbers (Pe = 0.05) and by a kinetic model at high Peclet numbers (Pe = 50); at an intermediate Pe (e.g., near Pe = 5), both equilibrium and kinetic models fit the 1-D concentration field. Correlation equations for the pore-averaged nanoparticle deposition rate coefficients under unfavorable conditions are derived by performing a multiple linear regression between the estimated deposition rate coefficients for a single pore and the various pore-scale parameters. The correlation equations, which follow a power-law relation in the nine pore-scale parameters, are found to be consistent with column-scale and pore-scale experimental results, and agree qualitatively with colloid filtration theory. These equations can be incorporated into pore network models to study the effect of pore-scale parameters on nanoparticle deposition at larger length scales, such as the Darcy scale.
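A minimal sketch of deriving a power-law correlation by multiple linear regression in log space, as described above. The parameter set mirrors the nine pore-scale parameters, but all names, ranges, and coefficients are synthetic assumptions, not the paper's results:

```python
import numpy as np

# Fit k_dep = A * prod(p_i ** b_i) for the pore-averaged deposition rate
# coefficient by regressing in log space: log k = log A + sum_i b_i * log p_i.
rng = np.random.default_rng(3)
n = 200
params = {                       # nine pore-scale parameters (arbitrary units)
    "pore_radius": rng.uniform(1e-6, 1e-5, n),
    "particle_radius": rng.uniform(1e-8, 1e-7, n),
    "velocity": rng.uniform(1e-5, 1e-3, n),
    "ionic_strength": rng.uniform(1e-3, 1e-1, n),
    "viscosity": rng.uniform(8e-4, 1.2e-3, n),
    "temperature": rng.uniform(283, 313, n),
    "dielectric": rng.uniform(70, 80, n),
    "psi_particle": rng.uniform(0.01, 0.05, n),
    "psi_collector": rng.uniform(0.01, 0.05, n),
}
X = np.column_stack([np.log(v) for v in params.values()])
X = np.column_stack([np.ones(n), X])            # intercept column = log A
true_b = rng.normal(0, 0.5, X.shape[1])
log_k = X @ true_b + rng.normal(0, 0.05, n)     # synthetic deposition rates

coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
print("log A =", coef[0])
for name, b in zip(params, coef[1:]):
    print(f"exponent for {name}: {b:.3f}")
```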