940 results for linear approximation method


Relevance:

80.00%

Publisher:

Abstract:

This dissertation presents a macroeconomic analysis of Brazil, focusing on the relationship between the monthly export and import volume indices and the monthly figures for GDP, the SELIC rate, and exchange rates, based on data collected for the period from January 2004 to December 2014 and on a literature review of the concepts involved in the macroeconomics of the variables studied. A case study was carried out using data from government websites for the stated period, applying the linear regression method, grounded in Pearson correlation theory, to present the results obtained over the study period. In this way, it was possible to study and analyze how the dependent (response) variables, export volume and import volume, are related to the independent (explanatory) variables: GDP, the SELIC rate, and the exchange rate. The results of this study show a moderate, negative correlation of the SELIC rate and the exchange rate with export and import volumes, whereas GDP shows a strong, positive correlation with export and import volumes.
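
As a minimal sketch of this method (pairwise Pearson correlations plus an ordinary least squares fit; the arrays below are synthetic placeholders for the 2004-2014 monthly series):

```python
# Sketch of the abstract's approach: Pearson correlations between each
# explanatory series and the response, plus an OLS fit. All data below
# are synthetic stand-ins for the 2004-2014 monthly indices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 132                                    # Jan 2004 .. Dec 2014
gdp   = rng.normal(size=n)                 # GDP (PIB) volume index
selic = rng.normal(size=n)                 # SELIC rate
fx    = rng.normal(size=n)                 # exchange rate
exports = 0.8 * gdp - 0.3 * selic - 0.3 * fx + rng.normal(scale=0.5, size=n)

# Pairwise Pearson correlations with the response variable.
for name, x in [("GDP", gdp), ("SELIC", selic), ("exchange rate", fx)]:
    r, p = stats.pearsonr(x, exports)
    print(f"{name}: r = {r:+.2f} (p = {p:.3g})")

# Multiple linear regression by least squares: exports ~ GDP + SELIC + fx.
X = np.column_stack([np.ones(n), gdp, selic, fx])
beta, *_ = np.linalg.lstsq(X, exports, rcond=None)
print("intercept and coefficients:", beta)
```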

Relevance:

80.00%

Publisher:

Abstract:

Introduction: The importance of investigating children's quality of life is directly related to the fact that many problems in adult life originate in childhood. Objective: to analyze the contribution of perceived family relationships and nutritional status to the quality of life of children in the municipality of Indaiatuba. Methods: In stage 1, a validation study of the Family APGAR instrument adapted for children aged 7 to 11 was conducted, using the test-retest technique as the reliability measure and the Child Quality of Life Assessment Scale (Escala de Avaliação de Qualidade de Vida Infantil) to assess convergent validity. In stage 2, the determinants of children's quality of life were evaluated with respect to family, nutritional, socioeconomic, and demographic factors. Data were analyzed by multiple linear regression using ordinary least squares. Results: In stage 1, the reliability analysis yielded a Spearman-Brown coefficient of 0.764. In the convergent validity analysis, Spearman's rho between the scores of the two instruments was 0.570 (p < 0.01). In stage 2, a model of the determinants of quality of life was estimated from a sample of 1,028 children aged 7 to 11 of both sexes. The independent variables explained children's quality of life at the 1% significance level (Z = 8.417), with an adjusted R² of 0.104. The child's age and perceived family relationships were statistically significant, whereas nutritional status, family size, the child's sex, social class, and the guardian's educational level were not. Conclusion: The Family APGAR instrument showed preliminary evidence of validity and reliability; age and family relationships were the variables that explained quality of life perceived from the subjective well-being perspective.
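
For reference, the Spearman-Brown coefficient reported in stage 1 is, in its two-administration (split-half/test-retest) form, computed from the correlation r between the two sets of scores:

```latex
r_{SB} = \frac{2r}{1 + r}
```

If the reported 0.764 is this two-part form, the underlying correlation between the two administrations would be roughly r = 0.764 / (2 - 0.764), or about 0.62.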

Relevance:

80.00%

Publisher:

Abstract:

Hydraulic conductivity (K) is one of the parameters controlling the magnitude of groundwater velocity and, consequently, one of the most important parameters affecting groundwater flow and solute transport, making knowledge of the spatial distribution of K essential. This work estimates hydraulic conductivity values in two distinct areas, one in the Guarani Aquifer System (SAG) and the other in the Bauru Aquifer System (SAB), using three geostatistical techniques: ordinary kriging, cokriging, and conditional simulation by turning bands. To enlarge the database of K values, a statistical treatment was applied to the known data. The mathematical interpolation method (ordinary kriging) and the stochastic method (conditional simulation by turning bands) are applied to estimate K values directly, while ordinary kriging combined with linear regression and cokriging incorporate specific capacity (Q/s) values as a secondary variable. Additionally, cell declustering was applied with each geostatistical method to test its ability to improve the methods' performance, as assessed by cross-validation. The results of these geostatistical approaches indicate that conditional simulation by turning bands with declustering and ordinary kriging combined with linear regression without declustering are the most suitable methods for the SAG (rho = 0.55) and SAB (rho = 0.44) areas, respectively. The statistical treatment and the declustering technique used in this work proved to be useful auxiliary tools for the geostatistical methods.
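
A minimal sketch of ordinary kriging, the simplest of the three techniques compared (the exponential variogram and the sample K values are illustrative; a real application fits the variogram to the measured data and, per the abstract, may add declustering weights):

```python
# Sketch of ordinary kriging: solve the kriging system built from a
# variogram model, then form the weighted estimate. Variogram parameters
# and sample data are illustrative only.
import numpy as np

def variogram(h, sill=1.0, rng_=10.0, nugget=0.05):
    """Exponential variogram model gamma(h)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_))

def ordinary_kriging(coords, values, target):
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Ordinary kriging system: variogram matrix bordered by the
    # unbiasedness (sum-of-weights = 1) row and column.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    np.fill_diagonal(A[:n, :n], 0.0)       # gamma(0) = 0
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)              # weights plus Lagrange multiplier
    return float(w[:n] @ values)

coords = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 6.0], [8.0, 8.0]])
logK   = np.log10([1e-4, 5e-4, 2e-4, 8e-5])   # K is often kriged in log space
print("estimate at (4, 4):", ordinary_kriging(coords, logK, np.array([4.0, 4.0])))
```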

Relevance:

80.00%

Publisher:

Abstract:

Deep brain stimulation (DBS) provides significant therapeutic benefit for movement disorders such as Parkinson's disease (PD). Current DBS devices lack real-time feedback (and are thus open loop), and stimulation parameters are adjusted during scheduled visits with a clinician. A closed-loop DBS system could reduce power consumption and side effects by adjusting stimulation parameters based on the patient's behavior, making behavior detection a major step in designing such systems. Various physiological signals can be used to recognize behaviors; the subthalamic nucleus (STN) local field potential (LFP) is a strong candidate signal for neural feedback because it can be recorded from the stimulation lead itself and requires no additional sensors. This thesis proposes novel detection and classification techniques for behavior recognition based on deep brain LFP; behavior detection from such signals is the vital step in developing the next generation of closed-loop DBS devices. LFP recordings from 13 subjects are used to design and evaluate our method. Recordings were performed during surgery while the subjects performed various behavioral tasks. Several techniques are used to understand how the behaviors modulate the STN: one examines the time-frequency patterns in the STN LFP during the tasks, and another measures the temporal inter-hemispheric connectivity of the STN as well as the connectivity between the STN and the prefrontal cortex (PFC). Experimental results demonstrate that different behaviors create distinct modulation patterns in the STN and its connectivity, and we use these patterns as features to classify behaviors. A method for single-trial recognition of the patient's current task is proposed, using wavelet coefficients as features and a support vector machine (SVM) as the classifier for a selection of behaviors: speech, motor, and random. The proposed method achieves 82.4% accuracy for binary classification and 73.2% for classifying three tasks. As the next step, a practical behavior detection method is proposed that uses no a priori knowledge of behavior onsets and can asynchronously detect the finger movements of PD patients. Our study indicates that there is motor-modulated inter-hemispheric connectivity between LFP signals recorded bilaterally from the STN. We use a non-linear regression method to measure this inter-hemispheric connectivity and to detect the finger movements. Experimental results using STN LFP recorded from eight PD patients demonstrate that this is a promising approach for behavior detection and for developing novel closed-loop DBS systems.
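
A minimal sketch of such a wavelet-plus-SVM pipeline, assuming PyWavelets and scikit-learn; the synthetic trials stand in for real STN LFP recordings, and choices such as the wavelet family and decomposition level are illustrative, not those of the thesis:

```python
# Sketch of the classification stage described above: discrete wavelet
# coefficients of each LFP trial as features, an SVM as the classifier.
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_features(trial, wavelet="db4", level=4):
    """Flatten the multilevel DWT coefficients of one trial into a vector."""
    coeffs = pywt.wavedec(trial, wavelet, level=level)
    return np.concatenate(coeffs)

rng = np.random.default_rng(0)
trials = rng.normal(size=(60, 1024))        # 60 single trials, 1024 samples each
labels = rng.integers(0, 2, size=60)        # e.g. speech vs. motor

X = np.vstack([wavelet_features(t) for t in trials])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```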

Relevance:

80.00%

Publisher:

Abstract:

Climatic changes are most pronounced in northern high-latitude regions, yet observational data there are sparse, both spatially and temporally, so regional-scale dynamics are not fully captured and our ability to make reliable projections is limited. In this study, a set of dynamical downscaling products was created for the period 1950 to 2100 to better understand climate change and its impacts on hydrology, permafrost, and ecosystems at a resolution suitable for northern Alaska. An ERA-interim reanalysis dataset and the Community Earth System Model (CESM) served as the forcing mechanisms in this dynamical downscaling framework, and the Weather Research and Forecasting (WRF) model, with an optimization for the Arctic (Polar WRF), served as the Regional Climate Model (RCM). The downscaled output consists of multiple climatic variables (precipitation, temperature, wind speed, dew point temperature, and surface air pressure) on a 10 km grid at three-hour intervals. The modeling products were evaluated and calibrated using a bias-correction approach. The ERA-interim forced WRF (ERA-WRF) produced reasonable climatic variables, yielding a more closely correlated temperature field than precipitation field when long-term monthly climatology was compared with its forcing and with observational data. A linear scaling method then further corrected the bias based on ERA-interim monthly climatology, and the bias-corrected ERA-WRF fields served as the reference for calibrating both the historical and the projected CESM-forced WRF (CESM-WRF) products. Biases that CESM holds over northern Alaska, such as a cold temperature bias in summer, a warm temperature bias in winter, and a wet bias in annual precipitation, persisted in the CESM-WRF runs. Linear scaling of CESM-WRF, together with the calibrated ERA-WRF run, ultimately produced high-resolution downscaled products of the Alaskan North Slope for hydrological and ecological research, with applications extending well beyond those fields. Further climatic research has been proposed, including exploration of historical and projected climatic extreme events and their possible connections to low-frequency sea-atmospheric oscillations, as well as near-surface permafrost degradation and shifts in lake ice regimes. These dynamically downscaled, bias-corrected climatic datasets provide the improved spatial and temporal resolution necessary for ongoing modeling efforts in northern Alaska focused on reconstructing and projecting hydrologic changes, ecosystem processes and responses, and permafrost thermal regimes. The dynamical downscaling methods presented in this study can also be used to create more suitable model input datasets for other sub-regions of the Arctic.
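
A minimal sketch of the linear scaling step, in the common additive form used for temperature (precipitation is typically scaled multiplicatively instead); the arrays are synthetic placeholders for monthly means:

```python
# Sketch of linear scaling bias correction: per-calendar-month corrections
# that pull the model climatology toward a reference (here ERA-interim).
# Arrays hold monthly means with shape (n_years, 12).
import numpy as np

def linear_scaling_additive(model, reference):
    """Remove the mean monthly bias: model + (ref climatology - model climatology)."""
    correction = reference.mean(axis=0) - model.mean(axis=0)   # 12 monthly offsets
    return model + correction                                   # broadcast over years

rng = np.random.default_rng(0)
era_t2m = rng.normal(loc=-10.0, scale=2.0, size=(30, 12))          # reference
wrf_t2m = era_t2m + 1.5 + rng.normal(scale=0.3, size=(30, 12))     # warm-biased run

corrected = linear_scaling_additive(wrf_t2m, era_t2m)
print("mean bias before:", (wrf_t2m - era_t2m).mean().round(2))
print("mean bias after: ", (corrected - era_t2m).mean().round(2))
```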

Relevance:

80.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

80.00%

Publisher:

Abstract:

To maximise data output from single-shot astronomical images, the rejection of cosmic rays is important. We present the results of a benchmark trial comparing various cosmic ray rejection algorithms. The procedures assess the relative performance and characteristics of each process: its efficiency in cosmic ray detection, its rate of false detections on true objects, and the quality of its image cleaning and reconstruction. The cosmic ray rejection algorithms developed by Rhoads (2000, PASP, 112, 703), van Dokkum (2001, PASP, 113, 1420), Pych (2004, PASP, 116, 148), and the IRAF task xzap by Dickinson are tested using both simulated and real data. It is found that detection efficiency is independent of the density of cosmic rays in an image, being more strongly affected by the density of real objects in the field. As expected, spurious detections and alterations to real data in the cleaning process are also significantly increased by high object densities. We find Rhoads' linear filtering method to produce the best performance in the detection of cosmic ray events; however, the popular van Dokkum algorithm exhibits the highest overall performance in terms of detection and cleaning.
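
For illustration only, here is a much-simplified detect-and-clean routine in the same spirit as the benchmarked algorithms (it is none of them): flag pixels that deviate from a median-filtered image by several robust sigma, then replace them with the local median.

```python
# Simplified illustration of the detect-and-clean structure the benchmark
# evaluates, not one of the tested algorithms.
import numpy as np
from scipy.ndimage import median_filter

def reject_cosmic_rays(image, k=5.0, size=5):
    smooth = median_filter(image, size=size)
    resid = image - smooth
    sigma = 1.4826 * np.median(np.abs(resid))   # robust noise estimate (MAD)
    mask = resid > k * sigma                    # cosmic rays are positive spikes
    cleaned = np.where(mask, smooth, image)
    return cleaned, mask

rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, size=(64, 64))
img[10, 12] += 400.0                            # inject a fake cosmic-ray hit
cleaned, mask = reject_cosmic_rays(img)
print("pixels flagged:", int(mask.sum()))
```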

Relevance:

80.00%

Publisher:

Abstract:

In this chapter, we elaborate on the well-known relationship between Gaussian processes (GP) and Support Vector Machines (SVM). We then present approximate solutions for two computational problems arising in GP and SVM. The first is the calculation of the posterior mean for GP classifiers using a 'naive' mean field approach. The second is a leave-one-out estimator for the generalization error of SVM based on a linear response method. Simulation results on a benchmark dataset show similar performance for the GP mean field algorithm and the SVM algorithm. The approximate leave-one-out estimator is found to be in very good agreement with the exact leave-one-out error.
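
For reference, the posterior mean being approximated has the standard kernel-expansion form (standard GP notation, not necessarily the chapter's):

```latex
\bar{f}(x_*) \;=\; \sum_{i=1}^{N} \alpha_i \, k(x_*, x_i),
\qquad
\boldsymbol{\alpha} = (K + \sigma^2 I)^{-1}\mathbf{y} \quad \text{(GP regression case)}
```

In GP classification the posterior is non-Gaussian, so the coefficients are not available in closed form; the naive mean field approach supplies approximate posterior averages in their place.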

Relevance:

80.00%

Publisher:

Abstract:

This paper explores the use of the optimisation procedures in SAS/OR software with application to the measurement of efficiency and productivity of decision-making units (DMUs) using data envelopment analysis (DEA) techniques. DEA, originally introduced by Charnes et al. [J. Oper. Res. 2 (1978) 429], is a linear programming method for assessing the efficiency and productivity of DMUs. Over the last two decades, DEA has gained considerable attention as a managerial tool for measuring the performance of organisations, and it has been widely used for assessing the efficiency of public and private sectors such as banks, airlines, hospitals, universities and manufacturers. As a result, new applications with more variables and more complicated models are being introduced. Following the successive development of DEA, a non-parametric productivity measure, the Malmquist index, was introduced by Fare et al. [J. Prod. Anal. 3 (1992) 85]. Employing the Malmquist index, productivity growth can be decomposed into technical change and efficiency change. SAS, for its part, is powerful software capable of solving various optimisation problems, including linear programming with all types of constraints. To facilitate the use of DEA and the Malmquist index by SAS users, a SAS/MALM code was implemented in the SAS programming language. The SAS macro developed in this paper selects the chosen variables from a SAS data file and constructs sets of linear-programming models based on the selected DEA model. An example is given to illustrate how one could use the code to measure the efficiency and productivity of organisations.
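
As an illustration of the underlying LP, here is a sketch of the input-oriented CCR multiplier model in Python (a stand-in for the paper's SAS/MALM macro; the data are made up):

```python
# Sketch of the input-oriented CCR multiplier model, solved once per DMU.
# X: inputs (n_dmu x m), Y: outputs (n_dmu x s).
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: output weights u (s of them), then input weights v (m).
    c = np.concatenate([-Y[o], np.zeros(m)])             # maximise u . y_o
    A_ub = np.hstack([Y, -X])                            # u.y_j - v.x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # normalisation: v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun                                      # optimal u . y_o

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0]])       # 3 DMUs, 2 inputs
Y = np.array([[1.0], [1.0], [1.0]])                      # single unit output
for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```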

Relevance:

80.00%

Publisher:

Abstract:

Regression problems are concerned with predicting the values of one or more continuous quantities given the values of a number of input variables. For virtually every application of regression, however, it is also important to have an indication of the uncertainty in the predictions. Such uncertainties are expressed in terms of error bars, which specify the standard deviation of the distribution of predictions about the mean. Accurate estimation of error bars is of practical importance, especially where safety and reliability are at issue. The Bayesian view of regression leads naturally to two contributions to the error bars. The first arises from the intrinsic noise on the target data, while the second comes from the uncertainty in the values of the model parameters, which manifests itself in the finite width of the posterior distribution over the space of these parameters. The Hessian matrix, which involves the second derivatives of the error function with respect to the weights, is needed for implementing the Bayesian formalism in general and for estimating the error bars in particular. A study of different methods for evaluating this matrix is given, with special emphasis on the outer product approximation method. The contribution of the uncertainty in model parameters to the error bars is a finite-data-size effect, which becomes negligible as the number of data points in the training set increases. A study of this contribution is given in relation to the distribution of data in input space. It is shown that the addition of data points to the training set can only reduce the local magnitude of the error bars or leave it unchanged. Using the asymptotic limit of an infinite data set, it is shown that the error bars have an approximate relation to the density of data in input space.
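
In the standard formulation behind this abstract (notation ours), the two contributions combine as

```latex
\sigma_y^2(\mathbf{x}) \;=\; \sigma_\nu^2 \;+\; \mathbf{g}^{\mathsf{T}} \mathbf{A}^{-1} \mathbf{g},
\qquad \mathbf{g} = \nabla_{\mathbf{w}}\, y(\mathbf{x};\mathbf{w}^{\mathrm{MP}}),
```

where the first term is the intrinsic noise variance and A is the Hessian of the regularized error function evaluated at the most probable weights. The outer product approximation replaces the exact Hessian of the data term with

```latex
\mathbf{H} \;\approx\; \frac{1}{\sigma_\nu^2} \sum_{n=1}^{N} \mathbf{g}_n \mathbf{g}_n^{\mathsf{T}},
\qquad \mathbf{g}_n = \nabla_{\mathbf{w}}\, y(\mathbf{x}_n;\mathbf{w}^{\mathrm{MP}}),
```

dropping the second-derivative terms, which become small when the model fits the data well.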

Relevance:

80.00%

Publisher:

Abstract:

Recent investigations into cross-country convergence follow Mankiw, Romer, and Weil (1992) in using a log-linear approximation to the Swan-Solow growth model to specify regressions. These studies tend to assume a common and exogenous technology. In contrast, the technology catch-up literature endogenises the growth of technology. The use of capital stock data renders the approximations and over-identification of the Mankiw model unnecessary and enables us, using dynamic panel estimation, to estimate the separate contributions of diminishing returns and technology transfer to the rate of conditional convergence. We find that both effects are important.
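
For reference, the log-linear approximation in question takes the standard form (our transcription)

```latex
\ln y_{t} - \ln y_{t-\tau} \;=\; \bigl(1 - e^{-\lambda \tau}\bigr)\bigl(\ln y^{*} - \ln y_{t-\tau}\bigr),
```

where lambda is the rate of conditional convergence and y* the steady-state income level; regressing growth on lagged income and the determinants of y* identifies lambda. With capital stock data, the dynamic panel approach described above can estimate the diminishing-returns and technology-transfer channels separately rather than relying on this approximation.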

Relevance:

80.00%

Publisher:

Abstract:

* The author was supported by NSF Grant No. DMS 9706883.

Relevance:

80.00%

Publisher:

Abstract:

Heterogeneous multi-core FPGAs contain different types of cores, which can improve efficiency when paired with an effective online task scheduler. However, it is not easy to match tasks to the right cores when there are multiple objectives or dozens of cores, and inappropriate scheduling can cause hot spots that decrease the reliability of the chip. Our research therefore builds a simulation platform to evaluate scheduling algorithms on a variety of architectures. On this platform, we provide an online scheduler that uses a multi-objective evolutionary algorithm (EA). Comparing the EA with current algorithms such as Predictive Dynamic Thermal Management (PDTM) and Adaptive Temperature Threshold Dynamic Thermal Management (ATDTM), we find several drawbacks in previous work. First, current algorithms are overly dependent on manually set, constant parameters. Second, they neglect optimization for heterogeneous architectures. Third, they are single-objective methods, or use a linear weighting method to convert a multi-objective optimization into a single-objective one. Unlike these algorithms, the EA is adaptive and does not require its parameters to be reset when workloads switch from one to another. The EA also improves performance on heterogeneous architectures, and it can deliver a Pareto front of efficient trade-offs when multiple objectives must be balanced, as the sketch below illustrates.
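
A minimal sketch of the multi-objective machinery involved: a Pareto dominance test and a non-dominated filter over candidate schedules (illustrative only, not the platform's scheduler; the objectives are assumed to be minimised).

```python
# Sketch of the multi-objective core of such a scheduler: a Pareto
# dominance test and a non-dominated filter over candidate schedules.
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(points):
    front = []
    for i, p in enumerate(points):
        if not any(dominates(q, p) for j, q in enumerate(points) if j != i):
            front.append(p)
    return np.array(front)

# Candidate schedules scored on (peak temperature [C], makespan [ms]).
candidates = np.array([[72.0, 14.0], [68.0, 18.0], [75.0, 12.0], [73.0, 15.0]])
print(pareto_front(candidates))   # [73, 15] is dominated by [72, 14]
```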

Relevance:

80.00%

Publisher:

Abstract:

Underwater sound is very important in the field of oceanography, where it is used for remote sensing in much the same way that radar is used in atmospheric studies. One way to model sound propagation in the ocean mathematically is the parabolic-equation method, a technique that allows range-dependent environmental parameters. More importantly, this method can model sound transmission where the source emits either a pure tone or a short pulse of sound. Based on the parabolic approximation method and using the split-step Fourier algorithm, a computer model for underwater sound propagation was designed and implemented. This computer model differs from previous models in its use of the interactive mode, structured programming, modular design, and state-of-the-art graphics displays. In addition, the model maximizes the efficiency of computer time through synchronization of loosely coupled dual processors and the design of a restart capability. Since the model is designed for adaptability and for users with limited computer skills, it is anticipated that it will have many applications in the scientific community.
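
A minimal sketch of the split-step Fourier march for the standard parabolic equation (a toy sound-speed profile and a Gaussian starter field; not the dissertation's dual-processor model):

```python
# Sketch of split-step Fourier marching of the standard parabolic equation:
# diffraction is handled in the vertical-wavenumber domain, refraction in
# the depth domain. Grid sizes and the sound-speed profile are illustrative.
import numpy as np

c0, f = 1500.0, 100.0                 # reference sound speed (m/s), frequency (Hz)
k0 = 2.0 * np.pi * f / c0             # reference wavenumber
nz, dz, dr = 1024, 0.5, 10.0          # depth grid and range step
z = np.arange(nz) * dz
kz = 2.0 * np.pi * np.fft.fftfreq(nz, d=dz)

n_idx = c0 / (1500.0 + 0.02 * z)      # index of refraction from a toy c(z) profile
psi = np.exp(-((z - 100.0) / 10.0) ** 2).astype(complex)   # Gaussian starter field

def step(psi):
    # Diffraction half of the operator, applied in the kz domain...
    psi = np.fft.ifft(np.exp(-1j * kz**2 * dr / (2.0 * k0)) * np.fft.fft(psi))
    # ...then the refraction (environment) half, applied in depth.
    return np.exp(1j * k0 * (n_idx**2 - 1.0) * dr / 2.0) * psi

for _ in range(100):                  # march 1 km in range
    psi = step(psi)
print("peak |psi| after 1 km:", float(np.abs(psi).max()))
```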