921 results for Correlation model
Abstract:
This paper proposes a linear quantile regression analysis method for longitudinal data that combines the between- and within-subject estimating functions, thereby incorporating the correlations between repeated measurements. The proposed method therefore yields more efficient parameter estimates than estimating functions based on an independence working model. To reduce the computational burden, the induced smoothing method is introduced to obtain parameter estimates and their variances. Under some regularity conditions, the estimators derived by the induced smoothing method are consistent and asymptotically normally distributed. A number of simulation studies are carried out to evaluate the performance of the proposed method. The results indicate that the efficiency gain of the proposed method is substantial, especially when within-subject correlations are strong. Finally, a dataset from audiology growth research is used to illustrate the proposed methodology.
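The induced-smoothing idea (replacing the indicator in the quantile score with a normal CDF so the estimating equation becomes smooth and Newton-solvable) can be sketched, for the independence working model the paper improves upon, as follows. The bandwidth choice and starting value are illustrative assumptions, not the paper's exact specification:

```python
import math
import numpy as np

def smoothed_qr(X, y, tau=0.5, h=None, iters=50):
    """Quantile regression via an induced-smoothing style estimating
    equation under an independence working model.  The indicator in the
    usual quantile score is replaced by a normal CDF with bandwidth h,
    giving a smooth system solvable by Newton iteration."""
    n, p = X.shape
    h = h or 1.0 / math.sqrt(n)  # a common order-n^{-1/2} bandwidth (assumption)
    Phi = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2))))
    phi = lambda z: np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting value
    for _ in range(iters):
        r = (X @ beta - y) / h
        U = X.T @ (tau - Phi(r))               # smoothed estimating function
        A = (X * (phi(r) / h)[:, None]).T @ X  # minus its Jacobian
        step = np.linalg.solve(A, U)
        beta = beta + step
        if np.max(np.abs(step)) < 1e-8:
            break
    return beta
```

Because the smoothed score is differentiable, the same machinery also yields sandwich variance estimates without resampling, which is the computational point of the induced-smoothing approach.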
Abstract:
Attention is a critical mechanism for visual scene analysis. By means of attention, it is possible to break the analysis of a complex scene down into the analysis of its parts through a selection process. Empirical studies demonstrate that attentional selection operates on visual objects as wholes. We present a neurocomputational model of object-based selection in the framework of oscillatory correlation. By segmenting an input scene and integrating the segments with their conspicuity obtained from a saliency map, the model selects salient objects rather than salient locations. The proposed system is composed of three modules: a saliency map providing saliency values of image locations, image segmentation for breaking the input scene into a set of objects, and object selection, which allows one object of the scene to be selected at a time. The object selection system has been applied to real gray-level and color images, and the simulation results show its effectiveness.
Abstract:
Over the last decades, the analysis of the transmission of international financial events has become the subject of many academic studies focused on multivariate volatility models. The goal of this study is to evaluate financial contagion between stock market returns. The econometric approach employed was originally presented by Pelletier (2006) and is named Regime Switching Dynamic Correlation (RSDC). This methodology combines the Constant Conditional Correlation (CCC) model proposed by Bollerslev (1990) with the Markov regime-switching model suggested by Hamilton and Susmel (1994). A modification was made to the original RSDC model: the GJR-GARCH model formulated by Glosten, Jagannathan and Runkle (1993) was introduced into the equation of the conditional univariate variances so that asymmetric effects in volatility could be captured. The database was built from the series of daily closing stock market indices of the United States (SP500), the United Kingdom (FTSE100), Brazil (IBOVESPA) and South Korea (KOSPI) for the period from 02/01/2003 to 09/20/2012. Throughout the work the methodology was compared with others more widespread in the literature, and the RSDC model with two regimes was found to be the most appropriate for the selected sample. The set of results provides evidence for the existence of financial contagion between the markets of the four countries under the World Bank's so-called "very restrictive" definition of financial contagion. Such a conclusion should be evaluated carefully given the wide diversity of definitions of contagion in the literature.
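The asymmetric variance equation that the study plugs into the RSDC framework can be sketched as a minimal GJR-GARCH(1,1) recursion. Parameter values below are made up for illustration, not the paper's estimates:

```python
import numpy as np

def gjr_garch_variance(eps, omega, alpha, gamma, beta):
    """Conditional variance path of a GJR-GARCH(1,1):
        sigma2_t = omega + (alpha + gamma * 1[eps_{t-1} < 0]) * eps_{t-1}^2
                   + beta * sigma2_{t-1}.
    The gamma term lets negative shocks ('bad news') raise volatility
    more than positive shocks of equal size."""
    sigma2 = np.empty(len(eps))
    # initialize at the unconditional variance (E[1[eps<0]] = 1/2 assumed)
    sigma2[0] = omega / (1.0 - alpha - gamma / 2.0 - beta)
    for t in range(1, len(eps)):
        leverage = gamma * (eps[t - 1] < 0)
        sigma2[t] = omega + (alpha + leverage) * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```

Feeding each index's returns through such a recursion yields the standardized residuals on which the regime-switching correlation structure is then estimated.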
Abstract:
Reviewing the definition and measurement of speculative bubbles in the context of contagion, this paper analyses the DotCom bubble in American and European equity markets using the dynamic conditional correlation (DCC) model proposed by Engle and Sheppard (2001), on the one hand as an econometric explanation and on the other hand through behavioral finance as a psychological explanation. Contagion is defined in this context as a statistical break in the computed DCCs, as measured by shifts in their means and medians. Surprisingly, contagion is lower during price bubbles; the main finding indicates the presence of contagion between the different indices of the two continents and demonstrates the presence of structural changes during the financial crisis.
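The DCC recursion underlying the estimated correlations can be sketched as follows, given standardized residuals from univariate GARCH fits. The parameter values in the test are illustrative; the paper's estimates are not given in the abstract:

```python
import numpy as np

def dcc_correlations(z, a, b):
    """Dynamic conditional correlations (Engle-Sheppard style) from
    standardized residuals z (T x k):
        Q_t = (1 - a - b) * Qbar + a * z_{t-1} z_{t-1}' + b * Q_{t-1}
        R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}
    where Qbar is the unconditional correlation target."""
    T, k = z.shape
    Qbar = (z.T @ z) / T
    Q = Qbar.copy()
    R = np.empty((T, k, k))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)        # rescale Q_t into a correlation matrix
        Q = (1.0 - a - b) * Qbar + a * np.outer(z[t], z[t]) + b * Q
    return R
```

A "break" in the sense of the paper would then be tested on the time series of off-diagonal entries of `R`, comparing their means and medians across sub-periods.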
Abstract:
Introduction: In vitro-in vivo pharmacokinetic correlation (IVIVC) models are a fundamental part of the drug discovery and development process. The ability to accurately predict the in vivo pharmacokinetic profile of a drug from in vitro observations can have several applications during a successful development process. Objective: To develop a comprehensive model to predict the in vivo absorption of antiretroviral drugs based on permeability studies and in vitro and in vivo solubility, and to demonstrate its correlation with the pharmacokinetic profile in humans. Methods: Analytical tools to test the biopharmaceutical properties of stavudine, lamivudine and zidovudine were developed. Dissolution kinetics, permeability in Caco-2 cells and absorption pharmacokinetics in rabbits and healthy volunteers were evaluated. Results: The cumulative areas under the curve (AUC) obtained in the Caco-2 permeability study, the dissolution study and the rabbit pharmacokinetics correlated with the cumulative AUC values in humans. These results demonstrated a direct relation between the in vitro data and absorption, both in humans and in the in vivo model. Conclusions: The analytical methods and procedures applied to the development of an IVIVC model showed a strong correlation among themselves. These IVIVC models are proposed as alternative, cost-effective methods to evaluate the biopharmaceutical properties that determine the bioavailability of a drug, with applications in the development process, quality assurance, bioequivalence studies and pharmacovigilance.
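The core of a level-A-style IVIVC reduces to regressing the cumulative in vivo AUC fraction on the in vitro fraction dissolved at matched time points. A sketch with made-up illustrative numbers (not the study's data):

```python
import numpy as np

# Hypothetical fraction dissolved in vitro (%) and fraction of the
# cumulative in vivo AUC (%) at matched time points -- illustrative
# numbers only, not data from the study.
fd_vitro = np.array([12.0, 31.0, 55.0, 74.0, 88.0, 96.0])
f_auc_vivo = np.array([10.0, 28.0, 52.0, 70.0, 85.0, 94.0])

# Linear level-A correlation: f_auc_vivo ~ slope * fd_vitro + intercept
slope, intercept = np.polyfit(fd_vitro, f_auc_vivo, 1)
pred = slope * fd_vitro + intercept
ss_res = np.sum((f_auc_vivo - pred) ** 2)
ss_tot = np.sum((f_auc_vivo - f_auc_vivo.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

A slope near 1 with a high R-squared is what makes the in vitro dissolution curve usable as a surrogate for the in vivo absorption profile.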
Abstract:
Forecasts of volatility and correlation are important inputs into many practical financial problems. Broadly speaking, there are two ways of generating forecasts of these variables. Firstly, time-series models apply a statistical weighting scheme to historical measurements of the variable of interest. The alternative methodology extracts forecasts from the market-traded value of option contracts. An efficient options market should be able to produce superior forecasts, as it utilises a larger information set comprising not only historical information but also the market equilibrium expectation of options market participants. While much research has been conducted into the relative merits of these approaches, this thesis extends the literature along several lines through three empirical studies. Firstly, it is demonstrated that there are statistically significant benefits to adjusting implied volatility for the volatility risk premium for the purposes of univariate volatility forecasting. Secondly, high-frequency option-implied measures are shown to produce superior forecasts of the stochastic component of intraday volatility, which in turn lead to superior forecasts of total intraday volatility. Finally, realised and option-implied measures of equicorrelation are shown to dominate measures based on daily returns.
Abstract:
Solar-plus-heat-pump systems are often very complex in design, sometimes with special heat pump arrangements and control. Detailed heat pump models can therefore make system simulations very slow while still not matching real in-system heat pump performance very accurately. The idea here is to start from a standard measured performance map of test points for a heat pump according to EN 14825 and then determine characteristic parameters for a simplified correlation-based model of the heat pump. By plotting heat pump test data in different ways, including in power input and output form and not only as COP, a simplified relation could be seen. Using the same methodology as in the QDT part of the EN 12975 collector test standard, it could be shown that a very simple model describes the heat pump test data very accurately, by identifying 4 parameters in the correlation equation found.
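The identification step can be sketched as a least-squares fit over the map points. The abstract identifies four parameters but not the functional form, so a bilinear surface in source and sink temperature is assumed here purely for illustration:

```python
import numpy as np

def fit_heat_pump_map(t_source, t_sink, power):
    """Least-squares fit of a 4-parameter correlation to heat-pump map
    points (e.g. an EN 14825 test matrix).  Assumed illustrative form:
        P = c0 + c1*Ts + c2*Tk + c3*Ts*Tk
    where Ts is the source temperature and Tk the sink temperature."""
    A = np.column_stack([np.ones_like(t_source), t_source, t_sink,
                         t_source * t_sink])
    coeffs, *_ = np.linalg.lstsq(A, power, rcond=None)
    return coeffs
```

The same fit can be run once for heating capacity and once for electrical power input, so that COP at any operating point falls out as the ratio of the two fitted surfaces, which is why the power-form plots in the paper reveal a simpler relation than COP itself.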
Abstract:
Many species of bat use ultrasonic frequency-modulated (FM) pulses to measure the distance to objects by timing the emission and reception of each pulse. Echolocation is mainly used in flight. Since the flight speed of bats often exceeds 1% of the speed of sound, Doppler effects will lead to compression of the time between emission and reception as well as an elevation of the echo frequencies, resulting in a distortion of the perceived range. This paper describes the consequences of these Doppler effects on the ranging performance of bats using different pulse designs. The analysis assumes that bats have very accurate ranging resolution, which is feasible with a filterbank receiver. By modeling two receiver types, it was first established that the effects of Doppler compression are virtually independent of the receiver type. A cross-correlation model was then used to investigate the effects of flight speed on Doppler tolerance and range-Doppler coupling separately. The paper further shows how pulse duration, bandwidth, function type, and harmonics influence Doppler tolerance and range-Doppler coupling, illustrating the influence of each signal parameter with calls of several bat species. It is argued that range-Doppler coupling is a significant source of error in bat echolocation, and various strategies bats could employ to deal with this problem, including the use of range-rate information, are discussed.
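The cross-correlation ranging model, and the bias that Doppler compression introduces, can be sketched as follows. The call parameters (3 ms linear chirp, 80 to 20 kHz) and the flight speed are illustrative assumptions, not values from the paper:

```python
import numpy as np

def estimate_range(call, echo, fs, c=343.0):
    """Matched-filter range estimate: cross-correlate the echo with the
    emitted call and read the two-way delay off the correlation peak."""
    xc = np.correlate(echo, call, mode="full")
    lag = np.argmax(np.abs(xc)) - (len(call) - 1)
    return 0.5 * c * lag / fs

fs = 250_000                                       # sample rate, Hz
t = np.arange(0, 0.003, 1 / fs)                    # 3 ms call
call = np.sin(2 * np.pi * (80e3 * t - 10e6 * t ** 2))  # 80 -> 20 kHz chirp

true_range = 1.5                                   # metres
delay = int(round(2 * true_range / 343.0 * fs))

# Stationary target: the echo is a pure attenuated delay of the call.
echo_still = np.concatenate([np.zeros(delay), 0.3 * call])

# Flying bat (~6 m/s, ~1.7% of the speed of sound): the echo is
# Doppler-compressed in time, which shifts the correlation peak
# (range-Doppler coupling) and biases the range estimate.
v = 6.0
eta = (343.0 - v) / (343.0 + v)                    # two-way compression factor
compressed = np.interp(t / eta, t, call, left=0.0, right=0.0)
echo_moving = np.concatenate([np.zeros(delay), 0.3 * compressed])
```

For a linear FM pulse the coupling bias is roughly the Doppler frequency shift divided by the sweep rate, which is why the paper can trade off pulse duration and bandwidth against ranging error.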
Abstract:
The importance of modelling correlation has long been recognised in the field of portfolio management, with large-dimensional multivariate problems increasingly becoming the focus of research. This paper provides a straightforward and commonsense approach toward investigating a number of models used to generate forecasts of the correlation matrix for large-dimensional problems. We find evidence in favour of assuming equicorrelation across various portfolio sizes, particularly during times of crisis. During periods of market calm, however, the suitability of the constant conditional correlation model cannot be discounted, especially for large portfolios. A portfolio allocation problem is used to compare forecasting methods: the global minimum variance portfolio and the Model Confidence Set serve as the basis of comparison, while portfolio weight stability and relative economic value are also considered.
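The comparison machinery rests on the global minimum variance (GMV) portfolio; under the equicorrelation assumption, the covariance forecast and the resulting weights can be sketched as follows (hypothetical inputs, for illustration only):

```python
import numpy as np

def equicorrelation_cov(vols, rho):
    """Covariance matrix under equicorrelation: every pair of assets
    shares the same correlation rho."""
    k = len(vols)
    R = np.full((k, k), rho)
    np.fill_diagonal(R, 1.0)
    return np.outer(vols, vols) * R

def gmv_weights(cov):
    """Global minimum variance portfolio:
        w = cov^{-1} 1 / (1' cov^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()
```

Each correlation forecasting model plugs a different `cov` into `gmv_weights`; the realised variance of the resulting portfolios (and the stability of the weights over time) is then what the Model Confidence Set compares.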
Abstract:
The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
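For a linear marginal model, a single GEE with an exchangeable working correlation reduces to one weighted least-squares solve. The sketch below shows that one working model only, not the paper's hybrid empirical-likelihood combination of several GEEs:

```python
import numpy as np

def gee_linear(groups, rho):
    """GEE estimate for a linear marginal model under an exchangeable
    working correlation (corr(y_ij, y_ik) = rho for j != k; unit
    variances assumed for simplicity):
        beta = (sum_i X_i' R^{-1} X_i)^{-1} sum_i X_i' R^{-1} y_i
    where the sum runs over subjects i with design X_i and outcomes y_i."""
    p = groups[0][0].shape[1]
    A = np.zeros((p, p))
    b = np.zeros(p)
    for X, y in groups:
        m = len(y)
        R = np.full((m, m), rho)       # exchangeable working correlation
        np.fill_diagonal(R, 1.0)
        Rinv = np.linalg.inv(R)
        A += X.T @ Rinv @ X
        b += X.T @ Rinv @ y
    return np.linalg.solve(A, b)
```

The hybrid method of the paper would stack the estimating functions from several such working structures (independence, exchangeable, AR(1), ...) and weight them via empirical likelihood, rather than committing to one `rho` structure up front.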
Abstract:
The A(m) index and the molecular connectivity index were used to study the photoionization sensitivity of some organic compounds in gas chromatography. A structure-property relationship analysis between the photoionization sensitivity of the compounds and the A(m) or molecular connectivity indices was carried out, and a genetic algorithm was used to build the correlation model. The results demonstrate that the property can be described by both the A(m) and molecular connectivity indices, and that the mathematical model obtained by the genetic algorithm was better than that obtained by multivariate regression analysis.
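A genetic algorithm for descriptor selection of the kind described can be sketched as follows. This is a generic subset-selection GA on synthetic data; the paper's encoding, fitness function and descriptor set are not given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(mask, X, y):
    """Score a 0/1 descriptor mask: negative residual sum of squares of
    an OLS fit on the selected columns, minus a parsimony penalty."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return -(resid @ resid) - 0.1 * mask.sum()

def ga_select(X, y, pop_size=30, gens=40, p_mut=0.05):
    """Evolve descriptor masks by elitist selection, single-point
    crossover and bit-flip mutation."""
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, (pop_size, n_feat))
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in pop])
        pop = pop[np.argsort(scores)[::-1]]          # best first
        elite = pop[: pop_size // 2]                 # survivors, kept unchanged
        children = []
        for _ in range(pop_size - len(elite)):
            pa = elite[rng.integers(len(elite))]
            pb = elite[rng.integers(len(elite))]
            cut = rng.integers(1, n_feat)
            child = np.concatenate([pa[:cut], pb[cut:]])
            flip = rng.random(n_feat) < p_mut
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([elite, np.array(children)])
    scores = np.array([fitness(m, X, y) for m in pop])
    return pop[np.argmax(scores)]
```

The appeal over stepwise multivariate regression, as in the abstract's comparison, is that the GA searches subsets jointly instead of adding descriptors one at a time.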
Abstract:
In the production tail of an oilfield, water cut is very high in thick channel-sand oil reservoirs, but recovery efficiency is relatively low and recoverable remaining oil reserves are abundant, so these reserves are a potential target for additional development. The remaining oil is generally distributed in accumulations in certain areas, controlled by the reservoir architecture, chiefly the lateral-accretion shale beddings in point bars; the study of reservoir architecture and of the remaining-oil distribution patterns it controls is therefore very significant. In this paper, taking the Minghuazhen Formation of the Gangxi oilfield as a case and using the methods of hierarchical analysis, pattern fitting and multidimensional interaction, the architecture of the meandering-river reservoir is precisely dissected, and the remaining-oil distribution patterns controlled by the different architecture hierarchies are summarized, which will help to guide the additional development of oilfields. The study is significant not only for remaining-oil forecasting but also for the theoretical development of reservoir geology. With knowledge of sequence correlation and a fluvial correlation model, and taking many factors into account, such as the combination of well and seismic data, hierarchical control, sedimentary-facies constraints, performance verification and 3-D closure, an accurate sequence framework of the study area was established. On the basis of high-resolution stratigraphic correlation, single layers and oil sand bodies were correlated within this framework, and four architecture hierarchies were identified: composite channels, single channels, point bars and lateral-accretion sand bodies. The results indicate that the Minghuazhen Formation of the Gangxi oilfield is dominated by meandering-river deposition, including two types of channel sand body, narrow-band and wide-band, each with different characteristics of lateral facies variation.
Based on the identification of composite channels, and according to the spatial combination patterns and identification marks of single channels, combined with channel sand-body distribution and tracer data, single-channel sand bodies were identified. Point-bar scales in the study area were predicted from an empirical formula, and three identification marks were summarized: a positive rhythm in the depositional sequence, maximum sand thickness, and proximity to the abandoned channel; on this basis, point bars were identified. Building on point-bar recognition, quantitative models of the inner point-bar architecture were established. Taking the lateral-accretion sand bodies and lateral-accretion shale beddings observed in single wells as the foundation, the quantitative inner-point-bar architecture models as guidance, and the tracer results as control, the lateral-accretion sand bodies and shale beddings were predicted between wells, and the inner architecture of the point bars was thereby dissected. 3-D structural, facies and petrophysical-property models were set up, reproducing the spatial distribution characteristics of the sedimentary facies and petrophysical properties. On the basis of the reservoir-architecture analysis and production performance data, the remaining-oil distribution patterns controlled by the different hierarchy architecture units (stacked channels, single channels and the inner architecture of point bars) were summarized, which will help to guide the additional development of oilfields.
Abstract:
One of the great puzzles in the psychology of visual perception is that the visual world appears to be a coherent whole despite our viewing it through a temporally discontinuous series of eye fixations. The investigators attempted to explain this puzzle from the perspective of sequential visual information integration. In recent years, investigators have hypothesized that information maintained in visual short-term memory (VSTM) gradually becomes a visual mental image during the delay in the visual buffer and is integrated with currently perceived information. Some preliminary studies have investigated the integration of VSTM contents with visual percepts, but further research is required to answer several questions about the spatial-temporal characteristics, information representation and mechanism of integrating sequential visual information. Based on the theory of similarity between visual mental imagery and visual perception, this research (comprising three studies) employed the temporal integration paradigm and the empty-cell localization task to further explore the spatial-temporal characteristics, information representation and mechanism of integrating sequential visual information (sequential arrays). Study 1 explored the temporal characteristics of sequential visual information integration by examining the effects of the encoding time of sequential stimuli on integration. Study 2 explored the spatial characteristics by investigating the effects of changes in spatial characteristics on integration. Study 3 explored the representation of information maintained in VSTM and the integration mechanism, employing behavioral experiments and eye-tracking technology.
The results indicated that: (1) Sequential arrays could be integrated without strategic instruction. Increasing the duration of the first array improved performance, whereas increasing the duration of the second array did not. The temporal correlation model could not explain sequential-array integration under long-ISI conditions. (2) Stimulus complexity influenced not only overall performance on sequential arrays but also the ISI values at which performance reached asymptote. Sequential arrays could still be integrated when their spatial characteristics changed. During the ISI, constructing and manipulating the visual mental image of array 1 were two separate processing phases. (3) While integrating sequential arrays, people represented the pattern constituted by the object images maintained in VSTM, and the topological characteristics of the object images influenced fixation location. The image-perception integration hypothesis was supported when the number of dots in array 1 was smaller than the number of empty cells, and the convert-and-compare hypothesis was supported when it was equal or greater. These findings not only further our understanding of the process of sequential visual information integration but also have significant practical application in the design of visual interfaces.