808 results for Data portal performance
Abstract:
For over twenty years researchers have been recommending that investors diversify their portfolios by adding direct real estate. Based on the tenets of modern portfolio theory (MPT), investors are told that the primary reason they should include direct real estate is that they will enjoy decreased volatility (risk) through increased diversification. However, the MPT methodology hides where this reduction in risk originates. To overcome this deficiency we use a four-quadrant approach to break down the co-movement between direct real estate and equities and bonds into negative and positive periods. Then, using data for the last 25 years, we show that for about 70% of the time a holding in direct real estate would have hurt portfolio returns, i.e. when the other assets showed positive performance. In other words, for only about 30% of the time would a holding in direct real estate lead to improvements in portfolio returns. However, this increase in performance occurs when the alternative asset showed negative returns. In addition, adding direct real estate always leads to reductions in portfolio risk, especially on the downside. In other words, although adding direct real estate helps the investor to avoid large losses, it also reduces the potential for large gains. Thus, if the goal of the investor is offsetting losses, then the results show that direct real estate would have been of some benefit. So, in answer to the question of when direct real estate improves portfolio performance, the answer is on the downside, i.e. when it is most needed.
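For illustration, a minimal sketch of how such a four-quadrant classification of co-movement might be computed from two paired return series; the function name, the simulated inputs and the split at zero are assumptions, not the study's code.

```python
import numpy as np

def four_quadrant_shares(real_estate_returns, other_asset_returns):
    """Share of periods falling in each sign-based quadrant of co-movement."""
    re = np.asarray(real_estate_returns)
    other = np.asarray(other_asset_returns)
    return {
        "both positive": float(np.mean((re > 0) & (other > 0))),
        "real estate up, other down": float(np.mean((re > 0) & (other <= 0))),
        "real estate down, other up": float(np.mean((re <= 0) & (other > 0))),
        "both negative": float(np.mean((re <= 0) & (other <= 0))),
    }

# Purely illustrative example with simulated quarterly returns.
rng = np.random.default_rng(0)
print(four_quadrant_shares(rng.normal(0.01, 0.03, 100), rng.normal(0.02, 0.06, 100)))
```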
Abstract:
Models of normal word production are well specified about the effects of frequency of linguistic stimuli on lexical access, but are less clear regarding the same effects on later stages of word production, particularly word articulation. In aphasia, this lack of specificity of downstream frequency effects is even more noticeable because there is a relatively limited amount of data on the time course of frequency effects for this population. This study begins to fill this gap by comparing the effects of variation of word frequency (lexical, whole word) and bigram frequency (sub-lexical, within word) on word production abilities in ten normal speakers and eight individuals with mild-to-moderate aphasia. In an immediate repetition paradigm, participants repeated single monosyllabic words in which word frequency (high or low) was crossed with bigram frequency (high or low). Indices for mapping the time course of these effects included reaction time (RT) for linguistic processing and motor preparation, and word duration (WD) for speech motor performance (word articulation time). The results indicated that individuals with aphasia had significantly longer RT and WD compared to normal speakers. RT showed a significant main effect only for word frequency (i.e., high-frequency words had shorter RT). WD showed significant main effects of word and bigram frequency; however, contrary to our expectations, high-frequency items had longer WD. Further investigation of WD revealed that, independent of the influence of word and bigram frequency, vowel type (tense or lax) had the expected effect on WD. Moreover, individuals with aphasia differed from control speakers in their ability to implement tense vowel duration, even though they could produce an appropriate distinction between tense and lax vowels. The results highlight the importance of using temporal measures to identify subtle deficits in linguistic and speech motor processing in aphasia, the crucial role of the phonetic characteristics of the stimulus set in studying speech production, and the need for language production models to account more explicitly for word articulation.
Abstract:
The performance of flood inundation models is often assessed using satellite-observed data; however, these data have inherent uncertainty. In this study we assess the impact of this uncertainty when calibrating a flood inundation model (LISFLOOD-FP) for a flood event in December 2006 on the River Dee, North Wales, UK. The flood extent is delineated from an ERS-2 SAR image of the event using an active contour model (snake), and water levels at the flood margin are calculated through intersection of the shoreline vector with LiDAR topographic data. Gauged water levels are used to create a reference water surface slope for comparison with the satellite-derived water levels. Residuals between the satellite-observed data points and those from the reference line are spatially clustered into groups of similar values. We show that model calibration achieved using pattern matching of observed and predicted flood extent is negatively influenced by this spatial dependency in the data. By contrast, model calibration using water elevations produces realistic calibrated optimum friction parameters even when spatial dependency is present. To test the impact of removing spatial dependency, a new method of evaluating flood inundation model performance is developed using multiple random subsamples of the water surface elevation data points. By testing for spatial dependency using Moran’s I, multiple subsamples of water elevations that have no significant spatial dependency are selected. The model is then calibrated against these data and the results averaged. This gives a near-identical result to calibration using spatially dependent data, but has the advantage of being a statistically robust assessment of model performance in which we can have more confidence. Moreover, by using the variations found in the subsamples of the observed data it is possible to assess the effects of observational uncertainty on the assessment of flooding risk.
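A minimal sketch of the subsampling idea described above, assuming a simple inverse-distance spatial weights matrix and a plain threshold on |I| in place of a formal significance test; the function names and inputs are hypothetical, not the study's implementation.

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I for observed values given a spatial weights matrix (zero diagonal)."""
    z = values - values.mean()
    return (len(values) / weights.sum()) * (weights * np.outer(z, z)).sum() / (z ** 2).sum()

def inverse_distance_weights(coords):
    """Simple inverse-distance spatial weights between shoreline points."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # exclude self-pairs
    return 1.0 / d

def independent_subsamples(coords, levels, n_points, n_draws, i_threshold, seed=0):
    """Draw random subsamples of shoreline water levels and keep those whose Moran's I
    suggests no strong spatial dependency; model performance would then be evaluated
    against each kept subsample and the results averaged."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_draws):
        idx = rng.choice(len(levels), size=n_points, replace=False)
        if abs(morans_i(levels[idx], inverse_distance_weights(coords[idx]))) < i_threshold:
            kept.append(idx)
    return kept
```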
Abstract:
This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF that contained 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 ml/min and 50 degrees C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epigallocatechin gallate, amentoflavone). These GPC-predicted molecular weights represented a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins.
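A sketch of the conventional GPC calibration step implied above, fitting log molecular weight against retention time; the numbers below are placeholders rather than the published calibration data.

```python
import numpy as np

# Placeholder calibration pairs (retention time in min, molecular weight in Da);
# illustrative values only, not the measured standards.
retention_time = np.array([14.6, 15.3, 16.1, 16.8, 17.5])
molecular_weight = np.array([1701.0, 940.0, 634.0, 458.0, 290.0])

# Conventional GPC calibration: log10(MW) as a low-order polynomial in retention time.
coeffs = np.polyfit(retention_time, np.log10(molecular_weight), deg=1)

def predict_mw(rt):
    """Estimate the molecular weight of an unknown peak from its retention time."""
    return 10 ** np.polyval(coeffs, rt)

print(round(predict_mw(16.4)))
```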
Abstract:
This study focuses on the wealth-protective effects of socially responsible firm behavior by examining the association between corporate social performance (CSP) and financial risk for an extensive panel data sample of S&P 500 companies between the years 1992 and 2009. In addition, the link between CSP and investor utility is investigated. The main findings are that corporate social responsibility is negatively but weakly related to systematic firm risk and that corporate social irresponsibility is positively and strongly related to financial risk. The fact that both conventional and downside risk measures lead to the same conclusions adds convergent validity to the analysis. However, the risk-return trade-off appears to be such that no clear utility gain or loss can be realized by investing in firms characterized by different levels of social and environmental performance. Overall volatility conditions of the financial markets are shown to play a moderating role in the nature and strength of the CSP-risk relationship.
Abstract:
The requirement to forecast volcanic ash concentrations was amplified as a response to the 2010 Eyjafjallajökull eruption, when ash safety limits for aviation were introduced in the European area. The ability to provide accurate quantitative forecasts relies to a large extent on the source term, which describes the emission of ash as a function of time and height. This study presents source term estimations of the ash emissions from the Eyjafjallajökull eruption derived with an inversion algorithm which constrains modeled ash emissions with satellite observations of volcanic ash. The algorithm is tested with input from two different dispersion models, run on three different meteorological input data sets. The results are robust to which dispersion model and meteorological data are used. Modeled ash concentrations are compared quantitatively to independent measurements from three different research aircraft and one surface measurement station. These comparisons show that the models perform reasonably well in simulating the ash concentrations, and simulations using the source term obtained from the inversion are in overall better agreement with the observations (rank correlation = 0.55, Figure of Merit in Time (FMT) = 25–46%) than simulations using simplified source terms (rank correlation = 0.21, FMT = 20–35%). The vertical structures of the modeled ash clouds mostly agree with lidar observations, and the modeled ash particle size distributions agree reasonably well with observed size distributions. There are occasionally large differences between simulations, but the model mean usually outperforms any individual model. The results emphasize the benefits of using an ensemble-based forecast for improved quantification of uncertainties in future ash crises.
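The inversion step can be pictured as an a-priori-constrained least-squares problem; the sketch below is a generic formulation of that idea, not the specific algorithm used in the study, and all names and inputs are assumptions.

```python
import numpy as np

def estimate_source_term(M, y_obs, x_apriori, obs_sigma, apriori_sigma):
    """Generic regularised inversion: find emissions x (per time/height bin) that balance
    the fit of modelled ash columns M @ x to satellite observations y_obs against the
    departure from an a priori emission estimate x_apriori."""
    So_inv = np.diag(1.0 / obs_sigma ** 2)       # observation error weights
    Sa_inv = np.diag(1.0 / apriori_sigma ** 2)   # a priori error weights
    lhs = M.T @ So_inv @ M + Sa_inv
    rhs = M.T @ So_inv @ y_obs + Sa_inv @ x_apriori
    return np.linalg.solve(lhs, rhs)
```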
Abstract:
This paper describes a simplified dynamic thermal model which simulates the energy and overheating performance of windows. To calculate artificial lighting energy use within a room, the model employs the average illuminance method, which takes into account the daylight energy entering the room through the use of hourly climate data. The tool describes the main thermal performance (heating, cooling and overheating risk) resulting from a proposed window design. The inputs are fewer and simpler than those required by complicated simulation programmes. The method is suited for use by architects and engineers at the strategic phase of design, when little detailed information is available.
Abstract:
Recently major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems. This approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVM), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution finally focuses on very efficient feature selection; it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
Abstract:
This paper analyses the historic effects of exchange rate movements on returns, risk and diversification of office markets within the Euro zone in order to gain insights into the investment consequences of conversion to a fixed rate currency regime. The data used in the study represent annual office rental growth rates for 22 European cities from nine European Union countries between 1985 and 1996. Relative performance is reported in terms of domestic currency and in terms of deutsche marks. The evidence presented suggests that Euro zone property investors in ‘southern’ countries are now protected from the short-term jump risk associated with flexible peg currency arrangements and from medium/long-term currency volatility. Historically, exchange rate movements have produced decreases in returns and increases in volatility. For northern ‘bloc’ cities, the effects of fixing the exchange rate are minimal. For these cities, national exchange rate fluctuations against the deutsche mark have been minor and the resultant implications for property risk and return to non-domestic SCA investors have been negligible. Moreover, although previous research would suggest that the effect of currency volatility is to decrease market correlation, this cannot be observed within the Euro zone.
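As an illustration of how exchange rate movements enter returns reported in deutsche mark terms, a minimal sketch of converting a local-currency return; the figures are hypothetical.

```python
def dm_return(local_return, fx_return):
    """Return in deutsche mark terms, given the local-currency rental growth rate and
    the change in the local currency's value against the DM over the same period."""
    return (1 + local_return) * (1 + fx_return) - 1

# Hypothetical example: 8% local rental growth combined with a 5% depreciation
# against the deutsche mark leaves roughly a 2.6% DM-denominated return.
print(dm_return(0.08, -0.05))
```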
Abstract:
Linear models of property market performance may be misspecified if there exist distinct states where the market drivers behave in different ways. This paper examines the applicability of non-linear regime-based models. A Self Exciting Threshold Autoregressive (SETAR) model is applied to property company share data, using the real rate of interest to define regimes. Distinct regimes appear exhibiting markedly different market behaviour. The model both casts doubt on the specification of conventional linear models and offers the possibility of developing effective trading rules for real estate equities.
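A minimal sketch of a two-regime SETAR fit of the kind described, with the real interest rate as the threshold variable; the AR order, the grid search over candidate thresholds and the names are assumptions rather than the paper's exact specification.

```python
import numpy as np

def fit_setar(y, real_rate, candidate_thresholds):
    """Two-regime SETAR(1) sketch: the regime at time t is set by whether the lagged
    real interest rate exceeds a threshold, a separate AR(1) is fitted in each regime,
    and the threshold minimising the pooled residual sum of squares is kept."""
    y = np.asarray(y, dtype=float)
    z = np.asarray(real_rate, dtype=float)
    y_t, y_lag, z_lag = y[1:], y[:-1], z[:-1]
    best = None
    for thr in candidate_thresholds:
        ssr = 0.0
        for in_regime in (z_lag <= thr, z_lag > thr):
            if in_regime.sum() < 3:        # need enough observations in each regime
                ssr = np.inf
                break
            X = np.column_stack([np.ones(in_regime.sum()), y_lag[in_regime]])
            beta, *_ = np.linalg.lstsq(X, y_t[in_regime], rcond=None)
            ssr += ((y_t[in_regime] - X @ beta) ** 2).sum()
        if best is None or ssr < best[0]:
            best = (ssr, thr)
    return best  # (pooled residual sum of squares, selected threshold)
```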
Abstract:
Active robot force control requires some form of dynamic inner loop control for stability. The author considers the implementation of position-based inner loop control on an industrial robot fitted with encoders only. It is shown that high gain velocity feedback for such a robot, which is effectively stationary when in contact with a stiff environment, involves problems beyond the usual caveats on the effects of unknown environment stiffness. It is shown that it is possible for the controlled joint to become chaotic at very low velocities if encoder edge timing data are used for velocity measurement. The results obtained indicate that there is a lower limit on controlled velocity when encoders are the only means of joint measurement. This lower limit to speed is determined by the desired amount of loop gain, which is itself determined by the severity of the nonlinearities present in the drive system.
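The velocity measurement at issue can be sketched as follows; the encoder resolution and timer frequency are hypothetical, and the snippet illustrates only the edge-timing estimate, not the paper's controller.

```python
import math

def edge_timing_velocity(counts_per_rev, ticks_between_edges, timer_freq_hz):
    """Velocity estimate from encoder edge timing: each encoder edge corresponds to a
    fixed angular increment, and the elapsed time between consecutive edges is measured
    with a timer of finite resolution (hypothetical parameter values)."""
    angle_per_edge = 2 * math.pi / counts_per_rev     # rad per encoder edge
    dt = ticks_between_edges / timer_freq_hz          # seconds between edges
    return angle_per_edge / dt                        # rad/s

# At very low speed edges arrive rarely, so the estimate is only refreshed at edge
# events and the velocity loop acts on stale, stepwise-updated measurements; this is
# the regime in which the abstract reports a lower limit on controlled velocity.
print(edge_timing_velocity(4096, 250, 1_000_000))
```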
Abstract:
In numerical weather prediction (NWP), data assimilation (DA) methods are used to combine available observations with numerical model estimates. This is done by minimising measures of error on both observations and model estimates, with more weight given to data that can be more trusted. For any DA method an estimate of the initial forecast error covariance matrix is required. For convective-scale data assimilation, however, the properties of the error covariances are not well understood. An effective way to investigate covariance properties in the presence of convection is to use an ensemble-based method, for which an estimate of the error covariance is readily available at each time step. In this work, we investigate the performance of the ensemble square root filter (EnSRF) in the presence of cloud growth, applied to an idealised 1D convective column model of the atmosphere. We show that the EnSRF performs well in capturing cloud growth, but the ensemble does not cope well with discontinuities introduced into the system by parameterised rain. The state estimates lose accuracy, and, more importantly, the ensemble is unable to capture the spread (variance) of the estimates correctly. We also find, counter-intuitively, that by reducing the spatial frequency of observations and/or the accuracy of the observations, the ensemble is able to capture the states and their variability successfully across all regimes.
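For reference, a minimal serial EnSRF update for a single scalar observation, written in the standard mean/perturbation form; this is a textbook-style sketch, not the configuration used with the 1D column model, and all names are assumptions.

```python
import numpy as np

def ensrf_update(ensemble, obs_value, obs_row, obs_var):
    """Serial ensemble square root filter update for one scalar observation.
    `ensemble` has shape (state_dim, n_members); `obs_row` has shape (1, state_dim)."""
    x_mean = ensemble.mean(axis=1, keepdims=True)
    X_pert = ensemble - x_mean
    n_members = ensemble.shape[1]

    Hx_pert = obs_row @ X_pert                              # observed-space perturbations
    hpht = (Hx_pert @ Hx_pert.T).item() / (n_members - 1)   # scalar H P H^T
    pht = (X_pert @ Hx_pert.T) / (n_members - 1)            # column vector P H^T
    gain = pht / (hpht + obs_var)                           # Kalman gain

    innovation = obs_value - (obs_row @ x_mean).item()
    x_mean_new = x_mean + gain * innovation                 # update of the ensemble mean

    alpha = 1.0 / (1.0 + np.sqrt(obs_var / (hpht + obs_var)))
    X_pert_new = X_pert - alpha * (gain @ Hx_pert)          # square root update of perturbations

    return x_mean_new + X_pert_new
```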
Abstract:
This study jointly examines herding, momentum trading and performance in real estate mutual funds (REMFs). We do this using trading and performance data for 159 REMFs across the period 1998–2008. In support of the view that Real Estate Investment Trust (REIT) stocks are relatively more transparent, we find that stock herding by REMFs is lower in REIT stocks than other stock. Herding behavior in our data reveals a tendency for managers to sell winners, reflective of the “disposition effect.” We find low overall levels of REMF momentum trading, but further evidence of the disposition effect when momentum trading is segregated into buy–sell dimensions. We test the robustness of our analysis using style analysis, and by reference to the level of fund dividend distribution. Our results for this are consistent with our conjecture about the role of transparency in herding, but they provide no new insights in relation to the momentum-trading dimensions of our analysis. Summarizing what are complex interrelationships, we find that neither herding nor momentum trading are demonstrably superior investment strategies for REMFs.
Abstract:
An unlisted property fund is a private investment vehicle which aims to provide direct property total returns and may also employ financial leverage, which will accentuate performance. Such funds have become a far more prevalent institutional property investment conduit since the early 2000s. Investors have been primarily attracted to them due to the ease of executing a property exposure, both domestically and internationally, and for their diversification benefits, given the capital-intensive nature of constructing a well diversified commercial property investment portfolio. However, despite their greater prominence there has been little academic research conducted on the performance and risks of unlisted property fund investments. This can be attributed to a paucity of available data and limited time series where data do exist. In this study we have made use of a unique dataset of institutional UK unlisted property funds over the period 2003Q4 to 2011Q4, using a panel modelling framework in order to determine the key factors which impact on fund performance. The sample provided a rich set of unlisted property fund factors including market exposures, direct property characteristics and the level of financial leverage employed. The findings from the panel regression analysis show that a small number of variables are able to account for the performance of unlisted property funds. These variables should be considered by investors when assessing the risk and return of these vehicles. The impact of financial leverage upon the performance of these vehicles through the recent global financial crisis and subsequent UK commercial property market downturn was also studied. The findings indicate a significant asymmetric effect of employing debt finance within unlisted property funds.
Abstract:
The nature of private commercial real estate markets presents difficulties for monitoring market performance. Assets are heterogeneous and spatially dispersed, trading is infrequent and there is no central marketplace in which prices and cash flows of properties can be easily observed. Appraisal-based indices represent one response to these issues. However, these have been criticised on a number of grounds: that they may understate volatility, lag turning points and be affected by client influence issues. Thus, this paper reports econometrically derived transaction-based indices of the UK commercial real estate market using Investment Property Databank (IPD) data, comparing them with published appraisal-based indices. The method is similar to that presented by Fisher, Geltner, and Pollakowski (2007) and used by the Massachusetts Institute of Technology (MIT) on National Council of Real Estate Investment Fiduciaries (NCREIF) data, although it employs value rather than equal weighting. The results show stronger growth from the transaction-based indices in the run-up to the peak in the UK market in 2007. They also show that returns from these series are more volatile and less autocorrelated than their appraisal-based counterparts, but, surprisingly, differences in turning points were not found. The conclusion then debates the applications and limitations of these series as measures of market performance.