904 results for Correlation indices
Abstract:
The meltabilities of 14 process cheese samples were determined at 2 and 4 weeks after manufacture using sensory analysis, a computer vision method, and the Olson and Price test. Sensory analysis meltability correlated with both computer vision meltability (R² = 0.71, P < 0.001) and Olson and Price meltability (R² = 0.69, P < 0.001). There was a marked lack of correlation between the computer vision method and the Olson and Price test. This study showed that the Olson and Price test gave greater repeatability than the computer vision method. Results showed that process cheese meltability decreased with increasing inorganic salt content and with lower moisture/fat ratios. There was very little evidence in this study to show that process cheese meltability changed between 2 and 4 weeks after manufacture.
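The agreement between methods reported above is the squared Pearson correlation (R²). A minimal sketch of that statistic, using made-up meltability values rather than the study's data:

```python
import math

def r_squared(x, y):
    """Coefficient of determination from the Pearson correlation of x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy / math.sqrt(sxx * syy)) ** 2

# Hypothetical panel scores and instrument readings for six cheeses.
sensory = [3.1, 4.5, 2.2, 5.0, 3.8, 4.1]
computer_vision = [28.0, 41.0, 20.0, 46.0, 33.0, 37.0]
print(round(r_squared(sensory, computer_vision), 3))
```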
Abstract:
The potential of a fibre optic sensor, detecting light backscatter in a cheese vat during coagulation and syneresis, to predict curd moisture, fat losses and curd yield was examined. Temperature, cutting time and calcium levels were varied to assess the strength of the predictions over a range of processing conditions. Equations were developed using a combination of independent variables, milk compositional and light backscatter parameters. Fat losses, curd yield and curd moisture content were predicted with a standard error of prediction (SEP) of ±2.65 g 100 g⁻¹ (R² = 0.93), ±0.95% (R² = 0.90) and ±1.43% (R² = 0.94), respectively. These results were used to develop a model for predicting curd moisture as a function of time during syneresis (SEP = ±1.72%; R² = 0.95). By monitoring coagulation and syneresis, this sensor technology could be employed to control curd moisture content, thereby improving process control during cheese manufacture.
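The SEP figure of merit quoted above can be sketched as follows; this assumes the common bias-corrected chemometric definition (RMS of residuals about their mean), and the example values are illustrative, not the study's measurements:

```python
import math

def sep(observed, predicted):
    """Bias-corrected standard error of prediction, as commonly defined
    in chemometrics: RMS of the residuals about their mean."""
    n = len(observed)
    residuals = [o - p for o, p in zip(observed, predicted)]
    bias = sum(residuals) / n
    return math.sqrt(sum((r - bias) ** 2 for r in residuals) / (n - 1))

# Illustrative curd moisture values (g/100 g), not the study's data.
observed = [60.2, 58.1, 55.0, 61.3]
predicted = [58.0, 59.0, 54.2, 60.0]
print(round(sep(observed, predicted), 2))
```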
Abstract:
An NIR reflectance sensor, with a large field of view and a fibre-optic connection to a spectrometer for measuring light backscatter at 980 nm, was used to monitor the syneresis process online during cheese-making with the goal of predicting syneresis indices (curd moisture content, yield of whey and fat losses to whey) over a range of curd cutting programmes and stirring speeds. A series of trials was carried out in an 11 L cheese vat using recombined whole milk. A factorial experimental design consisting of three curd stirring speeds and three cutting programmes was undertaken. Milk was coagulated under constant conditions and the casein gel was cut when the elastic modulus reached 35 Pa. Among the syneresis indices investigated, the most accurate and most parsimonious multivariate model developed was for predicting yield of whey, involving three terms, namely light backscatter, milk fat content and cutting intensity (R² = 0.83, SEy = 6.13 g/100 g), while the best simple model also predicted this syneresis index using the light backscatter alone (R² = 0.80, SEy = 6.53 g/100 g). In both models the main predictor was the light backscatter response from the NIR sensor. The sensor also predicted curd moisture with a similar accuracy.
Abstract:
This paper investigates how the correlations implied by a first-order simultaneous autoregressive (SAR(1)) process are affected by the weights matrix and the autocorrelation parameter. A graph theoretic representation of the covariances in terms of walks connecting the spatial units helps to clarify a number of correlation properties of the processes. In particular, we study some implications of row-standardizing the weights matrix, the dependence of the correlations on graph distance, and the behavior of the correlations at the extremes of the parameter space. Throughout the analysis differences between directed and undirected networks are emphasized. The graph theoretic representation also clarifies why it is difficult to relate properties of W to correlation properties of SAR(1) models defined on irregular lattices.
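The covariances discussed above follow directly from the SAR(1) specification y = ρWy + ε. A small sketch, using an illustrative row-standardized weights matrix for a four-node ring rather than any lattice from the paper:

```python
import numpy as np

def sar1_correlations(W, rho, sigma2=1.0):
    """Correlation matrix implied by y = rho*W*y + eps, eps ~ N(0, sigma2*I):
    Cov(y) = sigma2 * (I - rho*W)^{-1} (I - rho*W)^{-T}."""
    n = W.shape[0]
    A = np.linalg.inv(np.eye(n) - rho * W)
    cov = sigma2 * A @ A.T
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

# Row-standardized weights for a four-node ring (each unit has two neighbours).
W = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
R = sar1_correlations(W, rho=0.6)
print(np.round(R, 3))  # correlation decays with graph distance on the ring
```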
Abstract:
We examined the relationship between blood antioxidant enzyme activities, indices of inflammatory status and a number of lifestyle factors in the Caerphilly prospective cohort study of ischaemic heart disease. The study began in 1979 and is based on a representative male population sample. Initially 2512 men were seen in phase I, and followed-up every 5 years in phases II and III; they have recently been seen in phase IV. Data on social class, smoking habit and alcohol consumption were obtained by questionnaire, and body mass index was measured. Antioxidant enzyme activities and indices of inflammatory status were estimated by standard techniques. Significant associations were observed for: age with α-1-antichymotrypsin (p<0.0001) and with caeruloplasmin, both protein and oxidase (p<0.0001); smoking habit with α-1-antichymotrypsin (p<0.0001), with caeruloplasmin, both protein and oxidase (p<0.0001) and with glutathione peroxidase (GPX) (p<0.0001); social class with α-1-antichymotrypsin (p<0.0001), with caeruloplasmin, both protein (p<0.001) and oxidase (p<0.01), and with GPX (p<0.0001); body mass index with α-1-antichymotrypsin (p<0.0001) and with caeruloplasmin protein (p<0.001). There was no significant association between alcohol consumption and any of the blood enzymes measured. Factor analysis produced a three-factor model (explaining 65.9% of the variation in the data set) which appeared to indicate close inter-relationships among antioxidants.
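As a rough stand-in for the factor analysis mentioned above, principal components give a quick way to see how much variation a small number of factors can explain. The biomarker matrix below is synthetic, generated from three hypothetical latent factors; proper factor analysis differs from PCA in modelling unique variances, so this is only an analogy:

```python
import numpy as np

rng = np.random.default_rng(3)
latent = rng.standard_normal((500, 3))            # three underlying "factors"
loadings = rng.standard_normal((3, 8))            # eight measured biomarkers
X = latent @ loadings + 0.5 * rng.standard_normal((500, 8))

Xc = X - X.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Xc.T))[::-1]  # variances along principal axes
explained = eigvals[:3].sum() / eigvals.sum()
print(f"first three components explain {explained:.1%} of the variance")
```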
Abstract:
Novel imaging techniques are playing an increasingly important role in drug development, providing insight into the mechanism of action of new chemical entities. The data sets obtained by these methods can be large with complex inter-relationships, but the most appropriate statistical analysis for handling this data is often uncertain - precisely because of the exploratory nature of the way the data are collected. We present an example from a clinical trial using magnetic resonance imaging to assess changes in atherosclerotic plaques following treatment with a tool compound with established clinical benefit. We compared two specific approaches to handle the correlations due to physical location and repeated measurements: two-level and four-level multilevel models. The two methods identified similar structural variables, but higher level multilevel models had the advantage of explaining a greater proportion of variation, and the modeling assumptions appeared to be better satisfied.
Abstract:
The Asian monsoon system, including the western North Pacific (WNP), East Asian, and Indian monsoons, dominates the climate of the Asia-Indian Ocean-Pacific region, and plays a significant role in the global hydrological and energy cycles. The prediction of monsoons and associated climate features is a major challenge in seasonal time scale climate forecast. In this study, a comprehensive assessment of the interannual predictability of the WNP summer climate has been performed using the 1-month lead retrospective forecasts (hindcasts) of five state-of-the-art coupled models from ENSEMBLES for the period of 1960–2005. Spatial distribution of the temporal correlation coefficients shows that the interannual variation of precipitation is well predicted around the Maritime Continent and east of the Philippines. The high skills for the lower-tropospheric circulation and sea surface temperature (SST) spread over almost the whole WNP. These results indicate that the models in general successfully predict the interannual variation of the WNP summer climate. Two typical indices, the WNP summer precipitation index and the WNP lower-tropospheric circulation index (WNPMI), have been used to quantify the forecast skill. The correlation coefficient between five models’ multi-model ensemble (MME) mean prediction and observations for the WNP summer precipitation index reaches 0.66 during 1979–2005 while it is 0.68 for the WNPMI during 1960–2005. The WNPMI-regressed anomalies of lower-tropospheric winds, SSTs and precipitation are similar between observations and MME. Further analysis suggests that prediction reliability of the WNP summer climate mainly arises from the atmosphere–ocean interaction over the tropical Indian and the tropical Pacific Ocean, implying that continuing improvement in the representation of the air–sea interaction over these regions in CGCMs is a key for long-lead seasonal forecast over the WNP and East Asia. 
On the other hand, the prediction of the WNP summer climate anomalies exhibits a remarkable spread resulting from uncertainty in initial conditions. The summer anomalies related to the prediction spread, including the lower-tropospheric circulation, SST and precipitation anomalies, show a Pacific-Japan or East Asia-Pacific pattern in the meridional direction over the WNP. Our further investigations suggest that the WNPMI prediction spread arises mainly from the internal dynamics of air–sea interaction over the WNP and Indian Ocean, since the local relationships among the anomalous SST, circulation, and precipitation associated with the spread are similar to those associated with the interannual variation of the WNPMI in both observations and MME. However, the magnitudes of these anomalies related to the spread are weaker, ranging from one third to a half of those anomalies associated with the interannual variation of the WNPMI in MME over the tropical Indian Ocean and subtropical WNP. These results further support the view that improvement in the representation of the air–sea interaction over the tropical Indian Ocean and subtropical WNP in CGCMs is a key for reducing the prediction spread and for improving the long-lead seasonal forecast over the WNP and East Asia.
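The forecast-skill measure used throughout this abstract is the temporal correlation between the multi-model ensemble (MME) mean index and observations. A sketch with synthetic yearly index values standing in for the WNPMI:

```python
import numpy as np

def mme_skill(model_forecasts, observations):
    """Pearson correlation of the multi-model ensemble mean with observations.
    model_forecasts has shape (n_models, n_years)."""
    mme_mean = model_forecasts.mean(axis=0)
    return np.corrcoef(mme_mean, observations)[0, 1]

rng = np.random.default_rng(0)
obs = rng.standard_normal(46)                       # e.g. yearly index, 1960-2005
models = obs + 1.2 * rng.standard_normal((5, 46))   # five models: shared signal + noise
print(round(mme_skill(models, obs), 2))
```

Averaging across models suppresses the independent noise terms, which is why the MME mean typically correlates better with observations than any single model.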
Abstract:
The success of any diversification strategy depends upon the quality of the estimated correlation between assets. It is well known, however, that there is a tendency for the average correlation among assets to increase when the market falls and vice-versa. Thus, assuming that the correlation between assets is constant over time seems unrealistic. Nonetheless, these changes in the correlation structure as a consequence of changes in the market's return suggest that correlation shifts can be modelled as a function of the market return. This is the idea behind the model of Spurgin et al (2000), which models the beta, or systematic risk, of the asset as a function of the returns in the market. This approach offers particular attractions to fund managers as it suggests ways by which they can adjust their portfolios to benefit from changes in overall market conditions. In this paper the Spurgin et al (2000) model is applied to 31 real estate market segments in the UK using monthly data over the period 1987:1 to 2000:12. The results show that a number of market segments display significant negative correlation shifts, while others show significant positive correlation shifts. Using this information fund managers can make strategic and tactical portfolio allocation decisions based on expectations of market volatility alone, and so help them achieve greater portfolio performance overall and especially during different phases of the real estate cycle.
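One simple way to let beta vary with the market return, in the spirit of the model described above, is to fit beta_t = b0 + b1·r_m by ordinary least squares on r_i = a + b0·r_m + b1·r_m². This is an illustrative specification, not necessarily the exact Spurgin et al (2000) form, and the return series below are simulated:

```python
import numpy as np

def shifting_beta(r_asset, r_market):
    """OLS fit of r_i = a + b0*r_m + b1*r_m**2, i.e. beta_t = b0 + b1*r_m.
    b1 < 0 means beta (and hence co-movement) rises when the market falls."""
    X = np.column_stack([np.ones_like(r_market), r_market, r_market ** 2])
    a, b0, b1 = np.linalg.lstsq(X, r_asset, rcond=None)[0]
    return b0, b1

rng = np.random.default_rng(1)
rm = 0.03 * rng.standard_normal(168)                # e.g. 14 years of monthly returns
ri = 0.001 + (0.8 - 2.0 * rm) * rm + 0.005 * rng.standard_normal(168)
b0, b1 = shifting_beta(ri, rm)
print(round(b0, 2), round(b1, 2))                   # recovers roughly 0.8 and -2.0
```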
Abstract:
Practical applications of portfolio optimisation tend to proceed on a “top down” basis where funds are allocated first at asset class level (between, say, bonds, cash, equities and real estate) and then, progressively, at sub-class level (within property to sectors, office, retail, industrial for example). While there are organisational benefits from such an approach, it can potentially lead to sub-optimal allocations when compared to a “global” or “side-by-side” optimisation. This will occur where there are correlations between sub-classes across the asset divide that are masked in aggregation – between, for instance, City offices and the performance of financial services stocks. This paper explores such sub-class linkages using UK monthly stock and property data. Exploratory analysis using clustering procedures and factor analysis suggests that property performance and equity performance are distinctive: there is little persuasive evidence of contemporaneous or lagged sub-class linkages. Formal tests of the equivalence of optimised portfolios using top-down and global approaches failed to demonstrate significant differences, whether or not allocations were constrained. While the results may be a function of measurement of market returns, it is those returns that are used to assess fund performance. Accordingly, the treatment of real estate as a distinct asset class with diversification potential seems justified.
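The sub-optimality argument can be made concrete with minimum-variance portfolios: a two-stage ("top down") allocation is a feasible point of the global problem, so its variance can never beat the global optimum, and it falls short precisely when cross-asset sub-class correlations are masked by aggregation. A sketch with a synthetic covariance matrix (not UK market estimates):

```python
import numpy as np

def min_var_weights(cov):
    """Unconstrained fully-invested minimum-variance weights: w proportional to inv(Cov)·1."""
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return w / w.sum()

def two_stage_weights(cov, classes):
    """Min-variance within each asset class, then min-variance across the
    resulting class portfolios (the 'top down' procedure)."""
    n = cov.shape[0]
    within = np.zeros((len(classes), n))
    for k, idx in enumerate(classes):
        within[k, idx] = min_var_weights(cov[np.ix_(idx, idx)])
    class_cov = within @ cov @ within.T        # covariance of the class portfolios
    return min_var_weights(class_cov) @ within

# Synthetic covariances: assets 0-1 are equity sectors, 2-3 property sectors;
# the 0-2 entry mimics a financial-stocks/offices linkage masked by aggregation.
cov = np.array([[0.040, 0.010, 0.010, 0.002],
                [0.010, 0.030, 0.002, 0.002],
                [0.010, 0.002, 0.020, 0.006],
                [0.002, 0.002, 0.006, 0.025]])

def port_var(w):
    return w @ cov @ w

w_global = min_var_weights(cov)
w_top_down = two_stage_weights(cov, [[0, 1], [2, 3]])
print(round(port_var(w_global), 5), "<=", round(port_var(w_top_down), 5))
```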
Abstract:
Atmospheric aerosol acts both to reduce the background concentration of natural cluster ions and to attenuate optical propagation. Hence the presence of aerosol has two consequences: a reduction in the air's electrical conductivity and a reduction in the visual range. Ion-aerosol theory and Koschmieder's visibility theory are combined here to derive the related non-linear variation of the atmospheric electric potential gradient with visual range. A substantial sensitivity is found under poor visual range conditions; for good visual range conditions the sensitivity diminishes and local aerosol has little influence on the fair weather potential gradient. This allows visual range measurements, made simply and routinely at many meteorological sites, to provide inference about the local air's electrical properties.
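The combination can be sketched numerically: aerosol number concentration Z both sets the visual range (Koschmieder's V = 3.912/σ_ext) and scavenges small ions (steady-state balance q = αn² + βnZ), lowering conductivity and, for a fixed conduction current density, raising the potential gradient. All parameter values below are order-of-magnitude assumptions for illustration, not the paper's derivation:

```python
import math

# Order-of-magnitude parameter assumptions (not values from the paper):
q = 1.0e7        # ion pair production rate, m^-3 s^-1
alpha = 1.6e-12  # ion-ion recombination coefficient, m^3 s^-1
beta_a = 1.0e-12 # ion-aerosol attachment coefficient, m^3 s^-1
mu = 1.4e-4      # small-ion mobility, m^2 V^-1 s^-1
e = 1.6e-19      # elementary charge, C
Jc = 2.0e-12     # fair-weather conduction current density, A m^-2
k_ext = 3.0e-14  # assumed extinction cross-section per particle, m^2

def potential_gradient(visual_range_m):
    Z = 3.912 / (k_ext * visual_range_m)      # Koschmieder: V = 3.912/sigma_ext
    # Steady-state ion balance q = alpha*n^2 + beta_a*n*Z, solved for n:
    n = (-beta_a * Z + math.sqrt((beta_a * Z) ** 2 + 4 * alpha * q)) / (2 * alpha)
    sigma = 2 * n * e * mu                    # bipolar air conductivity
    return Jc / sigma                         # Ohm's law with constant Jc

for V in (2e3, 10e3, 50e3):                   # poor, moderate, good visibility
    print(f"visual range {V / 1e3:4.0f} km -> PG ~ {potential_gradient(V):.0f} V/m")
```

The output reproduces the qualitative result: the potential gradient is strongly sensitive to visual range when visibility is poor, and flattens out when visibility is good.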
Abstract:
Background. The anaerobic spirochaete Brachyspira pilosicoli causes enteric disease in avian, porcine and human hosts, amongst others. To date, the only available genome sequence of B. pilosicoli is that of strain 95/1000, a porcine isolate. In the first intra-species genome comparison within the Brachyspira genus, we report the whole genome sequence of B. pilosicoli B2904, an avian isolate, the incomplete genome sequence of B. pilosicoli WesB, a human isolate, and the comparisons with B. pilosicoli 95/1000. We also draw on incomplete genome sequences from three other Brachyspira species. Finally we report the first application of the high-throughput Biolog phenotype screening tool on the B. pilosicoli strains for detailed comparisons between genotype and phenotype. Results. Feature and sequence genome comparisons revealed a high degree of similarity between the three B. pilosicoli strains, although the genomes of B2904 and WesB were larger than that of 95/1000 (~2.765, 2.890 and 2.596 Mb, respectively). Genome rearrangements were observed which correlated largely with the positions of mobile genetic elements. Through comparison of the B2904 and WesB genomes with the 95/1000 genome, features that we propose are non-essential due to their absence from 95/1000 include a peptidase, glycine reductase complex components and transposases. Novel bacteriophages were detected in the newly-sequenced genomes, which appeared to have involvement in intra- and inter-species horizontal gene transfer. Phenotypic differences predicted from genome analysis, such as the lack of genes for glucuronate catabolism in 95/1000, were confirmed by phenotyping. Conclusions. The availability of multiple B. pilosicoli genome sequences has allowed us to demonstrate the substantial genomic variation that exists between these strains, and provides an insight into genetic events that are shaping the species.
In addition, phenotype screening allowed determination of how genotypic differences translated to phenotype. Further application of such comparisons will improve understanding of the metabolic capabilities of Brachyspira species.
Abstract:
The nature of private commercial real estate markets presents difficulties for monitoring market performance. Assets are heterogeneous and spatially dispersed, trading is infrequent and there is no central marketplace in which prices and cash flows of properties can be easily observed. Appraisal based indices represent one response to these issues. However, these have been criticised on a number of grounds: that they may understate volatility, lag turning points and be affected by client influence issues. Thus, this paper reports econometrically derived transaction based indices of the UK commercial real estate market using Investment Property Databank (IPD) data, comparing them with published appraisal based indices. The method is similar to that presented by Fisher, Geltner, and Pollakowski (2007) and used by the Massachusetts Institute of Technology (MIT) on National Council of Real Estate Investment Fiduciaries (NCREIF) data, although it employs value rather than equal weighting. The results show stronger growth from the transaction based indices in the run up to the peak in the UK market in 2007. They also show that returns from these series are more volatile and less autocorrelated than their appraisal based counterparts, but, surprisingly, differences in turning points were not found. The conclusion then debates the applications and limitations these series have as measures of market performance.
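A minimal transaction-based index in this spirit regresses log sale price on hedonic controls plus time dummies, the exponentiated dummy coefficients tracing the index. The data below are synthetic, and the actual Fisher, Geltner, and Pollakowski (2007) method additionally corrects for sample selection; value weighting, as used on the IPD data, is also omitted here:

```python
import numpy as np

def time_dummy_index(log_price, size, period, n_periods):
    """Regress log price on a hedonic control plus time dummies; the
    exponentiated dummy coefficients trace the index (base period = 1)."""
    X = np.column_stack([np.ones_like(size), size] +
                        [(period == t).astype(float) for t in range(1, n_periods)])
    coefs = np.linalg.lstsq(X, log_price, rcond=None)[0]
    return np.concatenate([[1.0], np.exp(coefs[2:])])

rng = np.random.default_rng(2)
n = 400
size = rng.uniform(1.0, 5.0, n)              # hypothetical hedonic control
period = rng.integers(0, 4, n)               # transaction quarter, 0..3
true_log_level = np.array([0.0, 0.05, 0.12, 0.08])
log_price = 10 + 0.9 * size + true_log_level[period] + 0.05 * rng.standard_normal(n)
print(np.round(time_dummy_index(log_price, size, period, 4), 3))
```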
Abstract:
We present the first climate prediction of the coming decade made with multiple models, initialized with prior observations. This prediction accrues from an international activity to exchange decadal predictions in near real-time, in order to assess differences and similarities, provide a consensus view to prevent over-confidence in forecasts from any single model, and establish current collective capability. We stress that the forecast is experimental, since the skill of the multi-model system is as yet unknown. Nevertheless, the forecast systems used here are based on models that have undergone rigorous evaluation and individually have been evaluated for forecast skill. Moreover, it is important to publish forecasts to enable open evaluation, and to provide a focus on climate change in the coming decade. Initialized forecasts of the year 2011 agree well with observations, with a pattern correlation of 0.62 compared to 0.31 for uninitialized projections. In particular, the forecast correctly predicted La Niña in the Pacific, and warm conditions in the north Atlantic and USA. A similar pattern is predicted for 2012 but with a weaker La Niña. Indices of Atlantic multi-decadal variability and Pacific decadal variability show no signal beyond climatology after 2015, while temperature in the Niño3 region is predicted to warm slightly by about 0.5 °C over the coming decade. However, uncertainties are large for individual years and initialization has little impact beyond the first 4 years in most regions. Relative to uninitialized forecasts, initialized forecasts are significantly warmer in the north Atlantic sub-polar gyre and cooler in the north Pacific throughout the decade. They are also significantly cooler in the global average and over most land and ocean regions out to several years ahead. 
However, in the absence of volcanic eruptions, global temperature is predicted to continue to rise, with each year from 2013 onwards having a 50 % chance of exceeding the current observed record. Verification of these forecasts will provide an important opportunity to test the performance of models and our understanding and knowledge of the drivers of climate change.
Abstract:
The Fourier series can be used to describe periodic phenomena such as the one-dimensional crystal wave function. Through the trigonometric treatments in Hückel theory it is shown that Hückel theory is a special case of Fourier series theory. Thus, the conjugated π system is in fact a periodic system. This explains why such a simple theory as Hückel theory can be so powerful in organic chemistry: although it only considers the immediate neighboring interactions, it implicitly takes account of the periodicity in the complete picture where all the interactions are considered. Furthermore, the success of the trigonometric methods in Hückel theory is not accidental, as it is based on the fact that Hückel theory is a specific example of the more general method of Fourier series expansion. It is also important for educational purposes to expand a specific approach such as Hückel theory into a more general method such as Fourier series expansion.
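The correspondence the abstract describes can be checked directly: diagonalising the Hückel matrix for a linear chain of N conjugated carbons (α on the diagonal, β on the off-diagonals) reproduces the closed-form trigonometric energies E_k = α + 2β·cos(kπ/(N+1)), k = 1..N, i.e. the sine/cosine basis of a Fourier-type expansion:

```python
import numpy as np

def huckel_energies(N, alpha=0.0, beta=-1.0):
    """Eigenvalues of the Hückel matrix for a linear chain of N p-orbitals."""
    H = alpha * np.eye(N) + beta * (np.eye(N, k=1) + np.eye(N, k=-1))
    return np.sort(np.linalg.eigvalsh(H))

def trig_energies(N, alpha=0.0, beta=-1.0):
    """Closed-form trigonometric solution E_k = alpha + 2*beta*cos(k*pi/(N+1))."""
    k = np.arange(1, N + 1)
    return np.sort(alpha + 2 * beta * np.cos(k * np.pi / (N + 1)))

print(np.allclose(huckel_energies(6), trig_energies(6)))  # prints True (hexatriene chain)
```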