981 results for Log cabins.
Abstract:
The size frequency distributions of diffuse, primitive and classic β-amyloid (Aβ) deposits were studied in single sections of cortical tissue from patients with Alzheimer's disease (AD) and Down's syndrome (DS) and compared with those predicted by the log-normal model. In a sample of brain regions, these size distributions were compared with those obtained by serial reconstruction through the tissue, and the data were used to adjust the size distributions obtained in single sections. The adjusted size distributions of the diffuse, primitive and classic deposits deviated significantly from a log-normal model in AD and DS, the greatest deviations from the model being observed in AD. More Aβ deposits were observed close to the mean, and fewer in the larger size classes, than predicted by the model. Hence, the growth of Aβ deposits in AD and DS does not strictly follow the log-normal model, deposits growing to within a more restricted size range than predicted. However, Aβ deposits grow to a larger size in DS than in AD, which may reflect differences in the mechanism of Aβ formation.
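A minimal sketch of this kind of comparison, using scipy and placeholder deposit sizes rather than the study's measurements: fit a log-normal distribution to the measured sizes and test the deviation with a Kolmogorov-Smirnov test.

```python
# Sketch only: simulated deposit diameters, not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sizes_um = rng.lognormal(mean=3.0, sigma=0.5, size=500)   # placeholder deposit diameters (µm)

# Fit a log-normal with location fixed at zero, then test the fit.
shape, loc, scale = stats.lognorm.fit(sizes_um, floc=0)
ks_stat, p_value = stats.kstest(sizes_um, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
```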
Abstract:
The size frequency distributions of diffuse, primitive and cored senile plaques (SP) were studied in single sections of the temporal lobe from 10 patients with Alzheimer's disease (AD). The size distribution curves were unimodal and positively skewed. The size distribution curve of the diffuse plaques was shifted towards larger plaques, while those of the primitive and cored plaques were shifted towards smaller plaques. The primitive/diffuse plaque ratio was maximal in the 11–30 µm size class and the cored/diffuse plaque ratio in the 21–30 µm size class. The size distribution curves of the three types of plaque deviated significantly from a log-normal distribution. Distributions expressed on a logarithmic scale were 'leptokurtic', i.e. with an excess of observations near the mean. These results suggest that SP in AD grow to within a more restricted size range than predicted from a log-normal model. In addition, there appear to be differences in the patterns of growth of diffuse, primitive and cored plaques. If primitive and cored plaques develop from earlier diffuse plaques, then smaller diffuse plaques are more likely to be converted to mature plaques.
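A minimal sketch of the leptokurtosis check described above, on placeholder data: under a log-normal model the log-transformed sizes should be approximately normal with zero excess kurtosis, so a positive value indicates an excess of observations near the mean.

```python
# Sketch only: 'plaque_sizes.txt' is a hypothetical file of measured diameters.
import numpy as np
from scipy import stats

plaque_sizes_um = np.loadtxt("plaque_sizes.txt")
log_sizes = np.log(plaque_sizes_um)

# Excess (Fisher) kurtosis is 0 for a normal distribution; > 0 means leptokurtic.
print("excess kurtosis:", stats.kurtosis(log_sizes, fisher=True))
print("normality test :", stats.normaltest(log_sizes))
```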
Abstract:
Most empirical work in economic growth assumes either a Cobb–Douglas production function expressed in logs or a log-approximated constant elasticity of substitution (CES) specification. Estimates from each are likely to be biased due to logging the model, and the latter can also suffer from approximation bias. We illustrate this with a successful replication of Masanjala and Papageorgiou (The Solow model with CES technology: nonlinearities and parameter heterogeneity, Journal of Applied Econometrics 2004; 19: 171–201) and then estimate both models in levels to avoid these biases. Our estimation in levels gives results in line with conventional wisdom.
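A minimal sketch of estimation in levels by nonlinear least squares, using a CES output-per-worker form on synthetic data; the functional form, variable names and starting values are illustrative assumptions, not the authors' exact specification.

```python
# Sketch only: CES output per worker fitted in levels with synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def ces_output_per_worker(k, A, alpha, rho):
    # y = A * (alpha * k^(-rho) + (1 - alpha))^(-1/rho)
    return A * (alpha * k ** (-rho) + (1.0 - alpha)) ** (-1.0 / rho)

rng = np.random.default_rng(1)
k = np.linspace(1.0, 50.0, 200)                          # capital per worker (synthetic)
y = ces_output_per_worker(k, 2.0, 0.4, 0.6)
y_obs = y * (1.0 + 0.05 * rng.standard_normal(k.size))   # add multiplicative noise

params, _ = curve_fit(
    ces_output_per_worker, k, y_obs,
    p0=[1.0, 0.3, 0.5],
    bounds=([0.0, 0.0, 0.01], [10.0, 1.0, 5.0]),         # keep rho away from zero
)
print("A, alpha, rho =", params)
```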
Abstract:
Many papers claim that a Log Periodic Power Law (LPPL) model fitted to financial market bubbles that precede large market falls or 'crashes' contains parameters that are confined within certain ranges. Further, it is claimed that the underlying model is based on influence percolation and a martingale condition. This paper examines these claims and their validity for capturing large price falls in the Hang Seng stock market index over the period 1970 to 2008. The fitted LPPLs have parameter values within the ranges specified post hoc by Johansen and Sornette (2001) for only seven of the 11 crashes examined. Interestingly, the LPPL fit could have predicted the substantial fall in the Hang Seng index during the recent global downturn. Overall, the mechanism posited as underlying the LPPL model does not do so, and the data used to support the fit of the LPPL model to bubbles does so only partially.
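For reference, the LPPL specification commonly written in this literature models the log price as A + B(tc - t)^m + C(tc - t)^m cos(omega * ln(tc - t) - phi). A minimal fitting sketch follows; the data file, starting values and clipping of (tc - t) are placeholder assumptions, and real fits require careful initialisation.

```python
# Sketch only: 'hang_seng_close.txt' is a hypothetical price file.
import numpy as np
from scipy.optimize import curve_fit

def lppl(t, tc, m, omega, A, B, C, phi):
    dt = np.maximum(tc - t, 1e-8)   # keep (tc - t) positive during the search
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)

prices = np.loadtxt("hang_seng_close.txt")        # hypothetical daily closes
t = np.arange(prices.size, dtype=float)
log_price = np.log(prices)

p0 = [t[-1] + 60.0, 0.5, 8.0, log_price[-1], -0.01, 0.001, 0.0]  # rough starting values
params, _ = curve_fit(lppl, t, log_price, p0=p0, maxfev=20000)
print(dict(zip(["tc", "m", "omega", "A", "B", "C", "phi"], params)))
```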
Abstract:
In many of the Statnotes described in this series, the statistical tests assume the data are a random sample from a normal distribution. These Statnotes include most of the familiar statistical tests, such as the 't' test, analysis of variance (ANOVA), and Pearson's correlation coefficient ('r'). Nevertheless, many variables exhibit a more or less 'skewed' distribution. A skewed distribution is asymmetrical, with a longer tail to the right (positive skew) or to the left (negative skew). If the mean of the distribution is low, the degree of variation large, and values can only be positive, a positively skewed distribution is usually the result. Many variables potentially have a low mean and high variance, including the abundance of bacterial species on plants, the latent period of an infectious disease, and the sensitivity of certain fungi to fungicides. These positively skewed distributions are often fitted successfully by a variant of the normal distribution called the log-normal distribution. This Statnote describes fitting the log-normal distribution with reference to two scenarios: (1) the frequency distribution of bacterial numbers isolated from cloths in a domestic environment and (2) the sizes of lichenised 'areolae' growing on the hypothallus of Rhizocarpon geographicum (L.) DC.
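A minimal sketch of the first step in fitting a log-normal distribution, with placeholder counts standing in for the bacterial data: take logs, estimate the mean and SD on the log scale, back-transform to a geometric mean, and check the normality of the logged values.

```python
# Sketch only: placeholder counts stand in for bacterial numbers per cloth.
import numpy as np
from scipy import stats

counts = np.array([3, 5, 8, 12, 20, 33, 50, 90, 150, 600], dtype=float)

logs = np.log10(counts)
mu, sd = logs.mean(), logs.std(ddof=1)
print(f"geometric mean = {10**mu:.1f}, multiplicative SD = {10**sd:.2f}")

# Under a log-normal model, the logged counts should pass a normality test.
print(stats.shapiro(logs))
```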
Abstract:
Peptides are of great therapeutic potential as vaccines and drugs. Knowledge of physicochemical descriptors, including the partition coefficient logP, is useful for the development of predictive Quantitative Structure-Activity Relationships (QSARs). We have investigated the accuracy of available programs for the prediction of logP values for peptides with known experimental values obtained from the literature. Eight prediction programs were tested, seven of which were fragment-based methods: XLogP, LogKow, PLogP, ACDLogP, ALogP, Interactive Analysis's LogP (IALogP) and MLogP; one program, QikProp, used a whole-molecule approach. The predictive accuracy of the programs was assessed using r² values, with ALogP being the most effective (r² = 0.822) and MLogP the least (r² = 0.090). We also examined three distinct types of peptide structure: blocked, unblocked, and cyclic. For each study (all peptides, blocked, unblocked and cyclic peptides) the programs ranked from best to worst as follows: all peptides - ALogP, QikProp, PLogP, XLogP, IALogP, LogKow, ACDLogP, and MLogP; blocked peptides - PLogP, XLogP, ACDLogP, IALogP, LogKow, QikProp, ALogP, and MLogP; unblocked peptides - QikProp, IALogP, ALogP, ACDLogP, MLogP, XLogP, LogKow and PLogP; cyclic peptides - LogKow, ALogP, XLogP, MLogP, QikProp, ACDLogP, IALogP. In summary, all programs gave better predictions for blocked peptides, while, in general, logP values for cyclic peptides were under-predicted and those of unblocked peptides were over-predicted.
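A minimal sketch of the accuracy metric used above: the squared correlation (r²) between experimental and predicted logP values, shown here with hypothetical numbers.

```python
# Sketch only: hypothetical experimental and predicted logP values.
import numpy as np

logp_experimental = np.array([-1.2, 0.3, 1.1, 2.4, 3.0])
logp_predicted = np.array([-0.9, 0.1, 1.4, 2.1, 3.3])

r = np.corrcoef(logp_experimental, logp_predicted)[0, 1]
print(f"r² = {r ** 2:.3f}")
```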
Abstract:
This article presents a new type of log-merging tool for multi-blade telecommunication systems, based on the development of a new approach. The new tool (the Log Merger) can help engineers build a timeline of process behavior, with a flexible system of information structuring used to assess changes in the analyzed system. Based on experts' experience and analytical skills, the log-merging system generates a knowledge base that could be advantageous in the further development of a decision-making expert system. This paper proposes and discusses the design and implementation of the Log Merger, its architecture, multi-board analysis capability and application areas. The paper also presents possible ways of further improving the tool, e.g. extending its functionality to cover additional system platforms. The possibility of adding an analysis module for further expert system development is also considered.
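A minimal, generic sketch of the core idea of merging per-board logs into a single timeline by timestamp; the file names and timestamp format are assumptions, and this is not the Log Merger's actual implementation.

```python
# Sketch only: generic timestamp-based merge, not the Log Merger's implementation.
import heapq
from datetime import datetime

def parse(line):
    # Assumes each line starts with an ISO-8601 timestamp, e.g. "2013-05-01T12:00:03 ..."
    return datetime.fromisoformat(line.split(" ", 1)[0]), line

def merged_timeline(paths):
    # Assumes each per-board file is already in time order.
    streams = [(parse(line) for line in open(path)) for path in paths]
    for _, line in heapq.merge(*streams):
        yield line.rstrip("\n")

for entry in merged_timeline(["board0.log", "board1.log", "board2.log"]):  # hypothetical files
    print(entry)
```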
Abstract:
MSC 2010: 30C45, 30C55
Abstract:
This paper examines whether the observed long memory behavior of log-range series is to some extent spurious and whether it can be explained by the presence of structural breaks. Utilizing stock market data, we show that the characterization of log-range series as long memory processes can be a strong assumption. Moreover, we find that all examined series experience a large number of significant breaks. Once the breaks are accounted for, the volatility persistence is eliminated. Overall, the findings suggest that volatility can be adequately represented, at least in-sample, by a multiple-breaks process and a short-run component.
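A minimal sketch of how a log-range series is typically constructed and inspected, using placeholder file and column names; the slowly decaying autocorrelations are the feature usually read as long memory.

```python
# Sketch only: 'index_ohlc.csv' and its High/Low columns are placeholders.
import numpy as np
import pandas as pd

ohlc = pd.read_csv("index_ohlc.csv")
log_range = np.log(ohlc["High"]) - np.log(ohlc["Low"])   # daily log-range series

lags = [1, 5, 20, 100]
acf = {lag: round(log_range.autocorr(lag=lag), 3) for lag in lags}
print(acf)   # slow decay across lags is what gets read as long memory
```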
Abstract:
This dissertation develops a new mathematical approach that overcomes the effect of a data processing phenomenon known as "histogram binning" inherent to flow cytometry data. A real-time procedure is introduced to prove the effectiveness and fast implementation of such an approach on real-world data. The histogram binning effect is a dilemma posed by two seemingly antagonistic developments: (1) flow cytometry data in its histogram form is extended in its dynamic range to improve its analysis and interpretation, and (2) the inevitable dynamic range extension introduces an unwelcome side effect, the binning effect, which skews the statistics of the data, undermining as a consequence the accuracy of the analysis and the eventual interpretation of the data. Researchers in the field have contended with this dilemma for many years, resorting either to hardware approaches, which are rather costly and have inherent calibration and noise effects, or to software techniques based on filtering out the binning effect, which fail to preserve the statistical content of the original data. The mathematical approach introduced in this dissertation is so appealing that a patent application has been filed. The contribution of this dissertation is an incremental scientific innovation based on a mathematical framework that will allow researchers in the field of flow cytometry to improve the interpretation of data, knowing that its statistical meaning has been faithfully preserved for optimized analysis. Furthermore, with the same mathematical foundation, proof of the origin of this inherent artifact is provided. These results are unique in that new mathematical derivations are established to define and solve the critical problem of the binning effect faced at the experimental assessment level, providing a data platform that preserves its statistical content. In addition, a novel method for accumulating the log-transformed data was developed. This new method uses the properties of the transformation of statistical distributions to accumulate the output histogram in a non-integer and multi-channel fashion. Although the mathematics of this new mapping technique seems intricate, the concise nature of the derivations allows for an implementation procedure that lends itself to a real-time implementation using lookup tables, a task that is also introduced in this dissertation.
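A minimal, generic sketch of accumulating a log-scale histogram through a precomputed lookup table, as one might do in real time; the channel counts and binning rule are illustrative assumptions, not the dissertation's patented method.

```python
# Sketch only: a generic log-binning lookup table, not the dissertation's method.
import numpy as np

ADC_CHANNELS = 1024        # linear input channels (placeholder)
LOG_BINS = 256             # output log-scale bins (placeholder)

# Precompute, once, the log-bin index for every possible linear channel value.
channels = np.arange(1, ADC_CHANNELS + 1)
lut = np.floor(np.log(channels) / np.log(ADC_CHANNELS) * LOG_BINS).astype(int)
lut = np.clip(lut, 0, LOG_BINS - 1)

def accumulate(events, histogram):
    """Add a batch of linear channel values (1..ADC_CHANNELS) to the log histogram."""
    np.add.at(histogram, lut[events - 1], 1)

hist = np.zeros(LOG_BINS, dtype=np.int64)
accumulate(np.random.default_rng(2).integers(1, ADC_CHANNELS + 1, size=10_000), hist)
print(hist.sum())  # 10000 events accumulated
```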
Abstract:
Hydrophobicity, as measured by Log P, is an important molecular property related to toxicity and carcinogenicity. With increasing public health concern over the effects of Disinfection By-Products (DBPs), there are considerable benefits in developing Quantitative Structure-Activity Relationship (QSAR) models capable of accurately predicting Log P. In this research, Log P values of 173 DBP compounds in 6 functional classes were used to develop QSAR models by Multiple Linear Regression (MLR) analysis, applying 3 molecular descriptors, namely the Energy of the Lowest Unoccupied Molecular Orbital (ELUMO), the Number of Chlorine atoms (NCl) and the Number of Carbon atoms (NC). The QSAR models developed were validated based on the Organization for Economic Co-operation and Development (OECD) principles. The model Applicability Domain (AD) and mechanistic interpretation were explored. Considering the very complex nature of DBPs, the established QSAR models performed very well with respect to goodness-of-fit, robustness and predictability. The predicted values of Log P of DBPs by the QSAR models were found to be significant, with a correlation coefficient R² from 81% to 98%. The Leverage Approach by Williams Plot was applied to detect and remove outliers, consequently increasing R² by approximately 2% to 13% for the different DBP classes. The developed QSAR models were statistically validated for their predictive power by the Leave-One-Out (LOO) and Leave-Many-Out (LMO) cross-validation methods. Finally, Monte Carlo simulation was used to assess the variations and inherent uncertainties in the QSAR models of Log P and to determine the most influential parameters in connection with Log P prediction. The developed QSAR models in this dissertation will have a broad applicability domain because the research data set covered six out of eight common DBP classes, including halogenated alkanes, halogenated alkenes, halogenated aromatics, halogenated aldehydes, halogenated ketones, and halogenated carboxylic acids, which have been brought to the attention of regulatory agencies in recent years. Furthermore, the QSAR models are suitable for the prediction of similar DBP compounds within the same applicability domain. The selection and integration of the various methodologies developed in this research may also benefit future research in similar fields.
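A minimal sketch of the modelling workflow described above, a multiple linear regression of Log P on ELUMO, NCl and NC with leave-one-out cross-validation; the CSV file and column names are placeholder assumptions, not the study's data set.

```python
# Sketch only: 'dbp_descriptors.csv' and its columns are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

data = pd.read_csv("dbp_descriptors.csv")          # hypothetical descriptor table
X = data[["ELUMO", "NCl", "NC"]]
y = data["LogP"]

model = LinearRegression().fit(X, y)
print("fitted R² :", r2_score(y, model.predict(X)))

# Leave-one-out cross-validation of the same specification.
loo_pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print("LOO Q²    :", r2_score(y, loo_pred))
```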