Abstract:
Deposition of insoluble prion protein (PrP) in the brain in the form of protein aggregates or deposits is characteristic of the ‘transmissible spongiform encephalopathies’ (TSEs). Understanding the growth and development of these PrP aggregates is important both in attempting to elucidate the pathogenesis of prion disease and in the development of treatments designed to prevent or inhibit the spread of prion pathology within the brain. Aggregation and disaggregation of proteins and the diffusion of substances into the developing aggregates (surface diffusion) are important factors in the development of protein aggregates. Mathematical models suggest that if aggregation/disaggregation or surface diffusion is the predominant factor, the size frequency distribution of the resulting protein aggregates in the brain should be described by a power-law or a log-normal model, respectively. This study tested this hypothesis for two different types of PrP deposit, viz., the diffuse and florid-type PrP deposits in patients with variant Creutzfeldt-Jakob disease (vCJD). The size distributions of the florid and diffuse plaques were fitted by a power-law function in 100% and 42% of the brain areas studied, respectively. By contrast, the size distributions of both types of plaque deviated significantly from a log-normal model in all brain areas. Hence, protein aggregation and disaggregation may be the predominant factor in the development of the florid plaques. A more complex combination of factors appears to be involved in the pathogenesis of the diffuse plaques. These results may be useful in the design of treatments to inhibit the development of protein aggregates in vCJD.
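The power-law versus log-normal comparison described above can be illustrated with standard distribution-fitting machinery: fit each candidate model to the observed aggregate sizes and compare goodness of fit, e.g. via the Kolmogorov–Smirnov distance. The following is a minimal pure-Python sketch, not the authors' method; function names and the KS-based comparison are illustrative assumptions.

```python
import math

def fit_power_law(sizes, x_min):
    """Hill / maximum-likelihood estimate of the power-law exponent for
    sizes at or above x_min: alpha = 1 + n / sum(ln(x / x_min))."""
    xs = [x for x in sizes if x >= x_min]
    return 1.0 + len(xs) / sum(math.log(x / x_min) for x in xs)

def power_law_cdf(x, alpha, x_min):
    """CDF of the continuous power law (Pareto) with exponent alpha."""
    return 1.0 - (x / x_min) ** (1.0 - alpha)

def lognormal_cdf(x, mu, sigma):
    """CDF of the log-normal distribution with log-mean mu, log-sd sigma."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def ks_distance(sizes, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF of the
    observed sizes and a candidate model CDF (smaller = better fit)."""
    xs = sorted(sizes)
    n = len(xs)
    return max(max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
               for i, x in enumerate(xs))
```

On a sample of plaque sizes, the model with the smaller KS distance (power law via `fit_power_law`, or log-normal with `mu`, `sigma` estimated from the log-sizes) would be the better-supported growth mechanism under this scheme.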
Abstract:
A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified.
In order to test the formalism fully, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to model mutation accurately for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
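The macroscopics this formalism tracks — cumulants of a quantity (such as fitness) within the population, plus the mean correlation measured as pairwise Hamming distance — can be computed directly from a population snapshot. A minimal sketch, with illustrative function names and an additive bit-count fitness assumed for concreteness:

```python
import itertools

def fitness_cumulants(population, fitness):
    """First three cumulants of the fitness distribution within the
    population: mean, variance, and third central moment (for a single
    distribution the third cumulant equals the third central moment)."""
    f = [fitness(g) for g in population]
    n = len(f)
    mean = sum(f) / n
    c2 = sum((x - mean) ** 2 for x in f) / n
    c3 = sum((x - mean) ** 3 for x in f) / n
    return mean, c2, c3

def mean_hamming(population):
    """Mean pairwise Hamming distance within the population -- the
    correlation macroscopic whose inclusion improves on the original
    formulation."""
    pairs = list(itertools.combinations(population, 2))
    return sum(sum(a != b for a, b in zip(g, h)) for g, h in pairs) / len(pairs)
```

Tracking how these few numbers evolve over generations, rather than the full microscopic population state, is the compact description the abstract refers to.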
Abstract:
The major aim of this research is to benchmark top Arab banks using the Data Envelopment Analysis (DEA) technique and to compare the results with those recently published in Mostafa (2007a,b) [Mostafa, M. M. (2007a). Modeling the efficiency of top Arab banks: A DEA–neural network approach. Expert Systems with Applications, doi:10.1016/j.eswa.2007.09.001; Mostafa, M. M. (2007b). Benchmarking top Arab banks’ efficiency through efficient frontier analysis. Industrial Management & Data Systems, 107(6), 802–823]. Data for 85 Arab banks were used to conduct the analysis of relative efficiency. Our findings indicate that (1) the efficiency of Arab banks reported in Mostafa (2007a,b) is incorrect, hence readers should exercise extra caution when using such results, and (2) the corrected efficiency scores suggest that there is potential for significant improvements in Arab banks. In summary, this study addresses some data and methodology issues in measuring the efficiency of Arab banks and, using the new results, highlights the importance of encouraging increased efficiency throughout the banking industry in the Arab world.
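DEA scores each decision-making unit (here, a bank) relative to an efficient frontier built from the whole sample; in general this requires solving one linear programme per unit, but in the single-input, single-output special case the CCR score reduces to comparing output/input ratios. The sketch below shows only that reduced case and is not the authors' code; the function name is hypothetical.

```python
def dea_ccr_single(inputs, outputs):
    """CCR relative efficiency for the single-input / single-output case:
    each unit's output/input ratio, normalised by the best ratio observed.
    A score of 1.0 marks a unit on the efficient frontier; scores below
    1.0 indicate the proportional improvement potential the study discusses."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

With multiple inputs and outputs (as in a real bank-efficiency study) each score instead comes from a linear programme over input/output weights, but the interpretation of the resulting 0–1 scores is the same.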
Abstract:
Supply chains are widely advocated as the new units of commercial competition, and developments have made the sharing of supply-chain-wide information increasingly common. Most organisations, however, still make operational decisions intended to maximise local organisational performance. With improved information sharing, a holistic focus for operational decisions should now be possible. The development of a pan-supply-chain performance framework requires an examination of the conditions under which holistic decisions provide benefits to either the individual enterprise or the complete supply chain. This paper presents the background and supporting methodology for a study of the impact of an overall supply chain performance metric framework upon local logistics decisions and the conditions under which such a framework would improve overall supply chain performance. The methodology concludes that a simulation approach using a functionally extended version of Gensym's e-SCOR model, together with case-based triangulation, is optimal. Copyright © 2007 Inderscience Enterprises Ltd.
Abstract:
Presentation of the progress made in modelling fibre agglomerate transport in the racetrack channel. Fibre agglomerates can be generated through the disruption of insulation materials during a loss-of-coolant accident (LOCA) in nuclear power plants (NPPs). The fibres can make their way to the containment sump strainers and lead to their blockage. This blockage can increase the pressure drop across the strainers, which in turn can lead to cavitation behind the strainers and in the recirculation pumps, resulting in a loss of emergency core cooling (ECC) water reaching the reactor. A small proportion of the fibres may also reach the reactor vessel. Therefore, reliable numerical models of the three-dimensional flow behaviour of the fibres must be developed, and the racetrack channel offers the chance to validate such models. The presentation describes the techniques involved and the results obtained from transient simulations of the whole channel.