32 results for computational models

in CentAUR: Central Archive, University of Reading - UK


Relevance: 70.00%

Abstract:

The hierarchical and "bob" (branch-on-branch) models are tube-based computational models recently developed for predicting the linear rheology of general mixtures of polydisperse branched polymers. The two models are built on a similar tube-theory framework but differ in their numerical implementation and in the details of their relaxation mechanisms. We present a detailed overview of the similarities and differences of these models and examine the effects of these differences on the predicted linear viscoelastic properties of a set of representative branched polymer samples, in order to give a general picture of the performance of these models. Our analysis confirms that the hierarchical and bob models quantitatively predict the linear rheology of a wide range of branched polymer melts, but also indicates that there is still no unique solution covering all types of branched polymers without case-by-case adjustment of parameters such as the dilution exponent α and the factor p², which defines the hopping distance of a branch point relative to the tube diameter. An updated version of the hierarchical model, with improved computational efficiency and refined relaxation mechanisms, is introduced and used in these analyses.
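
To make concrete how the two case-by-case parameters mentioned above enter tube-model calculations, here is a minimal sketch (not the hierarchical or bob codes themselves) assuming the standard forms: a tube dilated to unrelaxed fraction φ has diameter a(φ) = a₀φ^(−α/2), and a branch point hopping a fraction p of that diameter once per arm relaxation time τ_arm has diffusivity D = p²a²/(2τ_arm). All numerical values are illustrative.

```python
import numpy as np

# Minimal sketch (not the hierarchical or bob codes themselves) of how the
# two case-by-case parameters enter tube-model calculations. Assumed forms:
# dilated tube diameter a(phi) = a0 * phi**(-alpha/2), and branch-point
# diffusivity D = p^2 * a^2 / (2 * tau_arm). Values below are illustrative.

def dilated_tube_diameter(a0, phi, alpha=4.0 / 3.0):
    """Tube diameter after dynamic dilution to unrelaxed fraction phi."""
    return a0 * phi ** (-alpha / 2.0)

def branch_point_diffusivity(p2, a, tau_arm):
    """Curvilinear diffusivity of a branch point with hop-distance factor p^2."""
    return p2 * a ** 2 / (2.0 * tau_arm)

a = dilated_tube_diameter(a0=4.0e-9, phi=0.5)                  # metres
D = branch_point_diffusivity(p2=1.0 / 12.0, a=a, tau_arm=1e-3)
print(f"dilated tube diameter: {a:.2e} m, hop diffusivity: {D:.2e} m^2/s")
```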

Relevance: 60.00%

Abstract:

During fatigue tests of cortical bone specimens, non-zero strains occur at the unloaded (zero-stress) portion of the cycle and progressively accumulate as the test progresses. This non-zero strain is hypothesised to be mostly, if not entirely, describable as creep. This work examines the rate of accumulation of this strain and quantifies its stress dependency. A published relationship determined from creep tests of cortical bone (Journal of Biomechanics 21 (1988) 623) is combined with knowledge of the stress history during fatigue testing to derive an expression for the amount of creep strain in fatigue tests. Fatigue tests on 31 bone samples from four individuals showed strong correlations between creep strain rate and both stress and "normalised stress" (σ/E) during tensile fatigue testing (0–T). Combined results were good (r² = 0.78), and differences between the various individuals, in particular, vanished when effects were examined against normalised stress values. Constants of the regression were equivalent to constants derived in creep tests. The universality of the results across four different individuals of both sexes shows great promise for use in computational models of fatigue in bone structures.
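
The stress dependency described above is commonly expressed as a power law, creep strain rate dε/dt = C(σ/E)ⁿ, which is linear on log-log axes. Below is a minimal fitting sketch; the data points are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# Sketch of the power-law fit implied above: creep strain rate modelled as
# d(eps)/dt = C * (sigma/E)**n, a straight line on log-log axes of rate
# versus normalised stress. The data points are invented for illustration.

sigma_over_E = np.array([4e-3, 5e-3, 6e-3, 7e-3, 8e-3])   # normalised stress
creep_rate = np.array([2e-6, 9e-6, 3e-5, 9e-5, 2e-4])     # strain / second

# Linear least squares in log space: log(rate) = log(C) + n * log(sigma/E)
n, logC = np.polyfit(np.log(sigma_over_E), np.log(creep_rate), 1)
r = np.corrcoef(np.log(sigma_over_E), np.log(creep_rate))[0, 1]
print(f"n = {n:.2f}, C = {np.exp(logC):.3g}, r^2 = {r**2:.2f}")
```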

Relevance: 60.00%

Abstract:

One of the most pervasive concepts underlying computational models of information processing in the brain is linear input integration of rate-coded univariate information by neurons. After a suitable learning process, this results in neuronal structures that statically represent knowledge as a vector of real-valued synaptic weights. Although this general framework has contributed to the many successes of connectionism, in this paper we argue that for all but the most basic of cognitive processes a more complex, multivariate, dynamic neural coding mechanism is required: knowledge should not be spatially bound to a particular neuron or group of neurons. We conclude the paper with a discussion of a simple experiment that illustrates dynamic knowledge representation in a spiking-neuron connectionist system.
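
As a concrete reference point for the spiking-neuron alternative discussed above, here is a minimal leaky integrate-and-fire sketch in which information can be carried by spike timing and transient dynamics rather than a static weight vector; all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: dV/dt = (-V + I) / tau, with a
# spike and reset whenever V crosses threshold. Parameters are illustrative.

def lif_spike_times(input_current, dt=1e-4, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Return the spike times produced by a current trace sampled every dt."""
    v, spikes = 0.0, []
    for step, i_t in enumerate(input_current):
        v += dt * (-v + i_t) / tau          # forward-Euler membrane update
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

t = np.arange(0.0, 0.2, 1e-4)
print(lif_spike_times(np.full(t.shape, 1.5))[:5])   # a regular spike train
```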

Relevance: 60.00%

Abstract:

Terahertz (THz) frequency radiation, 0.1 THz to 20 THz, is being investigated for biomedical imaging applications following the introduction of pulsed THz sources that produce picosecond pulses and function at room temperature. Owing to the broadband nature of the radiation, spectral and temporal information is available from radiation that has interacted with a sample; this information is exploited in the development of biomedical imaging tools and sensors. In this work, models to aid interpretation of broadband THz spectra were developed and evaluated. THz radiation lies on the boundary between regions best considered using a deterministic electromagnetic approach and those better analysed using a stochastic approach incorporating quantum mechanical effects, so two computational models to simulate the propagation of THz radiation in an absorbing medium were compared. The first was a thin film analysis and the second a stochastic Monte Carlo model. The Cole–Cole model was used to predict the variation with frequency of the physical properties of the sample and scattering was neglected. The two models were compared with measurements from a highly absorbing water-based phantom. The Monte Carlo model gave a prediction closer to experiment over 0.1 to 3 THz. Knowledge of the frequency-dependent physical properties, including the scattering characteristics, of the absorbing media is necessary. The thin film model is computationally simple to implement but is restricted by the geometry of the sample it can describe. The Monte Carlo framework, despite being initially more complex, provides greater flexibility to investigate more complicated sample geometries.
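
For reference, here is a sketch of the Cole–Cole step mentioned above, which supplies the frequency-dependent properties fed into either propagation model. The parameter values are typical literature figures for liquid water at room temperature and are quoted here only as assumptions.

```python
import numpy as np

# Sketch of the Cole-Cole dispersion supplying frequency-dependent optical
# properties. Parameter values are typical literature figures for liquid
# water at room temperature, assumed here for illustration.

def cole_cole_permittivity(freq_hz, eps_inf=3.3, eps_s=78.4, tau=8.3e-12, alpha=0.02):
    """eps(w) = eps_inf + (eps_s - eps_inf) / (1 + (i*w*tau)**(1 - alpha))."""
    w = 2 * np.pi * freq_hz
    return eps_inf + (eps_s - eps_inf) / (1 + (1j * w * tau) ** (1 - alpha))

def absorption_coefficient(freq_hz):
    """Power absorption mu_a = 2*w*kappa/c, with n - i*kappa = sqrt(eps)."""
    c = 2.998e8
    n_complex = np.sqrt(cole_cole_permittivity(freq_hz))
    kappa = -n_complex.imag                 # eps = eps' - i*eps'' convention
    return 2 * (2 * np.pi * freq_hz) * kappa / c    # in 1/m

print(absorption_coefficient(np.array([0.5e12, 1.0e12, 2.0e12])))
```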

Relevance: 60.00%

Abstract:

The different triplet sequences in high-molecular-weight aromatic copolyimides comprising pyromellitimide units ("I") flanked by either ether-ketone ("K") or ether-sulfone ("S") residues show different binding strengths for pyrene-based tweezer-molecules. Such molecules bind primarily to the diimide unit through complementary π-π-stacking and hydrogen bonding. However, as shown by the magnitudes of ¹H NMR complexation shifts and tweezer-polymer binding constants, the triplet "SIS" binds tweezer-molecules more strongly than "KIS", which in turn binds such molecules more strongly than "KIK". Computational models of tweezer-polymer binding, together with single-crystal X-ray analyses of tweezer complexes with macrocyclic ether-imides, reveal that the variations in binding strength between the different triplet sequences arise from the different conformational preferences of aromatic rings at diarylketone and diarylsulfone linkages. These preferences determine whether or not chain-folding and secondary π-π-stacking occur between the arms of the tweezer-molecule and the 4,4'-biphenylene units that flank the central diimide residue.

Relevance: 60.00%

Abstract:

Volume determination of tephra deposits is necessary for the assessment of the dynamics and hazards of explosive volcanoes. Several methods have been proposed during the past 40 years that include the analysis of crystal concentration of large pumices, integrations of various thinning relationships, and the inversion of field observations using analytical and computational models. Regardless of their strong dependence on tephra-deposit exposure and distribution of isomass/isopach contours, empirical integrations of deposit thinning trends still represent the most widely adopted strategy due to their practical and fast application. The most recent methods involve the best fitting of thinning data using various exponential segments or a power-law curve on semilog plots of thickness (or mass/area) versus square root of isopach area. The exponential method is mainly sensitive to the number and the choice of straight segments, whereas the power-law method can better reproduce the natural thinning of tephra deposits but is strongly sensitive to the proximal or distal extreme of integration. We analyze a large data set of tephra deposits and propose a new empirical method for the determination of tephra-deposit volumes that is based on the integration of the Weibull function. The new method shows a better agreement with observed data, reconciling the debate on the use of the exponential versus power-law method. In fact, the Weibull best fitting only depends on three free parameters, can well reproduce the gradual thinning of tephra deposits, and does not depend on the choice of arbitrary segments or of arbitrary extremes of integration.
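
A hedged sketch of a Weibull-type thinning fit of the kind proposed above, assuming the form T(x) = θ(x/λ)^(n−2)·exp[−(x/λ)ⁿ], with x the square root of isopach area; under this assumed form the deposit volume integrates in closed form to V = 2θλ²/n. The thinning data below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of a Weibull-type thinning fit, assuming the form
# T(x) = theta * (x/lam)**(n-2) * exp(-(x/lam)**n), x = sqrt(isopach area);
# this form integrates in closed form to V = 2*theta*lam**2/n.
# The thinning data below are invented for illustration.

def weibull_thinning(x, theta, lam, n):
    return theta * (x / lam) ** (n - 2) * np.exp(-((x / lam) ** n))

x_km = np.array([2.0, 5.0, 10.0, 20.0, 40.0])       # sqrt(isopach area), km
thick_m = np.array([3.2, 1.1, 0.35, 0.08, 0.01])    # thickness, m

(theta, lam, n), _ = curve_fit(weibull_thinning, x_km, thick_m,
                               p0=(1.0, 10.0, 1.0), bounds=(1e-6, np.inf))
volume_km3 = 2 * (theta * 1e-3) * lam ** 2 / n      # thickness m -> km
print(f"theta={theta:.3g} m, lambda={lam:.3g} km, n={n:.3g}, V={volume_km3:.3g} km^3")
```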

Relevance: 60.00%

Abstract:

Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising solution for reducing the model's dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on a tensor decomposition that allows the simultaneous projection of an input tensor onto more than one direction along each mode. In practice, however, multi-dimensional data are collected under the same or very similar conditions, so that the data share some common latent components while also having their own independent parameters for each regression task. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies both the common components of parameters across all the regression tasks and the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, giving a lower memory cost than tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
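
To illustrate why a Tucker-structured coefficient makes estimation manageable, here is a small numerical sketch (not the paper's estimator): the parameter count of a full coefficient tensor versus its Tucker form, and the forward model y = ⟨X, B⟩ with B reconstructed from a core and factor matrices. Dimensions and ranks are illustrative.

```python
import numpy as np

# Illustrative sketch (not the paper's estimator): a full coefficient tensor
# for a 64 x 64 x 30 covariate has 122,880 entries, while the Tucker form
# B = G x1 U1 x2 U2 x3 U3 with core ranks (4, 4, 3) needs about 650 numbers.

dims, ranks = (64, 64, 30), (4, 4, 3)
full_params = int(np.prod(dims))
tucker_params = int(np.prod(ranks)) + sum(d * r for d, r in zip(dims, ranks))
print(full_params, tucker_params)                   # 122880 vs 650

def tucker_to_full(core, factors):
    """Reconstruct B = core x1 U1 x2 U2 x3 U3 via successive mode products."""
    B = core
    for mode, U in enumerate(factors):
        B = np.tensordot(U, B, axes=(1, mode))      # contract along this mode
        B = np.moveaxis(B, 0, mode)                 # restore the mode order
    return B

rng = np.random.default_rng(0)
core = rng.normal(size=ranks)
factors = [rng.normal(size=(d, r)) for d, r in zip(dims, ranks)]
B = tucker_to_full(core, factors)
X = rng.normal(size=dims)
print(np.tensordot(X, B, axes=3))                   # scalar response <X, B>
```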

Relevance: 60.00%

Abstract:

Extratropical transition (ET) has eluded objective identification since the realisation of its existence in the 1970s. Recent advances in numerical, computational models have provided data of higher resolution than previously available. In conjunction with this, an objective characterisation of the structure of a storm has now become widely accepted in the literature. Here we present a method that combines these two advances to provide an objective definition of ET. The approach involves applying K-means clustering to isolate different life-cycle stages of cyclones and then analysing the progression through these stages. This methodology is tested by applying it to five recent years of European Centre for Medium-Range Weather Forecasts operational analyses. The method is able to determine the general characteristics of ET in the Northern Hemisphere: between 2008 and 2012, 54% (±7, 32 of 59) of Northern Hemisphere tropical storms are estimated to have undergone ET. There is great variability across basins and time of year. To fully capture all instances of ET, it is necessary to introduce and characterise multiple pathways through transition; only one of the three transition types needed has previously been well studied. A brief description of the alternative transition types is given, along with illustrative storms, to assist further study.
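
Here is a sketch of the clustering-and-tracking idea described above, using scikit-learn's K-means on assumed cyclone-phase-space-style structure features (a thermal-asymmetry parameter and a thermal-wind parameter); the feature choice, cluster count, and synthetic data are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch: K-means on per-time storm-structure features, with ET flagged when
# a storm's track moves between life-cycle clusters. Features, cluster count
# and synthetic data are assumptions, not the paper's exact setup.

rng = np.random.default_rng(1)
warm_core = rng.normal([-5.0, 5.0], [3.0, 2.0], size=(200, 2))
cold_core = rng.normal([20.0, -8.0], [5.0, 3.0], size=(200, 2))
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(
    np.vstack([warm_core, cold_core]))

def undergoes_et(track, model):
    """True if a time-ordered track starts and ends in different clusters."""
    labels = model.predict(track)
    return labels[0] != labels[-1]

track = np.array([[-4.0, 5.5], [2.0, 1.0], [12.0, -4.0], [22.0, -9.0]])
print(undergoes_et(track, km))
```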

Relevance: 40.00%

Abstract:

Novel 'tweezer-type' complexes that exploit the interactions between π-electron-rich pyrenyl groups and π-electron-deficient diimide units have been designed and synthesised. The component molecules leading to complex formation were accessed readily from commercially available starting materials through short and efficient syntheses. Analysis of the resulting complexes, using the visible charge-transfer band, revealed association constants that increased sequentially from 130 to 11,000 M⁻¹ as increasing numbers of π-π-stacking interactions were introduced into the systems. Computational modelling was used to analyse the structures of these complexes, revealing low-energy chain-folded conformations for both components, which readily allow close, multiple π-π-stacking and hydrogen bonding to be achieved. In this paper, we give details of our initial studies of these complexes and outline how their behaviour could provide a basis for designing self-healing polymer blends for use in adaptive coating systems.
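
The association constants quoted above are the kind of quantity obtained by fitting the growth of the charge-transfer band during a titration to a 1:1 binding isotherm. Below is a minimal fitting sketch; the concentrations and absorbances are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of extracting a 1:1 association constant from the growth of the
# charge-transfer band during a titration, using the exact 1:1 binding
# isotherm. Concentrations and absorbances are invented for illustration.

H0 = 1.0e-3                                    # fixed host concentration (M)
G0 = np.array([0.5, 1, 2, 4, 8, 16]) * 1e-3    # guest concentrations (M)
A_obs = np.array([0.08, 0.14, 0.23, 0.33, 0.41, 0.46])

def ct_absorbance(G0, Ka, eps_b):
    """A = eps*b*[HG], with [HG] from the exact 1:1 isotherm."""
    s = H0 + G0 + 1.0 / Ka
    HG = 0.5 * (s - np.sqrt(s ** 2 - 4 * H0 * G0))
    return eps_b * HG

(Ka, eps_b), _ = curve_fit(ct_absorbance, G0, A_obs, p0=(500.0, 600.0))
print(f"Ka = {Ka:.0f} M^-1, eps*b = {eps_b:.0f} M^-1")
```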

Relevance: 30.00%

Abstract:

Although climate models have been improving in accuracy and efficiency over the past few decades, it now seems that these incremental improvements may be slowing. As tera/petascale computing becomes massively parallel, our legacy codes are less suitable, and even with the increased resolution that we are now beginning to use, these models cannot represent the multiscale nature of the climate system. This paper argues that it may be time to reconsider the use of adaptive mesh refinement for weather and climate forecasting in order to achieve good scaling and representation of the wide range of spatial scales in the atmosphere and ocean. Furthermore, the challenge of introducing living organisms and human responses into climate system models is only just beginning to be tackled. We do not yet have a clear framework in which to approach the problem, but it is likely to cover such a huge number of different scales and processes that radically different methods may have to be considered. The challenges of multiscale modelling and petascale computing provide an opportunity to consider a fresh approach to numerical modelling of the climate (or Earth) system, which takes advantage of the computational fluid dynamics developments in other fields and brings new perspectives on how to incorporate Earth system processes. This paper reviews some of the current issues in climate (and, by implication, Earth) system modelling, and asks the question whether a new generation of models is needed to tackle these problems.
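
As a cartoon of the adaptive-mesh-refinement idea raised above (and nothing like a climate-model dynamical core), the sketch below flags cells whose solution gradient exceeds a threshold and halves them, concentrating resolution around a sharp front.

```python
import numpy as np

# Cartoon of adaptive mesh refinement in one dimension: halve any cell whose
# solution slope exceeds a threshold, concentrating points near a sharp
# front. This illustrates the concept only; it is not a dynamical core.

def refine_once(x, u, threshold):
    """Insert a midpoint into every cell whose secant slope exceeds threshold."""
    new_x = [x[0]]
    for a, b in zip(x[:-1], x[1:]):
        if abs(u(b) - u(a)) / (b - a) > threshold:
            new_x.append(0.5 * (a + b))        # refine this cell
        new_x.append(b)
    return np.array(new_x)

front = lambda x: np.tanh(50 * (x - 0.5))      # sharp front at x = 0.5
x = np.linspace(0.0, 1.0, 11)
for _ in range(3):                             # three refinement sweeps
    x = refine_once(x, front, threshold=2.0)
print(len(x), x[np.abs(x - 0.5) < 0.05])       # points cluster near the front
```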

Relevance: 30.00%

Abstract:

Applications such as neuroscience, telecommunication, online social networking, transport and retail trading give rise to connectivity patterns that change over time. In this work, we address the resulting need for network models and computational algorithms that deal with dynamic links. We introduce a new class of evolving range-dependent random graphs that gives a tractable framework for modelling and simulation. We develop a spectral algorithm for calibrating a set of edge ranges from a sequence of network snapshots and give a proof of principle illustration on some neuroscience data. We also show how the model can be used computationally and analytically to investigate the scenario where an evolutionary process, such as an epidemic, takes place on an evolving network. This allows us to study the cumulative effect of two distinct types of dynamics.
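
A simplified sketch of the scenario described above: an SIS-type epidemic running over a sequence of range-dependent random-graph snapshots whose edge probability decays with node separation. Resampling each snapshot independently is a simplifying assumption here; the model in the paper evolves edges with temporal correlations.

```python
import numpy as np

# Simplified sketch: an SIS-type epidemic over a sequence of range-dependent
# random-graph snapshots with edge probability alpha * lam**(|i-j| - 1), a
# Grindrod-style form. Independent resampling of snapshots is a
# simplification; the paper's model evolves edges with temporal correlations.

rng = np.random.default_rng(2)
N, alpha, lam, beta, gamma = 100, 0.8, 0.7, 0.3, 0.1

def snapshot():
    """One range-dependent random graph as a symmetric 0/1 adjacency matrix."""
    i, j = np.triu_indices(N, k=1)
    p = alpha * lam ** (np.abs(i - j) - 1)
    A = np.zeros((N, N))
    A[i, j] = rng.random(i.size) < p
    return A + A.T

infected = np.zeros(N, dtype=bool)
infected[N // 2] = True
for _ in range(50):                            # epidemic across 50 snapshots
    A = snapshot()
    pressure = A @ infected                    # infected neighbours per node
    catch = rng.random(N) < 1 - (1 - beta) ** pressure
    recover = rng.random(N) < gamma
    infected = (infected & ~recover) | (~infected & catch)
print(infected.sum(), "infected after 50 steps")
```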

Relevance: 30.00%

Abstract:

MOTIVATION: The accurate prediction of the quality of 3D models is a key component of successful protein tertiary structure prediction methods. Currently, clustering- or consensus-based Model Quality Assessment Programs (MQAPs) are the most accurate methods for predicting 3D model quality; however, they are often CPU-intensive, as they carry out multiple structural alignments in order to compare numerous models. In this study, we describe ModFOLDclustQ - a novel MQAP that compares 3D models of proteins without the need for CPU-intensive structural alignments, by utilising the Q measure for model comparisons. The ModFOLDclustQ method is benchmarked against the top established methods in terms of both accuracy and speed. In addition, the ModFOLDclustQ scores are combined with those from our older ModFOLDclust method to form a new method, ModFOLDclust2, that aims to provide increased prediction accuracy with negligible computational overhead.

RESULTS: The ModFOLDclustQ method is competitive with leading clustering-based MQAPs for the prediction of global model quality, yet it is up to 150 times faster than the previous version of the ModFOLDclust method at comparing models of small proteins (<60 residues) and over 5 times faster at comparing models of large proteins (>800 residues). Furthermore, a significant improvement in accuracy can be gained over the previous clustering-based MQAPs by combining the scores from ModFOLDclustQ and ModFOLDclust to form the new ModFOLDclust2 method, with little impact on the overall time taken for each prediction.

AVAILABILITY: The ModFOLDclustQ and ModFOLDclust2 methods are available to download from http://www.reading.ac.uk/bioinf/downloads/

CONTACT: l.j.mcguffin@reading.ac.uk
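
The alignment-free comparison at the heart of ModFOLDclustQ can be illustrated with a Q-measure-style score: agreement between the intra-molecular Cα distance matrices of two models of the same sequence, which is superposition-invariant and therefore needs no structural alignment. The exact weighting used by ModFOLDclustQ may differ; this sketch is illustrative.

```python
import numpy as np

# Illustration of an alignment-free, Q-measure-style comparison: agreement
# between the intra-molecular C-alpha distance matrices of two models of the
# same sequence. Distance matrices are superposition-invariant, so no
# structural alignment is needed. The Gaussian width sigma is an assumption.

def q_score(ca_a, ca_b, sigma=2.0):
    """ca_a, ca_b: (N, 3) C-alpha coordinates of two models of one sequence."""
    da = np.linalg.norm(ca_a[:, None] - ca_a[None, :], axis=-1)
    db = np.linalg.norm(ca_b[:, None] - ca_b[None, :], axis=-1)
    iu = np.triu_indices(len(ca_a), k=2)       # skip self and bonded pairs
    return np.exp(-((da[iu] - db[iu]) ** 2) / (2 * sigma ** 2)).mean()

rng = np.random.default_rng(3)
model_a = rng.normal(size=(60, 3)) * 10
model_b = model_a + rng.normal(scale=0.5, size=(60, 3))   # slight perturbation
print(f"Q = {q_score(model_a, model_b):.3f}")  # near 1 for very similar models
```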

Relevance: 30.00%