918 results for maximum contrast analysis
Abstract:
The 11-yr solar cycle temperature response to spectrally resolved solar irradiance changes and associated ozone changes is calculated using a fixed dynamical heating (FDH) model. Imposed ozone changes are from satellite observations, in contrast to some earlier studies. A maximum of 1.6 K is found in the equatorial upper stratosphere and a secondary maximum of 0.4 K in the equatorial lower stratosphere, forming a double peak in the vertical. The upper maximum is primarily due to the irradiance changes, while the lower maximum is due to the imposed ozone changes. The results compare well with analyses using the 40-yr ECMWF Re-Analysis (ERA-40) and NCEP/NCAR datasets. The equatorial lower stratospheric structure is reproduced even though, by definition, the FDH calculations exclude dynamically driven temperature changes, suggesting an important role for an indirect dynamical effect through ozone redistribution. The results also suggest that differences between the Stratospheric Sounding Unit (SSU)/Microwave Sounding Unit (MSU) and ERA-40 estimates of the solar cycle signal can be explained by the poor vertical resolution of the SSU/MSU measurements. The adjusted radiative forcing of climate change is also investigated. The forcing due to irradiance changes is 0.14 W m⁻², which is only 78% of the value obtained by the standard method of simply scaling the total solar irradiance (TSI) change. The difference arises because much of the change in TSI is at wavelengths where ozone absorbs strongly. The forcing due to the ozone change is only 0.004 W m⁻², owing to strong compensation between negative shortwave and positive longwave forcings.
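As a back-of-envelope check of the figures quoted above (the variable names are illustrative, not from the paper): if the spectrally resolved forcing of 0.14 W m⁻² is 78% of the TSI-scaled value, the implied TSI-scaled forcing is about 0.18 W m⁻².

```python
# Hedged back-of-envelope check of the quoted percentages.
spectrally_resolved_forcing = 0.14       # W m^-2, quoted in the abstract
fraction_of_tsi_scaled = 0.78            # "only 78% of the value obtained by ... scaling"
tsi_scaled_forcing = spectrally_resolved_forcing / fraction_of_tsi_scaled
print(round(tsi_scaled_forcing, 2))      # -> 0.18 (W m^-2)
```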
Abstract:
The structure and size of the eyes generated in numerically simulated tropical cyclones and polar lows have been studied. A primitive-equation numerical model simulated systems in which the structures of the eyes formed were consistent with available observations. Whilst the tropical cyclone eyes generated were usually rapidly rotating, it appeared impossible for an eye formed in a system with a polar environment to develop this type of structure. The polar low eyes were found to be unable to warm through the subsidence of air with high values of potential temperature, as the environment was approximately statically neutral. Factors affecting the size of the eye were investigated through a series of controlled experiments. In mature tropical cyclone systems the size of the eye was insensitive to small changes in initial conditions, surface friction and latent and sensible heating from the ocean. In contrast, the eye size was strongly dependent on these parameters in the mature polar lows. Consistent with the findings, a mechanism is proposed in which the size of the eye in simulated polar lows is controlled by the strength of subsidence within it.
Abstract:
Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of constructing the nonparametric mixture model estimator are reviewed and set into perspective. The maximum likelihood estimator of the mixing distribution is constructed for any number of components, up to the global nonparametric maximum likelihood bound, using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered, with some generalisations of Zelterman's estimator. All computations are done with CAMCR, special software developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems in using the mixture model-based estimators are highlighted.
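Chao's estimator mentioned above admits a one-line implementation from the capture-frequency counts; a minimal sketch (the function name is mine, not CAMCR's):

```python
def chao_estimate(n_observed, f1, f2):
    """Chao's lower-bound estimate of population size.

    n_observed: number of distinct units seen at least once
    f1, f2:     number of units seen exactly once / exactly twice
    Estimate:   N_hat = n_observed + f1^2 / (2 * f2)
    """
    return n_observed + f1 * f1 / (2.0 * f2)

# e.g. 50 distinct units observed, 10 singletons, 5 doubletons
print(chao_estimate(50, 10, 5))   # -> 60.0
```

The unobserved part of the population is inferred entirely from the singleton and doubleton counts, which is why the estimate is a lower bound.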
Abstract:
Glycogen phosphorylase (GP) catalyzes the phosphorylation of glycogen to Glc-1-P. Because of its fundamental role in the metabolism of glycogen, GP has been the target of a systematic structure-assisted design of inhibitory compounds, which could be of value in the therapeutic treatment of type 2 diabetes mellitus. The most potent catalytic-site inhibitor of GP identified to date is the spirohydantoin of glucopyranose (hydan). In this work, we employ MD free energy simulations to calculate the relative binding affinities for GP of hydan and two spirohydantoin analogues, methyl-hydan and n-hydan, in which a hydrogen atom is replaced by a methyl or amino group, respectively. The results are compared with the experimental relative affinities of these ligands, estimated by kinetic measurements of the ligand inhibition constants. The calculated binding affinity for methyl-hydan (relative to hydan) is 3.75 ± 1.4 kcal/mol, in excellent agreement with the experimental value (3.6 ± 0.2 kcal/mol). For n-hydan, the calculated value is 1.0 ± 1.1 kcal/mol, somewhat smaller than the experimental result (2.3 ± 0.1 kcal/mol). A free energy decomposition analysis shows that hydan makes optimal interactions with protein residues and specific water molecules in the catalytic site. In the other two ligands, structural perturbations of the active site by the additional methyl or amino group reduce the corresponding binding affinities. The computed binding free energies are sensitive to the preference of a specific water molecule for two well-defined positions in the catalytic site. The behavior of this water is analyzed in detail, and the free energy profile for the translocation of the water between the two positions is evaluated. The results provide insights into the role of water molecules in modulating ligand binding affinities.
A comparison of the interactions between a set of ligands and their surrounding groups in X-ray structures is often used in the interpretation of binding free energy differences and in guiding the design of new ligands. For the systems in this work, such an approach fails to estimate the order of relative binding strengths, in contrast to the rigorous free energy treatment.
Abstract:
We have developed a new method for the analysis of voids in proteins (defined as empty cavities not accessible to solvent). This method combines analysis of individual discrete voids with analysis of packing quality. While these are different aspects of the same effect, they have traditionally been analysed using different approaches. The method has been applied to the calculation of total void volume and maximum void size in a non-redundant set of protein domains and has been used to examine correlations between thermal stability and void size. The tumour-suppressor protein p53 has then been compared with the non-redundant data set to determine whether its low thermal stability results from poor packing. We found that p53 has average packing, but the detrimental effects of some previously unexplained mutations to p53 observed in cancer can be explained by the creation of unusually large voids.
Abstract:
Objectives: To assess the potential source of variation that the surgeon may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy, involving 43 surgeons. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, that is, a sparse binary outcome variable. A linear mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted using the method of maximum likelihood in SAS. Results: There were many convergence problems. These were resolved using a variety of approaches, including: treating all effects as fixed for the initial model building; modelling the variance of a parameter on a logarithmic scale; and centring of continuous covariates. The initial model building indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial; and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was no difference between surgeons. The statistical test may have lacked sufficient power; the variance estimates were small with large standard errors, indicating that their precision may be questionable.
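The hierarchical set-up described above (patients nested within 43 surgeons, a sparse ~10% binary outcome, a random surgeon effect on the logit scale) can be sketched as a simulation. All numbers except those quoted in the abstract (43 surgeons, ~1380 patients, ~10% event rate) are illustrative assumptions, and this is a data-generating sketch, not the trial's fitted model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_surgeons = 43
patients_per_surgeon = 32                 # ~1380 patients in total (assumed equal split)
surgeon_effect_sd = 0.3                   # assumed between-surgeon SD on the log-odds scale

u = rng.normal(0.0, surgeon_effect_sd, n_surgeons)   # random surgeon effects
baseline_logit = np.log(0.10 / 0.90)                 # ~10% complication rate overall
logits = baseline_logit + np.repeat(u, patients_per_surgeon)
p = 1.0 / (1.0 + np.exp(-logits))
complication = rng.binomial(1, p)         # one sparse binary outcome per patient

print(complication.mean())                # close to 0.10
```

With only ~138 events spread over 43 clusters, the variance component of such a model is weakly identified, which is consistent with the convergence problems the abstract reports.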
Abstract:
Growth patterns and cropping were evaluated over the season for the everbearing strawberry 'Everest' at a range of temperatures (15–27°C) in two light environments (ambient and 50% shade). The highest yield was recorded for unshaded plants grown at 23°C, but the optimum temperature for vegetative growth was 15°C. With increasing temperature, fruit number increased but fruit weight decreased. Fruit weight was also significantly reduced by shade, and although 'Everest' showed a degree of shade tolerance in vegetative growth, yield was consistently reduced by shade. Shade also reduced the number of crowns developed by the plants over the course of the season, emphasising that crown number was ultimately the limiting factor for yield potential. We conclude that, in contrast to Junebearers, which partition more assimilates to fruit at temperatures around 15°C (Le Miere et al., 1998), optimised cropping in the everbearer 'Everest' is achieved at the significantly higher temperature of 23°C. These findings have significance for commercial production, in which protection tends to reduce light levels but increase average temperature throughout the season.
Abstract:
Thermal non-destructive testing (NDT) is commonly used for assessing aircraft structures. This research work evaluates the potential of pulsed (transient) thermography for locating fixtures beneath aircraft skins in order to facilitate accurate automated assembly operations. Representative aluminium and carbon fibre aircraft skin-fixture assemblies were modelled using thermal modelling software. The assemblies were also experimentally investigated with an integrated pulsed thermographic evaluation system, as well as with a custom-built system incorporating a miniature uncooled camera. Modelling showed that the presence of an air gap between skin and fixture significantly reduced the thermal contrast developed, especially in aluminium. Experimental results show that fixtures can be located to an accuracy of 0.5 mm.
Abstract:
Phenolic compounds in wastewaters are difficult to treat using conventional biological techniques such as activated sludge processes because of their bio-toxic and recalcitrant properties and the high volumes released from various chemical, pharmaceutical and other industries. In the current work, a modified heterogeneous advanced Fenton process (AFP) is presented as a novel methodology for the treatment of phenolic wastewater. The modified AFP, which combines hydrodynamic cavitation generated using a liquid whistle reactor with the AFP, is a promising technology for wastewaters containing high organic content. The presence of hydrodynamic cavitation in the treatment scheme intensifies the Fenton process by generating additional free radicals. The turbulence produced during the hydrodynamic cavitation process also increases mass transfer rates as well as providing better contact between the pseudo-catalyst surfaces and the reactants. A multivariate design of experiments has been used to ascertain the influence of hydrogen peroxide dosage and iron catalyst loading on the oxidation performance of the modified AFP. Higher TOC removal rates were achieved with increased concentrations of hydrogen peroxide. In contrast, the effect of catalyst loading on the TOC removal rate was less important under the conditions used in this work, although there is an optimum value of this parameter. The concentration of iron species in the reaction solution was measured at 105 min, and its relationship with the catalyst loading and hydrogen peroxide level is presented.
Abstract:
A series of government initiatives has raised both the profile of ICT in the curriculum and the expectation that high-quality teaching and learning resources will be accessible across electronic networks. In order for e-learning resources such as websites to have the maximum educational impact, teachers need to be involved in their design and development. Use-case analysis provides a means of defining user requirements and other constraints in such a way that software developers can produce e-learning resources which reflect teachers' professional knowledge and support their classroom practice. It has some features in common with the participatory action research used to develop other aspects of classroom practice. Two case studies are presented: one involves the development of an on-line resource centred on transcripts of original historical documents; the other describes how 'Learning how to Learn', a major distributed research project funded under the ESRC Teaching and Learning Research Programme, is using use-case analysis to develop web resources and services.
Abstract:
A method is presented for determining the time to first division of individual bacterial cells growing on agar media. Bacteria were inoculated onto agar-coated slides and viewed by phase-contrast microscopy. Digital images of the growing bacteria were captured at intervals and the time to first division estimated by calculating the "box area ratio". This is the area of the smallest rectangle that can be drawn around an object, divided by the area of the object itself. The box area ratios of cells were found to increase suddenly during growth at a time that correlated with cell division as estimated by visual inspection of the digital images. This was caused by a change in the orientation of the two daughter cells that occurred when sufficient flexibility arose at their point of attachment. This method was used successfully to generate lag time distributions for populations of Escherichia coli, Listeria monocytogenes and Pseudomonas aeruginosa, but did not work with the coccoid organism Staphylococcus aureus. This method provides an objective measure of the time to first cell division, whilst automation of the data processing allows a large number of cells to be examined per experiment.
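The "box area ratio" defined above is straightforward to compute from a binary image mask; a minimal sketch (not the authors' code; it uses the axis-aligned bounding box, whereas the original "smallest rectangle" may allow rotation):

```python
import numpy as np

def box_area_ratio(mask):
    """Area of the smallest axis-aligned rectangle enclosing the object,
    divided by the area of the object itself (mask: 2D boolean array)."""
    ys, xs = np.nonzero(mask)
    box_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return box_area / mask.sum()

# A straight rod-shaped cell fills its bounding box (ratio = 1);
# two daughter cells bending away from each other do not (ratio > 1).
rod = np.zeros((5, 5), dtype=bool)
rod[2, 1:4] = True
print(box_area_ratio(rod))   # -> 1.0
```

The sudden jump in this ratio at division arises because the two daughter cells reorient relative to each other, so the object no longer fills its bounding rectangle.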
Abstract:
'Maximum Available Feedback' is Bode's term for the highest possible loop gain over a given bandwidth, with specified stability margins, in a single-loop feedback system. His asymptotic analysis allowed Bode to develop a methodology for achieving this. However, the actual system performance differs from that specified, owing to the use of asymptotic approximations; the author [2] has described how, for instance, the actual phase margin is often much lower than required when the bandwidth is high, and has proposed novel modifications to the asymptotes to address the issue. This paper gives some new analysis of such systems, showing that the method also contravenes Bode's definition of phase margin, and shows how the author's modifications can be used for different amounts of bandwidth.
Abstract:
Robot-mediated therapies offer a new approach to neurorehabilitation. This paper analyses the Fugl-Meyer data from the Gentle/S project and finds that the two intervention phases (sling suspension and robot-mediated therapy) contribute approximately equally to the further recovery of chronic stroke subjects (on average 27 months post stroke). Both sling suspension and robot-mediated interventions show a recovery over baseline, and further work is needed to establish the common factors in treatment, and to establish intervention protocols for each that will give individual subjects a maximum level of recovery.
Abstract:
We introduce transreal analysis as a generalisation of real analysis. We find that the generalisation of the real exponential and logarithmic functions is well defined for all transreal numbers. Hence, we derive well defined values of all transreal powers of all non-negative transreal numbers. In particular, we find a well defined value for zero to the power of zero. We also note that the computation of products via the transreal logarithm is identical to the transreal product, as expected. We then generalise all of the common, real, trigonometric functions to transreal functions and show that transreal (sin x)/x is well defined everywhere. This raises the possibility that transreal analysis is total, in other words, that every function and every limit is everywhere well defined. If so, transreal analysis should be an adequate mathematical basis for analysing the perspex machine - a theoretical, super-Turing machine that operates on a total geometry. We go on to dispel all of the standard counter "proofs" that purport to show that division by zero is impossible. This is done simply by carrying the proof through in transreal arithmetic or transreal analysis. We find that either the supposed counter proof has no content or else that it supports the contention that division by zero is possible. The supposed counter proofs rely on extending the standard systems in arbitrary and inconsistent ways and then showing, tautologously, that the chosen extensions are not consistent. This shows only that the chosen extensions are inconsistent and does not bear on the question of whether division by zero is logically possible. By contrast, transreal arithmetic is total and consistent so it defeats any possible "straw man" argument. Finally, we show how to arrange that a function has finite or else unmeasurable (nullity) values, but no infinite values. This arithmetical arrangement might prove useful in mathematical physics because it outlaws naked singularities in all equations.
Abstract:
Different types of mental activity are utilised as input in Brain-Computer Interface (BCI) systems. One such activity type is based on Event-Related Potentials (ERPs). The characteristics of ERPs are not visible in single trials, so averaging over a number of trials is necessary before the signals become usable. An improvement in ERP-based BCI operation and system usability could be obtained if the use of single-trial ERP data were possible. The method of Independent Component Analysis (ICA) can be utilised to separate single-trial recordings of ERP data into components that correspond to ERP characteristics, background electroencephalogram (EEG) activity and other components of non-cerebral origin. Choosing specific components and using them to reconstruct "denoised" single-trial data could improve the signal quality, thus allowing the successful use of single-trial data without the need for averaging. This paper assesses single-trial ERP signals reconstructed using a selection of estimated components from the application of ICA to the raw ERP data. Signal improvement is measured using contrast-to-noise measures. It was found that such analysis improves the signal quality in all single trials.
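The decompose-select-reconstruct pipeline described above can be sketched with FastICA from scikit-learn. The synthetic data and the template-correlation selection rule are illustrative assumptions, not the paper's procedure:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic "single trial": one ERP-like source mixed with noise sources.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
erp = np.exp(-((t - 0.3) ** 2) / 0.002)                # ERP-like waveform
sources = np.vstack([erp, rng.normal(size=(3, 500))])  # 1 ERP + 3 "EEG/noise" sources
X = rng.normal(size=(8, 4)) @ sources                  # 8 simulated electrodes

# Unmix into independent components (FastICA expects samples x channels).
ica = FastICA(n_components=4, random_state=0)
S = ica.fit_transform(X.T)                             # (samples, components)

# Keep only the component most correlated with an ERP template; zero the rest.
keep = np.argmax([abs(np.corrcoef(s, erp)[0, 1]) for s in S.T])
S_clean = np.zeros_like(S)
S_clean[:, keep] = S[:, keep]
X_clean = ica.inverse_transform(S_clean).T             # "denoised" single trial
```

In practice the selection step is the hard part: components corresponding to ERP activity must be distinguished from background EEG and non-cerebral artifacts before reconstruction.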