23 results for Stopping.

in Aston University Research Archive


Relevance:

20.00%

Publisher:

Abstract:

The effectiveness of rapid and controlled heating of intact tissue to inactivate native enzymatic activity and prevent proteome degradation has been evaluated. Mouse brains were bisected immediately following excision, with one hemisphere being heat treated followed by snap freezing in liquid nitrogen while the other hemisphere was snap frozen immediately. Sections were cut by cryostatic microtome and analyzed by MALDI-MS imaging and minimal label 2-D DIGE, to monitor time-dependent relative changes in intensities of protein and peptide signals. Analysis by MALDI-MS imaging demonstrated that the relative intensities of markers varied across a time course (0-5 min) when the tissues were not stabilized by heat treatment. However, the same markers were seen to be stabilized when the tissues were heat treated before snap freezing. Intensity profiles for proteins indicative of both degradation and stabilization were generated when samples of treated and nontreated tissues were analyzed by 2-D DIGE, with protein extracted before and after a 10-min warming of samples. Thus, heat treatment of tissues at the time of excision is shown to prevent subsequent uncontrolled degradation of tissues at the proteomic level before any quantitative analysis, and to be compatible with downstream proteomic analysis.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.
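Since the comparison above hinges on how early stopping behaves in practice, a minimal sketch may help. The polynomial model, data, learning rate and patience below are hypothetical illustrations, not taken from the paper: gradient descent is run on a training split while a validation split decides when to stop.

```python
import numpy as np

# Toy early stopping: fit a degree-9 polynomial by gradient descent and
# keep the weights with the lowest validation error. All sizes and
# hyperparameters are illustrative assumptions.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = np.sin(3 * x) + 0.3 * rng.standard_normal(40)
X = np.vander(x, 10)                     # monomial features x^9 ... x^0
X_tr, y_tr = X[::2], y[::2]              # training split
X_va, y_va = X[1::2], y[1::2]            # validation split

w = np.zeros(X.shape[1])
best_w, best_err = w.copy(), np.inf
patience, bad = 50, 0
for step in range(5000):
    grad = X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
    w -= 0.1 * grad
    val_err = np.mean((X_va @ w - y_va) ** 2)
    if val_err < best_err:
        best_err, best_w, bad = val_err, w.copy(), 0
    else:
        bad += 1
        if bad > patience:               # validation error stopped improving
            break
```

Stopping at (or restoring) the validation minimum limits how far the weights move from their starting point, which is one way of seeing the close relationship to explicit regularization that the abstract refers to.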

Relevance:

10.00%

Publisher:

Abstract:

Mixture Density Networks (MDNs) are a well-established method for modelling conditional probability densities, which is useful for complex multi-valued functions where regression methods (such as MLPs) fail. In this paper we extend earlier research on a regularisation method for a special case of MDNs to the general case using evidence-based regularisation, and we show how the Hessian of the MDN error function can be evaluated using R-propagation. The method is tested on two data sets and compared with early stopping.
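As context for the MDN error function whose Hessian is discussed above, here is a minimal sketch of an MDN forward pass and its negative log-likelihood error. The architecture, layer sizes and parameter values are illustrative assumptions, not the paper's; the evidence-based regularisation itself is not shown.

```python
import numpy as np

# Minimal MDN sketch: an MLP maps input x to the parameters of a Gaussian
# mixture over the target t (mixing coefficients, means, widths), and the
# error is the negative log-likelihood of t under that mixture.
rng = np.random.default_rng(3)
n_in, n_hidden, n_kernels = 1, 5, 3      # illustrative sizes

W1 = 0.1 * rng.standard_normal((n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((3 * n_kernels, n_hidden))
b2 = np.zeros(3 * n_kernels)

def mdn_nll(x, t):
    h = np.tanh(W1 @ x + b1)                      # hidden layer
    z = W2 @ h + b2                               # raw mixture parameters
    z_pi, z_mu, z_sig = np.split(z, 3)
    pi = np.exp(z_pi) / np.exp(z_pi).sum()        # softmax mixing coefficients
    mu, sigma = z_mu, np.exp(z_sig)               # means; exp keeps widths > 0
    phi = np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return -np.log(pi @ phi)                      # negative log-likelihood

loss = mdn_nll(np.array([0.3]), 0.7)
```

Training minimises this error over a data set; the regularised variants in the abstracts above add a penalty term on the weights whose strength is set by the evidence framework rather than by early stopping.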

Relevance:

10.00%

Publisher:

Abstract:

This technical report contains all technical information and results from experiments in which Mixture Density Networks (MDNs) using an RBF network with fixed kernel means and variances were used to infer the wind direction from data from the ERS-II weather satellite. The regularisation is based on the evidence framework, and three different approximations were used to estimate the regularisation parameter. The results were compared with those obtained using `early stopping'.

Relevance:

10.00%

Publisher:

Abstract:

Mixture Density Networks are a principled method to model conditional probability density functions which are non-Gaussian. This is achieved by modelling the conditional distribution for each pattern with a Gaussian Mixture Model for which the parameters are generated by a neural network. This thesis presents a novel method to introduce regularisation in this context for the special case where the mean and variance of the spherical Gaussian kernels in the mixtures are fixed to predetermined values. Guidelines for how these parameters can be initialised are given, and it is shown how to apply the evidence framework to mixture density networks to achieve regularisation. This also provides an objective stopping criterion that can replace the `early stopping' methods that have previously been used. If the neural network used is an RBF network with fixed centres, this opens up new opportunities for improved initialisation of the network weights, which are exploited to start training relatively close to the optimum. The new method is demonstrated on two data sets. The first is a simple synthetic data set, while the second is a real-life data set, namely satellite scatterometer data used to infer the wind speed and wind direction near the ocean surface. For both data sets the regularisation method performs well in comparison with earlier published results. Ideas on how the constraint on the kernels may be relaxed to allow fully adaptable kernels are presented.

Relevance:

10.00%

Publisher:

Abstract:

Objective: The number of pharmaceutical items issued on prescription is continually rising and contributing to spiralling healthcare costs. Although there is some data highlighting the quantity, in terms of weight, of medicines returned specifically to community pharmacies, little is known about the specific details of such returns or other destinations for wasted medications. This pilot study was designed to investigate the types and amounts of medicines returned to both general practices (GPs) and associated local community pharmacies, and to determine the reasons why these medicines were returned. Method: The study was conducted in eight community pharmacies and five GP surgeries within East Birmingham over a 4-week period. Main outcome measure: Reason for return and details of returned medication. Results: A total of 114 returns were made during the study: 24 (21.1%) to GP surgeries and 90 (78.9%) to community pharmacies. The total returns comprised 340 items, of which 42 (12.4%) were returned to GPs and 298 (87.6%) to pharmacies, with the mean number of items per return being 1.8 and 3.3, respectively. Half of the returns in the study were attributed to the doctor changing or stopping the medicine; 23.7% of returns were recorded as excess supplies or clearout, often associated with a patient's death, and 3.5% of returns were related to adverse drug reactions. Cardiovascular drugs were the most commonly returned, amounting to 28.5% of the total drugs returned during the study. Conclusions: The results from this pilot study indicate that unused medicines impose a significant financial burden on the National Health Service as well as a social burden on the United Kingdom population. Further studies are examining the precise nature of returned medicines and possible solutions to these issues. © Springer 2005.

Relevance:

10.00%

Publisher:

Abstract:

Industrial development has had a major role in creating the situation where biodiverse materials and services essential for sustaining business are under threat. A key contributory factor to biodiversity decline is the cumulative impact of extended supply chain business operations. In order to contribute to stopping this decline, the industrial world needs to form a better understanding of the way it utilizes the business and biodiversity agenda in its wider operations. This thesis investigates perceptions of and attitudes to biodiversity from government, society and a wide cross-section of industry. The research includes the extent of corporate attention to, and use of, environmental business tools and guidelines in reporting on biodiversity issues. A case study of three companies from different industrial sectors is undertaken to observe procurement and related environmental management of their supply chains. The use of accredited and non-accredited environmental management systems (EMS) is analysed as a framework for introducing biodiversity aspects into supply chain management. The outcome is a methodology, which can be used either as a bespoke in-house biodiversity management system or within an accredited ISO 14001 EMS, for incorporating the assessment and management of the potential risks and opportunities involving environmental impacts on biodiversity of supply chain companies.

Relevance:

10.00%

Publisher:

Abstract:

The passage number and origin of two populations of Caco-2 cells influence their enterocyte-like characteristics. Caco-2 cells of passage number >90 from Novartis pharmaceutical company possess higher levels of expression of alkaline phosphatase and P-glycoprotein (P-gp) and a greater cellular uptake of Gly-L-Pro than those of passage number <40 from the American Type Culture Collection. High-P-gp-expressing Caco-2 cells have been developed through stepwise selection of the cells with doxorubicin. This newly-developed cell line (hereafter referred to as Type I) possesses approximately twice as much P-gp protein as non-exposed cells, restricts the transepithelial transport of vincristine in the apical-to-basolateral direction whilst facilitating its transport in the reverse direction, and accumulates less vincristine than non-exposed cells. There is no apparent evidence of the co-existence of the multidrug resistance protein (MRP) in Type I cells to account for the above-listed observations. Stopping the exposure for more than 28 days decreases the P-gp protein expression in previously doxorubicin-exposed Type I Caco-2 cells and reduces the magnitude of vincristine transepithelial fluxes in both directions to levels that are almost similar to those of non-exposed cells. Exposing Caco-2 cells to 0.25 µM 1α,25-dihydroxyvitamin D3 induces their expression of cytochrome P450 3A4 (CYP3A4) protein to a level that is equivalent to that from isolated human jejunal cells. Under the same treatment, doxorubicin-exposed (Type I) cells metabolise midazolam poorly and less extensively than non-exposed cells, suggesting that there is no such co-regulation of P-gp and CYP3A4 in Caco-2 cells.
However, there is evidence which suggests CYP3A metabolises midazolam into 1- and 4-hydroxymidazolam; the latter may possibly be a P-gp substrate and be transported extracellularly by P-gp, supporting the hypothesis of synergistic P-gp-CYP3A4 roles in keeping xenobiotics out of the body. Doxorubicin-exposed (Type I) cells are less effective in translocating L-proline and glycyl-L-proline across the cell monolayers.

Relevance:

10.00%

Publisher:

Abstract:

We investigate two numerical procedures for the Cauchy problem in linear elasticity, involving the relaxation of either the given boundary displacements (Dirichlet data) or the prescribed boundary tractions (Neumann data) on the over-specified boundary, in the alternating iterative algorithm of Kozlov et al. (1991). The two mixed direct (well-posed) problems associated with each iteration are solved using the method of fundamental solutions (MFS), in conjunction with the Tikhonov regularization method, while the optimal value of the regularization parameter is chosen via the generalized cross-validation (GCV) criterion. An efficient regularizing stopping criterion, which ceases the iterative procedure at the point where the accumulation of noise becomes dominant and the errors in predicting the exact solutions increase, is also presented. The MFS-based iterative algorithms with relaxation are tested for Cauchy problems for isotropic linear elastic materials in various geometries to confirm the numerical convergence, stability, accuracy and computational efficiency of the proposed method.
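The combination of Tikhonov regularization with a GCV-chosen parameter can be sketched on a generic ill-conditioned linear system. Everything below is synthetic (not an elasticity or MFS system): the singular-value decay, the noise level and the grid of candidate λ values are all assumptions.

```python
import numpy as np

# Tikhonov regularization with the parameter chosen by generalized
# cross-validation (GCV) on a synthetic ill-conditioned system A x = b.
rng = np.random.default_rng(1)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.7 ** np.arange(n)                  # rapidly decaying singular values
A = U * s @ U.T                          # symmetric ill-conditioned matrix
x_true = U @ s                           # smooth solution (Picard condition)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

def gcv(lmbda):
    # GCV(l) = n * ||(I - H) b||^2 / trace(I - H)^2,
    # where H = A (A^T A + l I)^{-1} A^T is the influence matrix.
    H = A @ np.linalg.solve(A.T @ A + lmbda * np.eye(n), A.T)
    r = b - H @ b
    return n * (r @ r) / np.trace(np.eye(n) - H) ** 2

lambdas = 10.0 ** np.arange(-10, 1)      # candidate regularization parameters
lam = min(lambdas, key=gcv)
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

GCV needs no estimate of the noise level, which is why it is a common companion to Tikhonov regularization; the unregularized solve on the same system amplifies the noise by the (huge) condition number.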

Relevance:

10.00%

Publisher:

Abstract:

We propose two algorithms involving the relaxation of either the given Dirichlet data or the prescribed Neumann data on the over-specified boundary, in the case of the alternating iterative algorithm of Kozlov et al. (1991) applied to Cauchy problems for the modified Helmholtz equation. A convergence proof of these relaxation methods is given, along with a stopping criterion. The numerical results obtained using these procedures, in conjunction with the boundary element method (BEM), show the numerical stability, convergence, consistency and computational efficiency of the proposed methods.

Relevance:

10.00%

Publisher:

Abstract:

We propose two algorithms involving the relaxation of either the given Dirichlet data (boundary displacements) or the prescribed Neumann data (boundary tractions) on the over-specified boundary in the case of the alternating iterative algorithm of Kozlov et al. [16] applied to Cauchy problems in linear elasticity. A convergence proof of these relaxation methods is given, along with a stopping criterion. The numerical results obtained using these procedures, in conjunction with the boundary element method (BEM), show the numerical stability, convergence, consistency and computational efficiency of the proposed method.

Relevance:

10.00%

Publisher:

Abstract:

The inverse problem of determining a spacewise dependent heat source, together with the initial temperature for the parabolic heat equation, using the usual conditions of the direct problem and information from two supplementary temperature measurements at different instants of time is studied. These spacewise dependent temperature measurements ensure that this inverse problem has a unique solution; however, the solution is unstable and hence the problem is ill-posed. We propose an iterative algorithm for the stable reconstruction of both the initial data and the source based on a sequence of well-posed direct problems for the parabolic heat equation, which are solved at each iteration step using the boundary element method. The instability is overcome by stopping the iterations at the first iteration for which the discrepancy principle is satisfied. Numerical results are presented for a typical benchmark test example, which has the input measured data perturbed by increasing amounts of random noise. The numerical results show that the proposed procedure gives accurate numerical approximations in relatively few iterations.
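The discrepancy-principle stopping rule used above can be sketched with a Landweber-type iteration on a synthetic smoothing operator, standing in for the sequence of BEM-discretised direct problems: iterate until the residual first drops below a safety factor times the noise level, then stop. The kernel, sizes, noise level and safety factor are all assumptions for illustration.

```python
import numpy as np

# Discrepancy-principle stopping for an ill-posed linear system A x = b:
# run Landweber iterations and stop at the first iterate whose residual
# norm falls below tau * delta, where delta is the (known) noise level.
rng = np.random.default_rng(2)
n = 60
i, j = np.meshgrid(np.arange(n), np.arange(n))
A = np.exp(-0.1 * (i - j) ** 2) / n      # smoothing (ill-posed) kernel
x_true = np.sin(2 * np.pi * np.arange(n) / n)
noise = 1e-3 * rng.standard_normal(n)
b = A @ x_true + noise
delta = np.linalg.norm(noise)            # noise level, assumed known
tau = 1.1                                # safety factor tau > 1

omega = 1.0 / np.linalg.norm(A, 2) ** 2  # step size ensuring convergence
x = np.zeros(n)
for k in range(100000):
    r = b - A @ x
    if np.linalg.norm(r) <= tau * delta:  # discrepancy principle: stop here
        break
    x = x + omega * (A.T @ r)
```

Iterating past this point would start fitting the noise rather than the signal, which is exactly the instability the stopping rule suppresses.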

Relevance:

10.00%

Publisher:

Abstract:

In this paper, three iterative procedures (Landweber-Fridman, conjugate gradient and minimal error methods) for obtaining a stable solution to the Cauchy problem in slow viscous flows are presented and compared. A section is devoted to the numerical investigations of these algorithms. There, we use the boundary element method together with efficient stopping criteria for ceasing the iteration process in order to obtain stable solutions.

Relevance:

10.00%

Publisher:

Abstract:

The inverse problem of determining a spacewise-dependent heat source for the parabolic heat equation using the usual conditions of the direct problem and information from one supplementary temperature measurement at a given instant of time is studied. This spacewise-dependent temperature measurement ensures that this inverse problem has a unique solution, but the solution is unstable and hence the problem is ill-posed. We propose a variational conjugate gradient-type iterative algorithm for the stable reconstruction of the heat source based on a sequence of well-posed direct problems for the parabolic heat equation which are solved at each iteration step using the boundary element method. The instability is overcome by stopping the iterative procedure at the first iteration for which the discrepancy principle is satisfied. Numerical results are presented which have the input measured data perturbed by increasing amounts of random noise. The numerical results show that the proposed procedure yields stable and accurate numerical approximations after only a few iterations.