937 results for Viability kernel
Abstract:
This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects a small subset of the most significant kernels related to the largest eigenvalues of the kernel design matrix, which accounts for most of the energy of the kernel training data, and this also guarantees the most accurate kernel weight estimate. The proposed method is also computationally attractive in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
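The weight-fitting step can be sketched as follows. This is a minimal illustration of a multiplicative nonnegative update for mixture weights, assuming a Gaussian kernel, a least-squares fit criterion, and simplified choices of the quadratic terms B and v — not the paper's exact formulation:

```python
import numpy as np

def gaussian_design(x, centres, h):
    # Kernel design matrix: Gaussian kernels at the selected centres,
    # evaluated on the training points (bandwidth h is an assumption).
    d2 = (x[:, None] - centres[None, :]) ** 2
    return np.exp(-d2 / (2 * h * h)) / (np.sqrt(2 * np.pi) * h)

def mnqp_weights(Phi, iters=500):
    # Multiplicative nonnegative QP sketch: fit weights w >= 0 with
    # sum(w) = 1 so the result is a valid mixture density.
    n, m = Phi.shape
    B = Phi.T @ Phi / n        # illustrative quadratic term
    v = Phi.mean(axis=0)       # illustrative linear term
    w = np.full(m, 1.0 / m)
    for _ in range(iters):
        w *= v / (B @ w + 1e-12)   # multiplicative update preserves w >= 0
        w /= w.sum()               # renormalise to a density mixture
    return w
```

The multiplicative form never produces negative weights, which is what makes this family of updates attractive for density estimation, where the mixture weights must remain nonnegative.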
Abstract:
Much recent interest has focused on the potential of flavonoids to interact with intracellular signaling pathways such as the mitogen-activated protein kinase cascade. We have investigated whether the observed strong neurotoxic potential of quercetin in primary cortical neurons may occur via specific and sensitive interactions within neuronal mitogen-activated protein kinase and Akt/protein kinase B (PKB) signaling cascades, both implicated in neuronal apoptosis. Quercetin induced potent inhibition of both Akt/PKB and ERK phosphorylation, resulting in reduced phosphorylation of BAD and a strong activation of caspase-3. High quercetin concentrations (30 microM) led to sustained loss of Akt phosphorylation and subsequent Akt cleavage by caspase-3, whereas at lower concentrations (<10 microM) the inhibition of Akt phosphorylation was transient and eventually returned to basal levels. Lower levels of quercetin also induced strong activation of the pro-survival transcription factor cAMP-responsive element-binding protein, although this did not prevent neuronal damage. O-Methylated quercetin metabolites inhibited Akt/PKB to a lesser extent and did not induce such strong activation of caspase-3, which was reflected in the lower amount of damage they inflicted on neurons. In contrast, neither quercetin nor its O-methylated metabolites had any measurable effect on c-Jun N-terminal kinase phosphorylation. The glucuronide of quercetin was not toxic and did not evoke any alterations in neuronal signaling, probably reflecting its inability to enter neurons. Together these data suggest that quercetin, and to a lesser extent its O-methylated metabolites, may induce neuronal death via a mechanism involving an inhibition of neuronal survival signaling through the inhibition of both Akt/PKB and ERK, rather than by activation of the c-Jun N-terminal kinase-mediated death pathway.
Abstract:
Because probiotic cells need to be alive when they are consumed, culture-based analysis (plate count) is critical in ascertaining the quality (numbers of viable cells) of probiotic products. Since probiotic cells are typically stressed, due to various factors related to their production, processing and formulation, the standard methodology for total plate counts tends to underestimate the cell numbers of these products. Furthermore, products such as microencapsulated cultures require modifications in the release and sampling procedure in order to correctly estimate viable counts. This review examines the enumeration of probiotic bacteria in the following commercial products: powders, microencapsulated cultures, frozen concentrates, capsules, foods and beverages. The parameters specifically examined include: sample preparation (rehydration, thawing), dilution (homogenization, media) and plating (media, incubation) procedures. Recommendations are provided for each of these analytical steps to improve the accuracy of the analysis. Although the recommendations specifically target the analysis of probiotics, many will also apply to the analysis of commercial lactic starter cultures used in food fermentations.
Abstract:
Purpose – The purpose of this paper is to investigate the effect of choices of model structure and scale in development viability appraisal. The paper addresses two questions concerning the application of development appraisal techniques to viability modelling within the UK planning system. The first relates to the extent to which, given intrinsic input uncertainty, the choice of model structure significantly affects model outputs. The second concerns the extent to which, given intrinsic input uncertainty, the level of model complexity significantly affects model outputs. Design/methodology/approach – Monte Carlo simulation procedures are applied to a hypothetical development scheme in order to measure the effects of model aggregation and structure on model output variance. Findings – It is concluded that, given the particular scheme modelled and unavoidably subjective assumptions of input variance, simple and simplistic models may produce similar outputs to more robust and disaggregated models. Evidence is found of equifinality in the outputs of a simple, aggregated model of development viability relative to more complex, disaggregated models. Originality/value – Development viability appraisal has become increasingly important in the planning system. Consequently, the theory, application and outputs from development appraisal are under intense scrutiny from a wide range of users. However, there has been very little published evaluation of viability models. This paper contributes to the limited literature in this area.
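The Monte Carlo procedure described can be sketched as follows. The aggregated residual model, the input distributions, and the 15% profit rate are illustrative assumptions for a hypothetical scheme, not the paper's calibration:

```python
import numpy as np

def residual_land_value(gdv, build_cost, profit_rate=0.15):
    # Simple aggregated residual model:
    # land value = gross development value - build cost - developer profit
    return gdv - build_cost - profit_rate * gdv

def simulate_viability(n=10_000, seed=0):
    # Propagate input uncertainty through the model and report the mean
    # and spread (standard deviation) of the residual land value.
    rng = np.random.default_rng(seed)
    gdv = rng.normal(10_000_000, 800_000, n)   # uncertain development value
    cost = rng.normal(6_000_000, 500_000, n)   # uncertain build cost
    rlv = residual_land_value(gdv, cost)
    return rlv.mean(), rlv.std()
```

Comparing the output variance of this aggregated model against a disaggregated variant (for example, period-by-period cash flows) under the same input uncertainty is essentially the equifinality test the paper performs.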
Abstract:
Aims: Therapeutic limbal epithelial stem cells could be managed more efficiently if clinically validated batches were transported for ‘on-demand’ use. Materials & methods: In this study, corneal epithelial cell viability in calcium alginate hydrogels was examined under cell culture, ambient and chilled conditions for up to 7 days. Results: Cell viability improved as gel internal pore size increased, and was further enhanced with modification of the gel from a mass to a thin disc. Ambient storage conditions were optimal for supporting cell viability in gel discs. Cell viability in gel discs was significantly enhanced with increases in pore size mediated by hydroxyethyl cellulose. Conclusion: Our novel methodology of controlling alginate gel shape and pore size together provides a more practical and economical alternative to established corneal tissue/cell storage methods.
Abstract:
The aim of this paper is to critically examine the application of development appraisal to viability assessment in the planning system. This evaluation covers development appraisal models in general and also their use in particular applications associated with estimating planning obligation capacity. The paper is organised into four themes:
· The context and conceptual basis for development viability appraisal
· A review of development viability appraisal methods
· A discussion of selected key inputs into a development viability appraisal
· A discussion of the applications of development viability appraisals in the planning system
It is assumed that readers are familiar with the basic models and information needs of development viability appraisal rather than at the cutting edge of practice and/or academe.
Abstract:
The presence of resident Langerhans cells (LCs) in the epidermis makes the skin an attractive target for DNA vaccination. However, reliable animal models for cutaneous vaccination studies are limited. We demonstrate an ex vivo human skin model for cutaneous DNA vaccination which can potentially bridge the gap between pre-clinical in vivo animal models and clinical studies. Cutaneous transgene expression was utilised to demonstrate epidermal tissue viability in culture. LC response to the culture environment was monitored by immunohistochemistry. Full-thickness and split-thickness skin remained genetically viable in culture for at least 72 h in both phosphate-buffered saline (PBS) and full organ culture medium (OCM). The epidermis of explants cultured in OCM remained morphologically intact throughout the culture duration. LCs in full-thickness skin exhibited a delayed response (reduction in cell number and increase in cell size) to the culture conditions compared with split-thickness skin, whose response was immediate. In conclusion, excised human skin can be cultured for a minimum of 72 h for analysis of gene expression and immune cell activation. However, the use of split-thickness skin for vaccine formulation studies may not be appropriate because of the nature of the activation. Full-thickness skin explants are a more suitable model to assess cutaneous vaccination ex vivo.
Abstract:
Corneal tissue engineering has improved dramatically over recent years. It is now possible to apply these technological advancements to the development of superior in vitro ocular surface models to reduce animal testing. We aim to show the effect different substrates can have on the viability of expanded corneal epithelial cells, and that those which more accurately mimic the stromal surface provide the most protection against toxic assault. Compressed collagen gel as a substrate for the expansion of a human epithelial cell line was compared against two well-known substrates for modeling the ocular surface (polycarbonate membrane and conventional collagen gel). Cells were expanded over 10 days, at which point cell stratification, cell number and expression of junctional proteins were assessed by electron microscopy, immunohistochemistry and RT-PCR. The effect of increasing concentrations of sodium lauryl sulphate on epithelial cell viability was quantified by MTT assay. Results showed improvement in terms of stratification, cell number and tight junction expression in human epithelial cells expanded upon either the polycarbonate membrane or compressed collagen gel when compared to the use of a conventional collagen gel. However, cell viability was significantly higher in cells expanded upon the compressed collagen gel. We conclude that the more naturalistic composition and mechanical properties of compressed collagen gels produce a more robust corneal model.
Abstract:
We present a new subcortical structure shape modeling framework using heat kernel smoothing constructed with the Laplace-Beltrami eigenfunctions. The cotan discretization is used to numerically obtain the eigenfunctions of the Laplace-Beltrami operator along the surface of subcortical structures of the brain. The eigenfunctions are then used to construct the heat kernel, which smooths out measurement noise along the surface. The proposed framework is applied in investigating the influence of age (38-79 years) and gender on amygdala and hippocampus shape. We detected a significant age effect on the hippocampus, in accordance with previous studies. In addition, we detected a significant gender effect on the amygdala. Since we did not find any such differences with traditional volumetric methods, our results demonstrate the benefit of the current framework over traditional volumetric methods.
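Heat kernel smoothing has a compact spectral form: given Laplacian eigenpairs (λ_i, φ_i), the smoothed signal is f_t = Σ_i e^(−λ_i t) ⟨f, φ_i⟩ φ_i. The sketch below substitutes a simple path-graph Laplacian for the cotan-discretised Laplace-Beltrami operator on a surface mesh, so the mesh-specific details are assumptions:

```python
import numpy as np

def heat_kernel_smooth(f, L, t=1.0, k=None):
    # Heat kernel smoothing via the Laplacian eigenbasis:
    # f_t = sum_i exp(-lambda_i * t) <f, phi_i> phi_i
    lam, phi = np.linalg.eigh(L)          # ascending eigenvalues
    if k is not None:
        lam, phi = lam[:k], phi[:, :k]    # optional truncation to k modes
    coeff = phi.T @ f                     # spectral coefficients of f
    return phi @ (np.exp(-lam * t) * coeff)

def path_laplacian(n):
    # Laplacian of a path graph with n nodes; a stand-in for the
    # cotan mesh Laplacian used in the paper.
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1
    return L
```

Because every nonzero-frequency coefficient is shrunk by e^(−λt) while the constant (λ = 0) component is preserved, smoothing reduces variance without shifting the mean, which is the property exploited when denoising surface measurements.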
Abstract:
This paper investigates the application and use of development viability models in the formation of planning policies in the UK. Particular attention is paid to three key areas: the assumed development scheme in development viability models, the use of forecasts and the debate concerning Threshold Land Value. The empirical section reports on the results of an interview survey involving the main producers of development viability models and appraisals. It is concluded that, although development viability models have intrinsic limitations associated with model composition and input uncertainties, the most significant limitations are related to the ways that they have been adapted for use in the planning system. In addition, it is suggested that the contested nature of Threshold Land Value is an example of calculative practices providing a façade of technocratic rationality in the planning system.
Abstract:
Area-wide development viability appraisals are undertaken to determine the economic feasibility of policy targets in relation to planning obligations. Essentially, development viability appraisals consist of a series of residual valuations of hypothetical development sites across a local authority area at a particular point in time. The valuations incorporate the estimated financial implications of the proposed level of planning obligations. To determine viability, the output land values are benchmarked against threshold land value; the basis on which this threshold is established, and the level at which it is set, are therefore critical to development viability appraisal at the policy-setting (area-wide) level. Essentially, threshold land value is an estimate of the value at which a landowner would be prepared to sell. If the estimated site values are higher than the threshold land value, the policy target is considered viable. This paper investigates the effectiveness of existing methods of determining threshold land value, testing them against the relationship between development value and costs. Modelling reveals that a threshold land value that is not related to shifts in development value renders marginal sites unviable and fails to collect proportionate planning obligations from high value/low cost sites. Testing the model against national average house prices and build costs reveals the high degree of volatility in residual land values over time and underlines the importance of making threshold land value relative to the main driver of this volatility, namely development value.
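The benchmarking step reduces to a simple comparison. In the sketch below, the site figures, the 15% profit rate, and the rule linking the threshold to development value (8% of GDV) are illustrative assumptions, not values from the paper:

```python
def residual_land_value(gdv, cost, profit_rate=0.15):
    # Residual valuation: what remains for land after build cost and profit
    return gdv - cost - profit_rate * gdv

def is_viable(gdv, cost, obligations, threshold):
    # A policy target is viable on a site if the residual land value,
    # net of planning obligations, still clears the threshold land value.
    return residual_land_value(gdv, cost) - obligations >= threshold

# A fixed threshold passes the high-value site but fails the marginal one;
# a threshold indexed to development value treats both proportionately.
high_fixed = is_viable(12_000_000, 6_000_000, obligations=1_000_000,
                       threshold=1_500_000)                 # viable
marginal_fixed = is_viable(8_000_000, 5_800_000, obligations=200_000,
                           threshold=1_500_000)             # not viable
marginal_linked = is_viable(8_000_000, 5_800_000, obligations=200_000,
                            threshold=0.08 * 8_000_000)     # viable
```

This is the mechanism behind the paper's finding: a threshold that does not move with development value misclassifies marginal sites while under-collecting from high value/low cost ones.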
Abstract:
Purpose: To quantify to what extent the new registration method, DARTEL (Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra), may reduce the required smoothing kernel width, and to investigate the minimum group size necessary for voxel-based morphometry (VBM) studies. Materials and Methods: A simulated atrophy approach was employed to explore the role of smoothing kernel, group size, and their interactions on VBM detection accuracy. Group sizes of 10, 15, 25, and 50 were compared for kernels between 0 and 12 mm. Results: A smoothing kernel of 6 mm achieved the highest atrophy detection accuracy for groups with 50 participants, and 8–10 mm for groups of 25, at P < 0.05 with familywise correction. The results further demonstrated that a group size of 25 was the lower limit when two different groups of participants were compared, whereas a group size of 15 was the minimum for longitudinal comparisons, but at P < 0.05 with false discovery rate correction. Conclusion: Our data confirm that DARTEL-based VBM generally benefits from smaller kernels and that different kernels perform best for different group sizes, with a tendency toward smaller kernels for larger groups. Importantly, kernel selection was also affected by the threshold applied. This highlights that the choice of kernel in relation to group size should be considered with care.
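Kernel widths in VBM are quoted as the full width at half maximum (FWHM) of a Gaussian in millimetres. Converting to the sigma actually applied during smoothing uses the standard identity FWHM = 2·sqrt(2·ln 2)·sigma; the 1.5 mm voxel size below is an assumption for illustration:

```python
import numpy as np

# FWHM = 2 * sqrt(2 * ln 2) * sigma for a Gaussian, so:
FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # ~0.4247

def fwhm_to_sigma_voxels(fwhm_mm, voxel_mm=1.5):
    # Convert a quoted kernel width (e.g. the 6-10 mm kernels above)
    # to the Gaussian sigma in voxel units. The voxel size is an
    # assumption, not a value from the study.
    return fwhm_mm * FWHM_TO_SIGMA / voxel_mm
```

So a 6 mm kernel on 1.5 mm isotropic voxels corresponds to a sigma of roughly 1.7 voxels, which makes the practical gap between the 6 mm and 8-10 mm kernels compared above concrete.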