906 results for Sensitivity analysis, Rabbit SAN cell, Mathematical model
Abstract:
Our media is saturated with claims of "facts" made from data. Database research has in the past focused on how to answer queries, but has not devoted much attention to discerning more subtle qualities of the resulting claims, e.g., is a claim "cherry-picking"? This paper proposes a Query Response Surface (QRS) based framework that models claims based on structured data as parameterized queries. A key insight is that we can learn a lot about a claim by perturbing its parameters and seeing how its conclusion changes. This framework lets us formulate and tackle practical fact-checking tasks --- reverse-engineering vague claims, and countering questionable claims --- as computational problems. Within the QRS-based framework, we take one step further and propose a problem, along with efficient algorithms, for finding high-quality claims of a given form from data, i.e., raising good questions in the first place. This is achieved by using a limited number of high-valued claims to represent high-valued regions of the QRS. Besides the general-purpose high-quality claim-finding problem, lead-finding can be tailored towards specific claim quality measures, also defined within the QRS framework. An example of uniqueness-based lead-finding is presented for "one-of-the-few" claims, resulting in interpretable high-quality claims and an adjustable mechanism for ranking objects, e.g., NBA players, based on what claims can be made for them. Finally, we study the use of visualization as a powerful way of conveying the results of a large number of claims. An efficient two-stage sampling algorithm is proposed for generating the input for a 2D scatter plot with heatmap, evaluating only a limited amount of data while preserving the two essential visual features, namely outliers and clusters. For all the problems, we present real-world examples and experiments that demonstrate the power of our model, the efficiency of our algorithms, and the usefulness of their results.
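The parameter-perturbation idea at the heart of QRS can be illustrated with a minimal sketch (hypothetical data and claim form, not the paper's actual algorithms): a claim such as "the player averaged over 30 points in games 2-4" is a parameterized window query, and its robustness can be scored as the fraction of nearby parameter settings that still support the conclusion.

```python
# Sketch of the QRS idea: a claim is a parameterized query; perturbing its
# parameters and re-evaluating reveals whether the conclusion is robust or
# cherry-picked. Data and claim form are invented for illustration.

def window_avg(series, start, end):
    """The parameterized query q(start, end): average of series[start:end+1]."""
    vals = series[start:end + 1]
    return sum(vals) / len(vals)

def robustness(series, start, end, threshold, radius=2):
    """Fraction of nearby (start, end) windows whose average still clears
    the claimed threshold; values near 1 indicate a robust claim."""
    total, supporting = 0, 0
    for s in range(max(0, start - radius), start + radius + 1):
        for e in range(end - radius, min(len(series) - 1, end + radius) + 1):
            if s <= e:
                total += 1
                if window_avg(series, s, e) >= threshold:
                    supporting += 1
    return supporting / total

scores = [10, 12, 35, 38, 36, 11, 9, 13, 10, 12]
# "Averaged over 30 points in games 2-4" -- literally true, but fragile:
print(window_avg(scores, 2, 4))       # about 36.3
print(robustness(scores, 2, 4, 30))   # well below 1.0 -> cherry-picked
```

Low robustness here flags the claim: only a minority of slightly perturbed windows sustain the 30-point conclusion.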
Abstract:
It is well known that during alloy solidification, convection currents close to the solidification front influence the structure of dendrites, the local solute concentration, the pattern of solid segregation, and eventually the microstructure of the casting and hence its mechanical properties. Controlled stirring of the melt in continuous casting or in ingot solidification is thought to have a beneficial effect. Free convection currents occur naturally due to temperature differences in the melt and, for any given configuration, their strength is a function of the degree of superheat present. A more controlled forced convection current can be induced using electromagnetic stirring. The authors have applied their control-volume-based MHD method [1, 2] to the problem of tin solidification in an annular crucible with a water-cooled inner wall and a resistance-heated outer wall, for both free and forced convection situations and for various degrees of superheat. This problem was studied experimentally by Vives and Perry [3], who obtained temperature measurements, front positions and maps of electromagnetic body force for a range of superheat values. The results of the mathematical model are compared critically against the experimental ones, in order to validate the model and also to demonstrate the usefulness of the coupled solution technique as a predictive tool and a design aid. Figs 6, refs 19.
Abstract:
In this paper, a coupled mechanical-acoustic system of equations is solved to determine the relationship between emitted sound and damage mechanisms in paper under controlled stress conditions. The simple classical expression relating the frequency of a plucked string to its material properties is used to generate a numerical representation of the microscopic structure of the paper, and the resulting numerical model is then used to simulate the vibration of a range of simple fibre structures undergoing two distinct types of damage mechanism: (a) fibre/fibre bond failure, and (b) fibre failure. The numerical results are analysed to determine whether there is any detectable systematic difference between the resulting acoustic emissions of the two damage processes. Fourier techniques are then used to compare the computed results against experimental measurements. Distinct frequency components identifying each type of damage are shown to exist, and in this respect theory and experiment show good correspondence. Hence, it is shown that, although the mathematical model represents a grossly simplified view of the complex structure of paper, it nevertheless provides a good understanding of the underlying micro-mechanisms characterising its properties as a stress-resisting structure. Use of the model and accompanying software will enable operators to identify approaching failure conditions in the continuous production of paper from emitted sound signals and take preventative action.
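The plucked-string relation referred to above is, for an ideal taut string, f_n = (n / 2L) * sqrt(T / mu). A minimal sketch (fibre parameters are invented for illustration) shows why the two damage mechanisms should produce distinguishable frequency signatures: fibre failure shortens the vibrating span, while bond failure relaxes the tension on a segment.

```python
import math

def string_frequency(length_m, tension_n, lin_density_kg_m, mode=1):
    """Frequency of the n-th mode of an ideal taut string:
    f_n = (n / 2L) * sqrt(T / mu)."""
    return mode / (2.0 * length_m) * math.sqrt(tension_n / lin_density_kg_m)

# Hypothetical fibre-segment parameters (illustrative only):
L, T, mu = 2e-3, 0.05, 1e-7    # 2 mm span, 0.05 N tension, 1e-7 kg/m

f0 = string_frequency(L, T, mu)
# Fibre failure halves the vibrating span -> the frequency doubles:
f_broken = string_frequency(L / 2, T, mu)
# Bond failure relaxes the tension on the segment -> the frequency drops:
f_debonded = string_frequency(L, T / 4, mu)
print(f0, f_broken / f0, f_debonded / f0)   # ratios 2.0 and 0.5
```

The opposite directions of the two frequency shifts are the kind of systematic difference the coupled model looks for in the emitted sound.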
Abstract:
The recognition that urban groundwater is a potentially valuable resource for potable and industrial uses, given growing pressures on rural groundwater perceived as less polluted, has led to a requirement to assess the groundwater contamination risk in urban areas from industrial contaminants such as chlorinated solvents. The development of a probabilistic risk-based management tool that predicts groundwater quality at potential new urban boreholes is beneficial in determining the best sites for future resource development. The Borehole Optimisation System (BOS) is a custom Geographic Information System (GIS) application that has been developed with the objective of identifying the optimum locations for new abstraction boreholes. BOS can be applied to any aquifer subject to variable contamination risk. The system is described in more detail by Tait et al. [Tait, N.G., Davison, J.J., Whittaker, J.J., Lehame, S.A., Lerner, D.N., 2004a. Borehole Optimisation System (BOS) - a GIS-based risk analysis tool for optimising the use of urban groundwater. Environmental Modelling and Software 19, 1111-1124]. This paper applies the BOS model to an urban Permo-Triassic Sandstone aquifer in the city centre of Nottingham, UK. The risk of pollution in potential new boreholes from the industrial chlorinated solvent tetrachloroethene (PCE) was assessed for this region. The risk model was validated against contaminant concentrations from 6 actual field boreholes within the study area. In these studies the model generally underestimated contaminant concentrations. A sensitivity analysis showed that the most responsive model parameters were recharge, effective porosity and contaminant degradation rate. Multiple simulations were undertaken across the study area in order to create surface maps indicating areas of low PCE concentrations, thus indicating the best locations to place new boreholes.
Results indicate that northeastern, eastern and central regions have the lowest potential PCE concentrations in abstraction groundwater and therefore are the best sites for locating new boreholes. These locations coincide with aquifer areas that are confined by low permeability Mercia Mudstone deposits. Conversely southern and northwestern areas are unconfined and have shallower depth to groundwater. These areas have the highest potential PCE concentrations. These studies demonstrate the applicability of BOS as a tool for informing decision makers on the development of urban groundwater resources. (c) 2007 Elsevier Ltd. All rights reserved.
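A one-at-a-time sensitivity analysis of the kind reported above can be sketched as follows; the toy concentration model and parameter values are illustrative assumptions, not the BOS equations.

```python
import math

def oat_sensitivity(model, base_params, delta=0.1):
    """One-at-a-time sensitivity: perturb each parameter by +/-delta
    (fractional) and report the normalised response of the model output."""
    base = model(**base_params)
    sens = {}
    for name, value in base_params.items():
        hi = dict(base_params, **{name: value * (1 + delta)})
        lo = dict(base_params, **{name: value * (1 - delta)})
        sens[name] = (model(**hi) - model(**lo)) / (2 * delta * base)
    return sens

# Toy contaminant-concentration model (illustrative, not the BOS equations):
def concentration(recharge, porosity, decay_rate):
    travel_time = porosity / recharge          # crude advection proxy
    return math.exp(-decay_rate * travel_time) / recharge

s = oat_sensitivity(concentration,
                    {"recharge": 0.3, "porosity": 0.25, "decay_rate": 0.5})
ranked = sorted(s, key=lambda k: abs(s[k]), reverse=True)
print(ranked)   # parameters ordered by influence on predicted concentration
```

Ranking the normalised responses is how such an analysis identifies recharge, porosity and degradation rate as the most responsive parameters.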
Abstract:
Antarctic krill is a cold water species, an increasingly important fishery resource and a major prey item for many fish, birds and mammals in the Southern Ocean. The fishery and the summer foraging sites of many of these predators are concentrated between 0° and 90° W. Parts of this quadrant have experienced recent localised sea surface warming of up to 0.2 °C per decade, and projections suggest that further widespread warming of 0.27 to 1.08 °C will occur by the late 21st century. We assessed the potential influence of this projected warming on Antarctic krill habitat with a statistical model that links growth to temperature and chlorophyll concentration. The results divide the quadrant into two zones: a band around the Antarctic Circumpolar Current in which habitat quality is particularly vulnerable to warming, and a southern area which is relatively insensitive. Our analysis suggests that the direct effects of warming could reduce the area of growth habitat by up to 20%. The reduction in growth habitat within the range of predators, such as Antarctic fur seals, that forage from breeding sites on South Georgia could be up to 55%, and the habitat's ability to support Antarctic krill biomass production within this range could be reduced by up to 68%. Sensitivity analysis suggests that the effects of a 50% change in summer chlorophyll concentration could be more significant than the direct effects of warming. A reduction in primary production could lead to further habitat degradation but, even if chlorophyll increased by 50%, projected warming would still cause some degradation of the habitat accessible to predators. While there is considerable uncertainty in these projections, they suggest that future climate change could have a significant negative effect on Antarctic krill growth habitat and, consequently, on Southern Ocean biodiversity and ecosystem services.
Abstract:
We applied coincident Earth observation data collected during 2008 and 2009 from multiple sensors (RA2, AATSR and MERIS, mounted on the European Space Agency satellite Envisat) to characterise environmental conditions and integrated sea-air fluxes of CO2 in three Arctic seas (Greenland, Barents, Kara). We assessed net CO2 sink sensitivity due to changes in temperature, salinity and sea ice duration arising from future climate scenarios. During the study period the Greenland and Barents seas were net sinks for atmospheric CO2, with integrated sea-air fluxes of -36 +/- 14 and -11 +/- 5 Tg C yr(-1), respectively, and the Kara Sea was a weak net CO2 source with an integrated sea-air flux of +2.2 +/- 1.4 Tg C yr(-1). The combined integrated CO2 sea-air flux from all three was -45 +/- 18 Tg C yr(-1). In a sensitivity analysis we varied temperature, salinity and sea ice duration. Variations in temperature and salinity led to modification of the transfer velocity, solubility and partial pressure of CO2, taking into account the resultant variations in alkalinity and dissolved organic carbon (DOC). Our results showed that warming had a strong positive effect on the annual integrated sea-air flux of CO2 (i.e. reducing the sink), freshening had a strong negative effect and reduced sea ice duration had a small but measurable positive effect. In the climate change scenario examined, the effects of warming in just over a decade of climate change up to 2020 outweighed the combined effects of freshening and reduced sea ice duration. Collectively these effects gave an integrated sea-air flux change of +4.0 Tg C in the Greenland Sea, +6.0 Tg C in the Barents Sea and +1.7 Tg C in the Kara Sea, reducing the Greenland and Barents sinks by 11% and 53%, respectively, and increasing the weak Kara Sea source by 81%. Overall, the regional integrated flux changed by +11.7 Tg C, which is a 26% reduction in the regional sink.
In terms of CO2 sink strength, we conclude that the Barents Sea is the most susceptible of the three regions to the climate changes examined. Our results imply that the region will cease to be a net CO2 sink in the 2050s.
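The sensitivity experiments can be illustrated with the standard bulk flux formula F = k * s * (pCO2_sea - pCO2_air). The code below is a hedged sketch with invented k, s and pCO2 values, using the widely cited ~4.23% per °C isochemical temperature dependence of surface-water pCO2 (the Takahashi relationship) to show how warming weakens a sink; it is not the paper's full flux calculation.

```python
import math

def sea_air_flux(k_transfer, solubility, pco2_sea, pco2_air):
    """Bulk formulation F = k * s * (pCO2_sea - pCO2_air);
    negative F means the ocean is a net sink."""
    return k_transfer * solubility * (pco2_sea - pco2_air)

def warmed_pco2(pco2_sea, delta_t):
    """Isochemical temperature effect on surface-water pCO2,
    roughly 4.23% per degree C (Takahashi relationship)."""
    return pco2_sea * math.exp(0.0423 * delta_t)

# Illustrative values (units and magnitudes are placeholders):
k, s = 10.0, 0.03
pco2_air, pco2_sea = 400.0, 370.0

f_now = sea_air_flux(k, s, pco2_sea, pco2_air)             # negative: a sink
f_warm = sea_air_flux(k, s, warmed_pco2(pco2_sea, 1.0), pco2_air)
print(f_now, f_warm)   # warming makes the flux less negative: a weaker sink
```

This is the mechanism behind the positive (sink-reducing) effect of warming reported above; freshening and sea-ice changes enter through the solubility and transfer-velocity terms.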
Abstract:
The absorption spectra of phytoplankton in the visible domain hold implicit information on the phytoplankton community structure. Here we use this information to retrieve quantitative information on phytoplankton size structure by developing a novel method to compute the exponent of an assumed power-law for their particle-size spectrum. This quantity, in combination with the total chlorophyll-a concentration, can be used to estimate the fractional concentration of chlorophyll in any arbitrarily-defined size class of phytoplankton. We further define and derive expressions for two distinct measures of cell size of mixed populations, namely, the average spherical diameter of a bio-optically equivalent homogeneous population of cells of equal size, and the average equivalent spherical diameter of a population of cells that follow a power-law particle-size distribution. The method relies on measurements of two quantities of a phytoplankton sample: the concentration of chlorophyll-a, which is an operational index of phytoplankton biomass, and the total absorption coefficient of phytoplankton at the red peak of the visible spectrum at 676 nm. A sensitivity analysis confirms that the relative errors in the estimates of the exponent of the particle-size spectrum are reasonably low. The exponents of phytoplankton size spectra, estimated for a large set of in situ data from a variety of oceanic environments (~2400 samples), are within a reasonable range; and the estimated fractions of chlorophyll in pico-, nano- and micro-phytoplankton are generally consistent with those obtained by an independent, indirect method based on diagnostic pigments determined using high-performance liquid chromatography. The estimates of cell size for in situ samples dominated by different phytoplankton types (diatoms, prymnesiophytes, Prochlorococcus, other cyanobacteria and green algae) yield nominal sizes consistent with the taxonomic classification.
To estimate the same quantities from satellite-derived ocean-colour data, we combine our method with algorithms for obtaining inherent optical properties from remote sensing. The spatial distribution of the size-spectrum exponent and the chlorophyll fractions of pico-, nano- and micro-phytoplankton estimated from satellite remote sensing are in agreement with the current understanding of the biogeography of phytoplankton functional types in the global oceans. This study contributes to our understanding of the distribution and time evolution of phytoplankton size structure in the global oceans.
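How a size-spectrum exponent translates into chlorophyll fractions per size class can be sketched as follows, assuming a power-law number spectrum N(D) ~ D**(-xi) and chlorophyll per cell proportional to cell volume (D**3); the paper's actual derivation may differ in detail.

```python
import math

def chl_fraction(d_lo, d_hi, xi, d_min=0.2, d_max=200.0):
    """Fraction of total chlorophyll held by cells with diameters in
    [d_lo, d_hi] um, assuming N(D) ~ D**(-xi) and chlorophyll per cell
    ~ D**3, so the integrand is D**(3 - xi) over [d_min, d_max]."""
    def integral(a, b):
        p = 4.0 - xi                # power after integrating D**(3 - xi)
        if abs(p) < 1e-12:          # xi == 4: the integral is logarithmic
            return math.log(b / a)
        return (b**p - a**p) / p
    return integral(d_lo, d_hi) / integral(d_min, d_max)

# Pico (<2 um), nano (2-20 um) and micro (20-200 um) fractions for xi = 4:
xi = 4.0
fracs = [chl_fraction(0.2, 2.0, xi), chl_fraction(2.0, 20.0, xi),
         chl_fraction(20.0, 200.0, xi)]
print(fracs, sum(fracs))    # the three fractions sum to 1
```

Steeper spectra (larger xi) shift the chlorophyll fractions towards the pico end, which is the qualitative behaviour the satellite maps of the exponent exploit.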
Abstract:
The prostanoid biosynthetic enzyme cyclooxygenase-2 (Cox-2) is upregulated in several neuroendocrine tumors. The aim of the current study was to employ a neuroendocrine cell (PC12) model of Cox-2 over-expression to identify gene products that might be implicated in the oncogenic and/or inflammatory actions of this enzyme in the setting of neuroendocrine neoplasia. Expression array and real-time PCR analysis demonstrated that levels of the neuroendocrine marker chromogranin A (CGA) were 2-fold and 3.2-fold higher, respectively, in Cox-2 over-expressing cells (PCXII) vs their control (PCMT) counterparts. Immunocytochemical and immunoblotting analyses confirmed that both intracellular and secreted levels of CGA were elevated in response to Cox-2 induction. Moreover, exogenous addition of prostaglandin E2 (1 µM) mimicked this effect in PCMT cells, while treatment of PCXII cells with the Cox-2 selective inhibitor NS-398 (100 nM) reduced CGA expression levels, thereby confirming the biospecificity of this finding. Levels of neurone-specific enolase (NSE) were similar in the two cell lines, suggesting that the effect of Cox-2 on CGA expression was specific and not due to a global enhancement of neuroendocrine marker expression/differentiation. Cox-2-dependent CGA upregulation was associated with significantly increased chromaffin granule number and intracellular and secreted levels of dopamine. CGA promoter-driven reporter gene expression studies provided evidence that prostaglandin E2-dependent upregulation required a proximal cAMP-responsive element (CRE; −71 to −64 bp). This study is the first to demonstrate that Cox-2 upregulates both CGA expression and bioactivity in a neuroendocrine cell line and has major implications for the role of this polypeptide in the pathogenesis of neuroendocrine cancers in which Cox-2 is upregulated.
Abstract:
The results of a study aimed at determining the most important experimental parameters for automated, quantitative analysis of solid dosage form pharmaceuticals (seized and model 'ecstasy' tablets) are reported. Data obtained with a macro-Raman spectrometer were complemented by micro-Raman measurements, which gave information on particle size and provided excellent data for developing statistical models of the sampling errors associated with collecting data as a series of grid points on the tablets' surface. Spectra recorded at single points on the surface of seized MDMA-caffeine-lactose tablets with a Raman microscope (λex = 785 nm, 3 µm diameter spot) were typically dominated by one or other of the three components, consistent with Raman mapping data which showed the drug and caffeine microcrystals were ca 40 µm in diameter. Spectra collected with a microscope from eight points on a 200 µm grid were combined, and in the resultant spectra the average value of the Raman band intensity ratio used to quantify the MDMA:caffeine ratio, μr, was 1.19 with an unacceptably high standard deviation, σr, of 1.20. In contrast, with a conventional macro-Raman system (150 µm spot diameter), combined eight-grid-point data gave μr = 1.47 with σr = 0.16. A simple statistical model which could be used to predict σr under the various conditions used was developed. The model showed that the decrease in σr on moving to a 150 µm spot was too large to be due entirely to the increased spot diameter, but was consistent with the increased sampling volume that arose from a combination of the larger spot size and depth of focus in the macroscopic system. With the macro-Raman system, combining 64 grid points (0.5 mm spacing and 1-2 s accumulation per point) to give a single averaged spectrum for a tablet was found to be a practical balance between minimizing sampling errors and keeping overhead times at an acceptable level.
The effectiveness of this sampling strategy was also tested by quantitative analysis of a set of model ecstasy tablets prepared from MDEA-sorbitol (0-30% by mass MDEA). A simple univariate calibration model of averaged 64-point data had R² = 0.998 and an r.m.s. standard error of prediction of 1.1%, whereas data obtained by sampling just four points on the same tablet showed deviations from the calibration of up to 5%.
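The 1/sqrt(n) behaviour underlying such a sampling-error model can be sketched with a small Monte Carlo simulation, treating grid points as independent draws with the single-point μr and σr quoted above. This illustrates only the averaging effect; the paper's model additionally accounts for sampling volume.

```python
import random
import statistics

def sampled_ratio_sd(n_points, sigma_single, mu=1.47, n_trials=20000, seed=1):
    """Standard deviation of the band-intensity ratio when spectra from
    n_points independent grid positions are averaged; expected to fall
    roughly as sigma_single / sqrt(n_points)."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(mu, sigma_single)
                              for _ in range(n_points))
             for _ in range(n_trials)]
    return statistics.stdev(means)

sd8 = sampled_ratio_sd(8, 1.20)     # ~1.20 / sqrt(8)  = ~0.42
sd64 = sampled_ratio_sd(64, 1.20)   # ~1.20 / sqrt(64) = ~0.15
print(sd8, sd64)
```

The jump from 8 to 64 grid points cuts the predicted sampling scatter by a further factor of about 2.8, consistent with the practical choice of 64 points reported above.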
Abstract:
The standard linear-quadratic (LQ) survival model for external beam radiotherapy is reviewed with particular emphasis on studying how different schedules of radiation treatment planning may be affected by different tumour repopulation kinetics. The LQ model is further examined in the context of tumour control probability (TCP) models. The application of the Zaider and Minerbo non-Poissonian TCP model incorporating the effect of cellular repopulation is reviewed. In particular the recent development of a cell cycle model within the original Zaider and Minerbo TCP formalism is highlighted. Application of this TCP cell-cycle model in clinical treatment plans is explored and analysed.
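As a hedged illustration using standard textbook forms (not the Zaider-Minerbo derivation itself), the LQ surviving fraction with exponential repopulation after a kick-off time, combined with a simple Poisson TCP, can be sketched as:

```python
import math

def lq_survival(n_fractions, dose_per_fraction, alpha, beta,
                t_total=None, t_kick=28.0, t_double=5.0):
    """LQ surviving fraction with exponential repopulation after t_kick days:
    S = exp(-n * (alpha*d + beta*d**2)) * exp(ln2 * max(0, T - Tk) / Td)."""
    s = math.exp(-n_fractions * (alpha * dose_per_fraction
                                 + beta * dose_per_fraction**2))
    if t_total is not None:
        s *= math.exp(math.log(2.0) * max(0.0, t_total - t_kick) / t_double)
    return s

def poisson_tcp(n_clonogens, survival):
    """Simple Poisson TCP = exp(-N * S); the Zaider-Minerbo model refines
    this for non-Poissonian birth-death kinetics with repopulation."""
    return math.exp(-n_clonogens * survival)

# Illustrative values: 30 x 2 Gy, alpha = 0.3 /Gy, alpha/beta = 10 Gy,
# 39-day overall treatment time, 1e7 clonogens (all assumed, not clinical data):
s = lq_survival(30, 2.0, 0.3, 0.03, t_total=39.0)
print(poisson_tcp(1e7, s))
```

Lengthening the overall treatment time inflates the repopulation factor and pushes the TCP down, which is why the abstract emphasises schedule-dependent repopulation kinetics.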
Abstract:
The single-cell gel electrophoresis technique or comet assay is widely regarded as a quick and reliable method of analysing DNA damage in individual cells. It has a proven track record from the fields of biomonitoring to nutritional studies. The assay operates by subjecting cells that are fixed in agarose to high salt and detergent lysis, thus removing all the cellular content except the DNA. By relaxing the DNA in an alkaline buffer, strands containing breaks are released from supercoiling. Upon electrophoresis, these strands are pulled out into the agarose, forming a tail which, when stained with a fluorescent dye, can be analysed by fluorescence microscopy. The intensity of this tail reflects the amount of DNA damage sustained. Despite being such an established and widely used assay, there are still many aspects of the comet assay which are not fully understood. The present review looks at how the comet assay is being used, and highlights some of its limitations. The protocol itself varies among laboratories, so results from similar studies may vary. Given such discrepancies, it would be attractive to break the assay into components to generate a mathematical model to investigate specific parameters.
Abstract:
The three-dimensional (3D) weaving process offers the ability to tailor mechanical properties via the design of the weave architecture. One repeat of the 3D woven fabric is represented by the unit cell. A geometric model of this unit cell accepts basic weaver and material manufacturer data as inputs in order to calculate the geometric characteristics of the 3D woven unit cell. The specific weave architecture manufactured, and subsequently modelled, had an angle-interlock binding configuration. The modelled result was shown to be a close approximation to the experimentally measured values and highlighted the importance of the representation of the binder tow path.
Abstract:
In this paper, we analyzed a mathematical model of algal-grazer dynamics, including the effect of colony formation, which is an example of phenotypic plasticity. The model consists of three variables, which correspond to the biomasses of unicellular algae, colonial algae, and herbivorous zooplankton. Among these organisms, colonial algae are the main components of algal blooms. This aquatic system has two stable attractors, which can be identified as a zooplankton-dominated (ZD) state and an algal-dominated (AD) state, respectively. Assuming that the handling time of zooplankton on colonial algae increases with the colonial algae biomass, we discovered that bistability can occur within the model system. The applicability of alternative stable states in algal-grazer dynamics as a framework for explaining algal blooms in real lake ecosystems thus seems to depend on whether the assumption mentioned above is met in natural circumstances.
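The key assumption, a handling time that grows with colonial biomass, can be sketched with a type-II functional response; the parameter values below are invented for illustration and are not the paper's.

```python
def grazing_rate(c, attack=1.0, h0=0.5, k=0.2):
    """Type-II functional response on colonial algae of biomass c, with a
    density-dependent handling time h(c) = h0 * (1 + k*c) -- the assumption
    identified above as the source of bistability."""
    h = h0 * (1.0 + k * c)
    return attack * c / (1.0 + attack * h * c)

low, mid, high = grazing_rate(0.5), grazing_rate(3.0), grazing_rate(50.0)
print(low, mid, high)
# Grazing pressure per unit zooplankton peaks and then declines as colonies
# accumulate, so a dense colonial bloom can escape grazer control: the
# positive feedback that makes an algal-dominated (AD) state stable
# alongside the zooplankton-dominated (ZD) state.
```

With a constant handling time the response would saturate monotonically and this escape route, and hence the bistability, would disappear.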
Abstract:
To understand the molecular etiology of osteosarcoma, we isolated and characterized a human osteosarcoma cell line (OS1). OS1 cells have high osteogenic potential in differentiation induction media. Molecular analysis reveals that OS1 cells express the pocket protein pRB and the runt-related transcription factor Runx2. Strikingly, Runx2 is expressed at higher levels in OS1 cells than in human fetal osteoblasts. Both pRB and Runx2 have growth-suppressive potential in osteoblasts and are key factors controlling competency for osteoblast differentiation. The high levels of Runx2 clearly suggest that osteosarcomas may form from committed osteoblasts that have bypassed growth restrictions normally imposed by Runx2. Interestingly, OS1 cells do not exhibit p53 expression and thus lack a functional p53/p21 DNA damage response pathway, as has been observed for other osteosarcoma cell types. Absence of this pathway predicts genomic instability and/or vulnerability to secondary mutations that may counteract the anti-proliferative activity of Runx2 that is normally observed in osteoblasts. We conclude that OS1 cells provide a valuable cell culture model to examine the molecular events responsible for the pathologic conversion of phenotypically normal osteoblast precursors into osteosarcoma cells.
Abstract:
Tissue microarray (TMA) is a high-throughput analysis tool for identifying new diagnostic and prognostic markers in human cancers. However, standard automated methods for tumour detection on both routine histochemical and immunohistochemistry (IHC) images are underdeveloped. This paper presents a robust automated tumour cell segmentation model which can be applied to both routine histochemical tissue slides and IHC slides, and which delivers finer pixel-based segmentation in comparison with the blob- or area-based segmentation of existing approaches. The presented technique greatly improves the process of TMA construction and plays an important role in automated IHC quantification in biomarker analysis, where excluding stroma areas is critical. With the finest pixel-based evaluation (instead of area-based or object-based), the experimental results show that the proposed method achieves 80% accuracy and 78% accuracy on two different types of pathological virtual slides, i.e., routine histochemical H&E and IHC images, respectively. The presented technique greatly reduces labor-intensive workloads for pathologists, speeds up the process of TMA construction, and opens the possibility of fully automated IHC quantification.
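Pixel-based evaluation of a segmentation, as opposed to blob- or area-based scoring, can be sketched as a simple per-pixel label comparison (the masks below are toy data, not the paper's slides):

```python
def pixel_accuracy(pred, truth):
    """Pixel-based evaluation: fraction of pixels whose predicted label
    (tumour / non-tumour) matches the ground truth -- finer-grained than
    scoring whole blobs or areas as hits or misses."""
    assert len(pred) == len(truth)
    assert all(len(p) == len(t) for p, t in zip(pred, truth))
    correct = sum(p == t
                  for row_p, row_t in zip(pred, truth)
                  for p, t in zip(row_p, row_t))
    total = sum(len(row) for row in pred)
    return correct / total

# Toy 3x4 binary masks (1 = tumour pixel):
pred  = [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1]]
truth = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 1]]
print(pixel_accuracy(pred, truth))   # 10 of 12 pixels agree
```

A blob-level score would count the same prediction as a full hit or miss per region; the per-pixel ratio is the stricter measure behind the 80% and 78% figures quoted above.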