836 results for factor analytic model
Abstract:
The mechanism of muscle protein catabolism induced by proteolysis-inducing factor, produced by cachexia-inducing murine and human tumours, has been studied in vitro using C2C12 myoblasts and myotubes. In both myoblasts and myotubes, protein degradation was enhanced by proteolysis-inducing factor after 24 h of incubation. In myoblasts this followed a bell-shaped dose-response curve with maximal effects at a proteolysis-inducing factor concentration between 2 and 4 nM, while in myotubes increased protein degradation was seen at all concentrations of proteolysis-inducing factor up to 10 nM, again with a maximum at 4 nM. Protein degradation induced by proteolysis-inducing factor was completely attenuated in the presence of cycloheximide (1 μM), suggesting a requirement for new protein synthesis. In both myoblasts and myotubes, protein degradation was accompanied by increased expression of the α-type subunits of the 20S proteasome, as well as increased functional activity of the proteasome, as determined by the 'chymotrypsin-like' enzyme activity. There was also increased expression of the 19S regulatory complex and of the ubiquitin-conjugating enzyme E2(14k), and in myotubes a decrease in myosin expression was seen with increasing concentrations of proteolysis-inducing factor. These results show that proteolysis-inducing factor co-ordinately upregulates both ubiquitin conjugation and proteasome activity in both myoblasts and myotubes, and may play an important role in the muscle wasting seen in cancer cachexia. © 2002 Cancer Research UK.
Abstract:
I model the forward premium in the U.K. gilt-edged market over the period 1982–96 using a two-factor general equilibrium model of the term structure of interest rates. The model permits the decomposition of the forward premium into separate components representing interest rate expectations, the risk premia associated with each of the underlying factors, and terms capturing the direct impact of the variances of the factors on the shape of the forward curve.
Abstract:
Background: Activated factor XIII (FXIIIa), a transglutaminase, introduces fibrin-fibrin and fibrin-inhibitor cross-links, resulting in more mechanically stable clots. The impact of cross-linking on resistance to fibrinolysis has proved challenging to evaluate quantitatively. Methods: We used a whole blood model thrombus system to characterize the role of cross-linking in resistance to fibrinolytic degradation. Model thrombi, which mimic arterial thrombi formed in vivo, were prepared with incorporated fluorescently labeled fibrinogen, in order to allow quantification of fibrinolysis as released fluorescence units per minute. Results: A site-specific inhibitor of transglutaminases, added to blood from normal donors, yielded model thrombi that lysed more easily, either spontaneously or by plasminogen activators. This was observed both in the cell/platelet-rich head and fibrin-rich tail. Model thrombi from an FXIII-deficient patient lysed more quickly than normal thrombi; replacement therapy with FXIII concentrate normalized lysis. In vitro addition of purified FXIII to the patient's preprophylaxis blood, but not to normal control blood, resulted in more stable thrombi, indicating no further efficacy of supraphysiologic FXIII. However, addition of tissue transglutaminase, which is synthesized by endothelial cells, generated thrombi that were more resistant to fibrinolysis; this may stabilize mural thrombi in vivo. Conclusions: Model thrombi formed under flow, even those prepared as plasma 'thrombi', reveal the effect of FXIII on fibrinolysis. Although very low levels of FXIII are known to produce mechanical clot stability, and to achieve γ-dimerization, they appear to be suboptimal in conferring full resistance to fibrinolysis.
Abstract:
Activation of the hypoxia-inducible factor (HIF) pathway is a critical step in the transcriptional response to hypoxia. Although many of the key proteins involved have been characterised, the dynamics of their interactions in generating this response remain unclear. In the present study, we have generated a comprehensive mathematical model of the HIF-1α pathway based on core validated components and dynamic experimental data, and confirm the previously described connections within the predicted network topology. Our model confirms previous work demonstrating that the steps leading to optimal HIF-1α transcriptional activity require sequential inhibition of both prolyl- and asparaginyl-hydroxylases. We predict from our model (and confirm experimentally) that there is residual activity of the asparaginyl-hydroxylase FIH (factor inhibiting HIF) at low oxygen tension. Furthermore, silencing FIH under conditions where prolyl-hydroxylases are inhibited results in increased HIF-1α transcriptional activity, but paradoxically decreases HIF-1α stability. Using a core module of the HIF network and mathematical proof supported by experimental data, we propose that asparaginyl hydroxylation confers a degree of resistance upon HIF-1α to proteasomal degradation. Thus, through in vitro experimental data and in silico predictions, we provide a comprehensive model of the dynamic regulation of HIF-1α transcriptional activity by hydroxylases, and use its predictive and adaptive properties to explain counter-intuitive biological observations.
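The qualitative behaviour described in this abstract can be illustrated with a deliberately simple toy sketch (this is not the authors' published model; the rate constants, functional forms, and parameter values below are all invented assumptions): HIF-1α is synthesized at a constant rate, degraded at a rate proportional to oxygen-dependent prolyl-hydroxylase activity, and its transcriptional activity is attenuated by FIH, which retains residual activity at low oxygen.

```python
# Toy sketch of oxygen-dependent HIF-1alpha regulation (illustrative only;
# parameters and functional forms are assumptions, not the paper's model).

def simulate_hif(o2, t_end=200.0, dt=0.01):
    """Forward-Euler integration of dH/dt = s - k_phd * o2 * H.

    Transcriptional activity is H scaled by (1 - FIH hydroxylation),
    where FIH keeps residual activity even at low oxygen tension.
    """
    s = 1.0             # constant synthesis rate (assumed)
    k_phd = 0.5         # prolyl-hydroxylase degradation constant (assumed)
    fih_residual = 0.2  # residual FIH activity at zero oxygen (assumed)
    h = 0.0
    t = 0.0
    while t < t_end:
        h += dt * (s - k_phd * o2 * h)  # synthesis minus O2-dependent decay
        t += dt
    fih_activity = fih_residual + (1.0 - fih_residual) * o2  # rises with O2
    activity = h * (1.0 - fih_activity)  # asparaginyl hydroxylation blocks activity
    return h, activity

h_hypoxia, act_hypoxia = simulate_hif(o2=0.05)
h_normoxia, act_normoxia = simulate_hif(o2=1.0)
```

Even in this crude sketch, hypoxia yields both higher HIF-1α levels and higher transcriptional activity, while the residual FIH term keeps activity below the level the stabilized protein alone would suggest.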
Abstract:
In order to bridge the “Semantic gap”, a number of relevance feedback (RF) mechanisms have been applied to content-based image retrieval (CBIR). However, current RF techniques in most existing CBIR systems still lack satisfactory user interaction, although some work has been done to improve both the interaction and the search accuracy. In this paper, we propose a four-factor user interaction model and investigate its effects on CBIR through an empirical evaluation. Whilst the model was developed for our research purposes, we believe it could be adapted to any content-based search system.
Abstract:
The existing method of pipeline health monitoring, which requires an entire pipeline to be inspected periodically, is both time-consuming and expensive. A risk-based model that reduces the amount of time spent on inspection has been presented. This model not only reduces the cost of maintaining petroleum pipelines, but also suggests an efficient design and operation philosophy, construction methodology, and logical insurance plans. The risk-based model uses the analytic hierarchy process (AHP), a multiple-attribute decision-making technique, to identify the factors that influence failure on specific segments and to analyze their effects by determining the probabilities of the risk factors. The severity of failure is determined through consequence analysis. From this, the effect of a failure caused by each risk factor can be established in terms of cost, and the cumulative effect of failure is determined through probability analysis. The technique does not totally eliminate subjectivity, but it is an improvement over the existing inspection method.
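The AHP weighting step this abstract relies on can be sketched generically as follows (a standard geometric-mean priority-vector calculation, not the paper's actual risk matrices; the pairwise judgments and factor names below are invented examples):

```python
# Generic AHP priority-vector sketch using the geometric-mean method;
# the pairwise comparison values below are invented, not from the paper.
import math

def ahp_weights(matrix):
    """Return normalized priority weights for a pairwise comparison matrix."""
    geo_means = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical judgments: corrosion vs third-party damage vs operational error.
pairwise = [
    [1.0, 2.0, 4.0],
    [0.5, 1.0, 2.0],
    [0.25, 0.5, 1.0],
]
weights = ahp_weights(pairwise)  # ~[0.571, 0.286, 0.143] for this consistent matrix
```

In a full AHP study, each weight would then be combined with the consequence-analysis cost of its risk factor to rank pipeline segments for inspection.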
Abstract:
In this letter, we propose an analytical approach to model uplink intercell interference (ICI) in hexagonal-grid-based orthogonal frequency division multiple access (OFDMA) cellular networks. The key idea is that the uplink ICI from individual cells is approximated with a lognormal distribution whose statistical parameters are determined analytically. Accordingly, the aggregated uplink ICI is approximated with another lognormal distribution, and its statistical parameters can be determined from those of the individual cells using the Fenton-Wilkinson method. Analytic expressions for the uplink ICI are derived for two traditional frequency reuse schemes, namely integer frequency reuse with factor 1 (IFR-1) and factor 3 (IFR-3). Uplink fractional power control and lognormal shadowing are modeled. System performance in terms of signal-to-interference-plus-noise ratio (SINR) and spectrum efficiency is also derived. The proposed model has been validated by simulations. © 2013 IEEE.
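The moment-matching step behind this approach can be sketched as follows (a generic Fenton-Wilkinson approximation for a sum of independent lognormal variables; the example parameters are assumptions, not values from the letter):

```python
# Fenton-Wilkinson sketch: approximate a sum of independent lognormals by a
# single lognormal that matches the sum's first two moments.
import math

def fenton_wilkinson(params):
    """Given (mu, sigma) pairs of the underlying normals (natural log scale),
    return the (mu, sigma) of the lognormal approximating their sum."""
    mean = sum(math.exp(mu + 0.5 * s * s) for mu, s in params)
    var = sum((math.exp(s * s) - 1.0) * math.exp(2.0 * mu + s * s)
              for mu, s in params)
    sigma2 = math.log(1.0 + var / (mean * mean))
    mu_sum = math.log(mean) - 0.5 * sigma2
    return mu_sum, math.sqrt(sigma2)

# Hypothetical per-cell ICI terms (mu, sigma in nats); the aggregate ICI is
# then approximated by a single lognormal, as in the letter's key idea.
mu_agg, sigma_agg = fenton_wilkinson([(0.0, 1.0), (0.3, 0.8), (-0.2, 1.1)])
```

A sanity check on the method: applied to a single lognormal, it returns that lognormal's own parameters unchanged.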
Abstract:
Recent changes to the legislation on chemicals and cosmetics testing call for a change in the paradigm regarding the current 'whole animal' approach for identifying chemical hazards, including the assessment of potential neurotoxins. Accordingly, since 2004, we have worked on the development of the integrated co-culture of post-mitotic, human-derived neurons and astrocytes (NT2.N/A), for use as an in vitro functional central nervous system (CNS) model. We have used it successfully to investigate indicators of neurotoxicity. For this purpose, we used NT2.N/A cells to examine the effects of acute exposure to a range of test chemicals on the cellular release of brain-derived neurotrophic factor (BDNF). It was demonstrated that the release of this protective neurotrophin into the culture medium (above that of control levels) occurred consistently in response to sub-cytotoxic levels of known neurotoxic, but not non-neurotoxic, chemicals. These increases in BDNF release were quantifiable, statistically significant, and occurred at concentrations below those at which cell death was measurable, which potentially indicates specific neurotoxicity, as opposed to general cytotoxicity. The fact that the BDNF immunoassay is non-invasive, and that NT2.N/A cells retain their functionality for a period of months, may make this system useful for repeated-dose toxicity testing, which is of particular relevance to cosmetics testing without the use of laboratory animals. In addition, the production of NT2.N/A cells without the use of animal products, such as fetal bovine serum, is being explored, to produce a fully-humanised cellular model.
Abstract:
Using survey data from 358 online customers, the study finds that the e-service quality construct conforms to the structure of a third-order factor model that links online service quality perceptions to distinct and actionable dimensions, including (1) website design, (2) fulfilment, (3) customer service, and (4) security/privacy. Each dimension is found to consist of several attributes that define the basis of e-service quality perceptions. A comprehensive specification of the construct, which includes attributes not covered in existing scales, is developed. The study contrasts a formative model consisting of 4 dimensions and 16 attributes against a reflective conceptualization. The results of this comparison indicate that studies using an incorrectly specified model overestimate the importance of certain e-service quality attributes. Global fit criteria are also found to support the detection of measurement misspecification. Meta-analytic data from 31,264 online customers are used to show that the developed measurement predicts customer behavior better than widely used scales, such as WebQual and E-S-Qual. The results show that the new measurement enables managers to assess e-service quality more accurately and predict customer behavior more reliably.
Abstract:
Duarte et al. draw attention to the "embedding of liberal values and methods" in social psychological research. They note how these biases are often invisible to the researchers themselves. The authors themselves fall prey to these "invisible biases" by utilizing the five-factor model of personality and the trait of openness to experience as one possible explanation for the under-representation of political conservatives in social psychology. I show that the manner in which the trait of openness to experience is conceptualized and measured is a particularly blatant example of the very liberal bias the authors decry.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Under law number 12.715/2012, the Brazilian government instituted guidelines for a program named Inovar-Auto. In this context, energy efficiency became a survival requirement for the Brazilian automotive industry from September 2016 onward. Under the law, energy efficiency is not calculated per model; it is calculated over the whole universe of newly registered vehicles. In this scenario, the composition of vehicles sold in the market will be a key factor in each automaker's profits, and energy efficiency and its consequences should be taken into consideration in all of their aspects. The following question then emerges: what long-term efficiency curve allows an automaker to comply with the rules and balance investment in technologies, increasing energy efficiency without harming the competitiveness of its product lineup? Among the several variables to be considered, one can highlight manufacturing costs, customer value perception, and market share, which characterizes this problem as one of multi-criteria decision-making. To tackle the energy efficiency problem imposed by the legislation, this paper proposes a multi-criteria decision-making framework. The proposed framework combines a Delphi group and the Analytic Hierarchy Process to identify suitable alternatives for automakers to incorporate in the main Brazilian vehicle segments. A forecast model based on artificial neural networks was used to estimate vehicle sales demand and validate the expected results. This approach is demonstrated with a real case study using public vehicle sales data and public energy efficiency data from Brazilian automakers.
Abstract:
Matrix factorization (MF) has evolved as one of the better practices for handling sparse data in the field of recommender systems. Funk singular value decomposition (SVD) is a variant of MF that remains a state-of-the-art method, having contributed to winning the Netflix Prize competition, and it is still widely used, with modifications, in present-day recommender systems research. With data points potentially growing at very high velocity, it is prudent to devise newer methods that can handle such data more accurately and efficiently than Funk-SVD in the context of recommender systems. In view of the growing data points, I propose a latent factor model that caters to both accuracy and efficiency by reducing the number of latent features of either users or items, making it less complex than Funk-SVD, where the latent features of users and items are equal in number and often larger. A comprehensive empirical evaluation of accuracy on two publicly available datasets, Amazon and ML-100k, reveals accuracy comparable to Funk-SVD at lower complexity for the proposed methods.
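A minimal sketch of the Funk-SVD baseline this abstract builds on (plain SGD matrix factorization on a toy ratings table; the hyperparameters and data below are invented for illustration, and the paper's asymmetric-feature variant is not reproduced here):

```python
# Funk-SVD sketch: learn user and item latent factors by SGD so that the
# dot product p_u . q_i approximates each observed rating. Toy data only.
import random

def funk_svd(ratings, n_users, n_items, k=2, lr=0.01, reg=0.02, epochs=500):
    """Train latent user/item factors on (user, item, rating) triples."""
    rng = random.Random(0)
    p = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(p[u][f] * q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):  # regularized gradient step on both factors
                pu, qi = p[u][f], q[i][f]
                p[u][f] += lr * (err * qi - reg * pu)
                q[i][f] += lr * (err * pu - reg * qi)
    return p, q

# Invented toy ratings: (user, item, rating).
data = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
        (1, 2, 1.0), (2, 1, 4.0), (2, 2, 5.0)]
P, Q = funk_svd(data, n_users=3, n_items=3)
rmse = (sum((r - sum(P[u][f] * Q[i][f] for f in range(2))) ** 2
            for u, i, r in data) / len(data)) ** 0.5
```

The abstract's proposal differs from this baseline by giving users and items different numbers of latent features, trading a little model symmetry for lower complexity; the sketch above shows only the symmetric Funk-SVD starting point.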