926 results for Dirichlet Regression compositional model.


Relevância:

100.00%

Publicador:

Resumo:

A tunable radial basis function (RBF) network model is proposed for nonlinear system identification using particle swarm optimisation (PSO). At each stage of orthogonal forward regression (OFR) model construction, PSO optimises one RBF unit's centre vector and diagonal covariance matrix by minimising the leave-one-out (LOO) mean square error (MSE). This PSO-aided OFR automatically determines how many tunable RBF nodes are sufficient for modelling. Compared with the state-of-the-art local-regularisation-assisted orthogonal least squares algorithm based on the LOO MSE criterion for constructing fixed-node RBF network models, the PSO-tuned RBF model construction produces more parsimonious RBF models with better generalisation performance and is computationally more efficient.
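
The LOO criterion at the heart of this construction has a closed form for any linear-in-the-weights model, which is what makes it cheap to evaluate inside a PSO loop. A minimal numpy sketch, with all data, centres and widths invented for illustration:

```python
import numpy as np

def rbf_design(X, centres, widths):
    # Gaussian RBF design matrix: phi[i, j] = exp(-||x_i - c_j||^2 / (2 w_j^2))
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def loo_mse(X, y, centres, widths, ridge=1e-8):
    # Closed-form leave-one-out MSE for a linear-in-the-weights model:
    # e_loo_i = e_i / (1 - h_ii), where H is the hat matrix of the fit.
    P = rbf_design(X, centres, widths)
    A = P.T @ P + ridge * np.eye(P.shape[1])
    H = P @ np.linalg.solve(A, P.T)
    e = y - H @ y
    return np.mean((e / (1.0 - np.diag(H))) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(60, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(60)
centres = np.linspace(-1.0, 1.0, 8)[:, None]
good = loo_mse(X, y, centres, widths=np.full(8, 0.4))   # well-tuned widths
bad = loo_mse(X, y, centres, widths=np.full(8, 5.0))    # far too wide
```

A PSO (or any optimiser) would simply search over the centre and width parameters to minimise this `loo_mse` value; here the well-tuned widths give a much lower criterion than the degenerate ones.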

Resumo:

The purpose of this article is to present a new method to predict the response variable of an observation in a new cluster for a multilevel logistic regression. The central idea is based on the empirical best estimator for the random effect. Two estimation methods for the multilevel model are compared: penalized quasi-likelihood and Gauss-Hermite quadrature. The performance of the prediction of the probability for a new-cluster observation under the multilevel logistic model, in comparison with the usual logistic model, is examined through simulations and an application.
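
For a new cluster the random effect is unknown, so one natural prediction integrates the logistic response over the random-intercept distribution; with Gauss-Hermite quadrature (one of the two estimation routes compared above) this is a short weighted sum. A sketch with illustrative values, not the article's data:

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def marginal_prob(eta, sigma, n_nodes=30):
    # P(y = 1 | x) = integral of logistic(eta + u) over u ~ N(0, sigma^2),
    # approximated by Gauss-Hermite quadrature with nodes t and weights w:
    # (1/sqrt(pi)) * sum_k w_k * logistic(eta + sqrt(2)*sigma*t_k)
    t, w = np.polynomial.hermite.hermgauss(n_nodes)
    return (w * logistic(eta + np.sqrt(2.0) * sigma * t)).sum() / np.sqrt(np.pi)

p_sym = marginal_prob(0.0, 1.0)   # symmetric case: exactly 0.5
p_cond = logistic(0.5)            # conditional prediction (random effect set to 0)
p_marg = marginal_prob(0.5, 2.0)  # marginal prediction, attenuated toward 0.5
```

The attenuation of `p_marg` relative to `p_cond` is exactly the gap between the usual logistic model and the cluster-marginal prediction that the article examines.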

Resumo:

Mature weight breeding values were estimated using a multi-trait animal model (MM) and a random regression animal model (RRM). Data consisted of 82 064 weight records from 8 145 animals, recorded from birth to eight years of age. Weights at standard ages were considered in the MM. All models included contemporary groups as fixed effects, and age of dam (linear and quadratic effects) and animal age as covariates. In the RRM, mean trends were modelled through a cubic regression on orthogonal polynomials of animal age, and direct and maternal genetic effects and animal and maternal permanent environmental effects were also included as random. Legendre polynomials of orders 4, 3, 6 and 3 were used for the direct genetic, maternal genetic, animal permanent environmental and maternal permanent environmental effects, respectively, considering five classes of residual variances. Mature weight (five years) direct heritability estimates were 0.35 (MM) and 0.38 (RRM). The rank correlation between sires' breeding values estimated by MM and RRM was 0.82. However, selecting the top 2% (12) or 10% (62) of the young sires based on the MM-predicted breeding values, respectively 71% and 80% of the same sires would be selected if RRM estimates were used instead. The RRM adequately modelled the changes in the (co)variances with age, and larger breeding value accuracies can be expected using this model. © South African Society for Animal Science.
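
The random-regression covariables here are just Legendre polynomials of standardised age, which numpy can build directly. A sketch in which the ages and the coefficient (co)variance matrix K are illustrative, not the paper's estimates:

```python
import numpy as np

def legendre_covariates(age, order, a_min, a_max):
    # Standardise age to [-1, 1], then evaluate Legendre polynomials
    # P_0 .. P_order (the random-regression covariables).
    x = 2.0 * (age - a_min) / (a_max - a_min) - 1.0
    return np.polynomial.legendre.legvander(x, order)

ages = np.array([0.0, 4.0, 8.0])               # birth to eight years
phi = legendre_covariates(ages, 3, 0.0, 8.0)   # cubic, as for the mean trend

# Given a (co)variance matrix K of the regression coefficients (hypothetical
# values), the variance at each age is the diagonal of phi K phi'.
K = np.diag([400.0, 90.0, 25.0, 10.0])
var_by_age = np.einsum('ij,jk,ik->i', phi, K, phi)
```

This is how an RRM turns a small coefficient matrix into a full (co)variance function over ages, which is what allows it to track the age trends described above.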

Resumo:

This thesis investigates two aspects of Constraint Handling Rules (CHR): it proposes a compositional semantics and a technique for program transformation. CHR is a concurrent committed-choice constraint logic programming language consisting of guarded rules, which transform multi-sets of atomic formulas (constraints) into simpler ones until exhaustion [Frü06]; it belongs to the family of declarative languages. It was initially designed for writing constraint solvers, but it has recently also proven to be a general-purpose language, as it is Turing equivalent [SSD05a]. Compositionality is the first CHR aspect to be considered. A trace-based compositional semantics for CHR was previously defined in [DGM05]. The reference operational semantics for that compositional model was the original operational semantics for CHR, which, due to the propagation rule, admits trivial non-termination. In this thesis we extend the work of [DGM05] by introducing a more refined trace-based compositional semantics which also includes the history. The use of a history is a well-known technique in CHR which records the application of propagation rules and consequently allows trivial non-termination to be avoided [Abd97, DSGdlBH04]. Naturally, the reference operational semantics of our new compositional one uses a history to avoid trivial non-termination too. Program transformation is the second CHR aspect to be considered, with particular regard to the unfolding technique. This technique is an appealing approach which allows us to optimize a given program, in particular to improve its run-time efficiency or space consumption. Essentially, it consists of a sequence of syntactic program manipulations which preserve a kind of semantic equivalence, called qualified answers [Frü98], between the original program and the transformed ones. Unfolding is one of the basic operations used by most program transformation systems.
It consists in the replacement of a procedure call by its definition. In CHR, every conjunction of constraints can be considered a procedure call, every CHR rule can be considered a procedure, and the body of that rule represents the definition of the call. While there is a large body of literature on the transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. We define an unfolding rule, show its correctness, and discuss some conditions under which it can be used to delete an unfolded rule while preserving the meaning of the original program. Finally, maintenance of confluence and termination between the original and transformed programs is shown. This thesis is organized as follows. Chapter 1 gives some general notions about CHR. Section 1.1 outlines the history of programming languages, with particular attention to CHR and related languages. Section 1.2 then introduces CHR using examples. Section 1.3 gives some preliminaries which will be used throughout the thesis. Subsequently, Section 1.4 introduces the syntax and the operational and declarative semantics of the first CHR language proposed. Finally, the methodologies for solving the problem of trivial non-termination related to propagation rules are discussed in Section 1.5. Chapter 2 introduces a compositional semantics for CHR in which the propagation rules are considered. In particular, Section 2.1 contains the definition of the semantics, Section 2.2 presents the compositionality results, and Section 2.3 expounds the correctness results. Chapter 3 presents a particular program transformation known as unfolding. This transformation needs a particular syntax, called annotated, which is introduced in Section 3.1, and its related modified operational semantics ωt is presented in Section 3.2. Subsequently, Section 3.3 defines the unfolding rule and proves its correctness. Then, in Section 3.4, the problems related to the replacement of a rule by its unfolded version are discussed, and this in turn gives a correctness condition which holds for a specific class of rules. Section 3.5 proves that confluence and termination are preserved by the program modifications introduced. Finally, Chapter 4 concludes by discussing related work and directions for future work.

Resumo:

Generalized linear mixed models with semiparametric random effects are useful in a wide variety of Bayesian applications. When the random effects arise from a mixture of Dirichlet process (MDP) model, normal base measures and Gibbs sampling procedures based on the Pólya urn scheme are often used to simulate posterior draws. These algorithms are applicable in the conjugate case when (for a normal base measure) the likelihood is normal. In the non-conjugate case, the algorithms proposed by MacEachern and Müller (1998) and Neal (2000) are often applied to generate posterior samples. Some common problems associated with simulation algorithms for non-conjugate MDP models include convergence and mixing difficulties. This paper proposes an algorithm based on the Pólya urn scheme that extends the Gibbs sampling algorithms to non-conjugate models with normal base measures and exponential family likelihoods. The algorithm proceeds by making Laplace approximations to the likelihood function, thereby reducing the procedure to that of conjugate normal MDP models. To ensure the validity of the stationary distribution in the non-conjugate case, the proposals are accepted or rejected by a Metropolis-Hastings step. In the special case where the data are normally distributed, the algorithm is identical to the Gibbs sampler.
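
The Pólya urn scheme that these samplers build on can be simulated directly from the prior: each observation joins an existing mixture component with probability proportional to its size, or opens a new one with probability proportional to the DP concentration α. A prior-predictive sketch (α and the sample size are arbitrary choices, not from the paper):

```python
import numpy as np

def polya_urn_draw(n, alpha, rng):
    # Sample a partition of n items from the Dirichlet-process prior:
    # item i joins existing cluster k with prob n_k / (i + alpha),
    # or starts a new cluster with prob alpha / (i + alpha).
    labels = np.zeros(n, dtype=int)
    counts = [1]                       # first item founds cluster 0
    for i in range(1, n):
        p = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(p), p=p / p.sum())
        if k == len(counts):
            counts.append(1)           # new cluster
        else:
            counts[k] += 1
        labels[i] = k
    return labels

rng = np.random.default_rng(1)
lab = polya_urn_draw(500, 2.0, rng)
n_clusters = lab.max() + 1             # grows roughly like alpha * log(n)
```

The Gibbs samplers discussed above use conditional versions of exactly these urn probabilities, weighted by the likelihood of each observation under each component.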

Resumo:

Enabling real end-user programming development is the next logical stage in the evolution of Internet-wide service-based applications. Even so, the vision of end users programming their own web-based solutions has not yet materialized. This will continue to be so unless both industry and the research community rise to the ambitious challenge of devising an end-to-end compositional model for developing a new age of end-user web application development tools. This paper describes a new composition model designed to empower programming-illiterate end users to create and share their own off-the-shelf rich Internet applications in a fully visual fashion. This paper presents the main insights and outcomes of our research and development efforts as part of a number of successful European Union research projects. A framework implementing this model was developed as part of the European Seventh Framework Programme FAST Project and the Spanish EzWeb Project and allowed us to validate the rationale behind our approach.

Resumo:

The Operator Choice Model (OCM) was developed to model the behaviour of operators attending to complex tasks involving interdependent concurrent activities, such as in Air Traffic Control (ATC). The purpose of the OCM is to provide a flexible framework for modelling and simulation that can be used for quantitative analyses in human reliability assessment, comparison between human computer interaction (HCI) designs, and analysis of operator workload. The OCM virtual operator is essentially a cycle of four processes: Scan, Classify, Decide Action, Perform Action. Once a cycle is complete, the operator returns to the Scan process. It is also possible to truncate a cycle and return to Scan after each of the processes. These processes are described using Continuous Time Probabilistic Automata (CTPA). The details of the probability and timing models are specific to the domain of application and need to be specified with the help of domain experts. We are building an application of the OCM for use in ATC. In order to develop a realistic model we are calibrating the probability and timing models that comprise each process using experimental data from a series of experiments conducted with student subjects. These experiments have identified the factors that influence perception and decision making in simplified conflict detection and resolution tasks. This paper presents an application of the OCM approach to a simple ATC conflict detection experiment. The aim is to calibrate the OCM so that its behaviour resembles that of the experimental subjects when it is challenged with the same task. Its behaviour should also interpolate when challenged with scenarios similar to those used to calibrate it. The approach illustrated here uses logistic regression to model the classifications made by the subjects. This model is fitted to the calibration data, and provides an extrapolation to classifications in scenarios outside of the calibration data.
A simple strategy is used to calibrate the timing component of the model, and the results for reaction times are compared between the OCM and the student subjects. While this approach to timing does not capture the full complexity of the reaction time distribution seen in the data from the student subjects, the mean and the tail of the distributions are similar.
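
The classification component described above is an ordinary logistic regression, which can be fitted by Newton-Raphson in a few lines. A sketch on simulated conflict judgements, where the predictor (a miss distance), sample size and coefficients are all invented for illustration:

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    # Newton-Raphson (iteratively reweighted least squares) for a
    # logistic model P(class = 1) = sigmoid(X b).
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        grad = X.T @ (y - p)
        hess = (X * (p * (1.0 - p))[:, None]).T @ X + 1e-9 * np.eye(X.shape[1])
        b = b + np.linalg.solve(hess, grad)
    return b

rng = np.random.default_rng(2)
sep = rng.uniform(0.0, 10.0, 400)        # hypothetical miss distance (nm)
X = np.column_stack([np.ones(400), sep])
true_b = np.array([4.0, -1.0])           # closer pairs are more often called conflicts
y = (rng.uniform(size=400) < 1.0 / (1.0 + np.exp(-(X @ true_b)))).astype(float)
b_hat = fit_logistic(X, y)
```

Once fitted, the model extrapolates a conflict-call probability for scenarios outside the calibration set, which is precisely how the OCM's Classify process is driven.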

Resumo:

To evaluate factors associated with hypertension in Brazilian women 50 years of age or older. A cross-sectional, population-based study using self-reports. A total of 622 women were included. The association between sociodemographic, clinical and behavioral factors and the woman's age at the onset of hypertension was evaluated. Data were analyzed according to cumulative continuation rates without hypertension, using the life-table method with annual intervals. A Cox multiple regression model was then fitted to analyze the occurrence rates of hypertension according to various predictor variables. The significance level was pre-established at 5% (95% confidence level), and the sampling plan (primary sampling unit) was taken into consideration. Median age at onset of hypertension was 64.3 years. The cumulative continuation rate without hypertension at 90 years was 20%. A higher body mass index (BMI) at 20-30 years of age was associated with a higher cumulative occurrence rate of hypertension over time (coefficient = 0.078; p < 0.001). Being white was associated with a lower cumulative occurrence rate of hypertension over time (coefficient = -0.439; p = 0.003), while smoking >15 cigarettes/day was associated with a higher rate over time (coefficient = 0.485; p = 0.004). The results of the present study highlight the importance of weight control in young adulthood and of avoiding smoking for preventing hypertension in women aged ≥50 years.
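
The life-table continuation rate used above multiplies, over annual intervals, the conditional probabilities of remaining hypertension-free. A sketch on a synthetic cohort (the onset distribution, censoring age and sample size are invented, not the study's data):

```python
import numpy as np

def continuation_rate(onset, censor, start, stop):
    # Life-table estimate, annual intervals, of the cumulative rate of
    # remaining free of hypertension. `onset` is age at onset (np.inf if
    # never observed); `censor` is the age at last observation.
    s, rates = 1.0, []
    for a in range(start, stop):
        at_risk = np.sum(np.minimum(onset, censor) >= a)
        events = np.sum((onset >= a) & (onset < a + 1) & (onset <= censor))
        if at_risk > 0:
            s *= 1.0 - events / at_risk
        rates.append(s)
    return np.array(rates)

rng = np.random.default_rng(3)
onset = 50.0 + rng.exponential(15.0, 622)   # hypothetical onset ages
censor = np.full(622, 90.0)                 # everyone followed to age 90
rates = continuation_rate(onset, censor, 50, 90)
```

A Cox model would then relate the interval-specific hazards to covariates such as BMI, race and smoking, as in the study.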

Resumo:

OBJECTIVE: To evaluate the effectiveness of a governmental food supplementation program on children's weight gain. METHODS: Cohort study with secondary data on 25,433 low-income children aged six to 24 months who enrolled in a fortified-milk distribution program (Projeto Vivaleite) carried out in the State of São Paulo from 2003 to 2008. Weight gain was measured through weight-for-age z-scores, calculated according to the World Health Organization standard (2007) and obtained, as part of the program's routine, at enrollment and every four months thereafter. Children were divided into three z-score groups at entry: no weight deficit (z > -1); risk of low weight (-2 < z < -1); and low weight (z < -2). Multilevel linear regression (mixed model) was used, allowing the comparison, at each age, of the adjusted mean z-scores of new entrants and of children enrolled for at least four months, adjusted for the correlation between repeated measures. RESULTS: A positive effect of the program on children's weight gain was found, varying with nutritional status at entry: for children entering with no weight deficit, the adjusted mean gain was 0.183 z-score; among those entering at risk of low weight, it was 0.566; and among those entering with low weight, it was 1.005 z-score. CONCLUSIONS: The program is effective for weight gain in children under two years of age, with a more pronounced effect among children entering the program in less favorable weight conditions.

Resumo:

Pires, FO, Hammond, J, Lima-Silva, AE, Bertuzzi, RCM, and Kiss, MAPDM. Ventilation behavior during upper-body incremental exercise. J Strength Cond Res 25(1): 225-230, 2011. This study tested ventilation (VE) behavior during upper-body incremental exercise using mathematical models that calculate 1 or 2 thresholds, and compared the thresholds identified by the mathematical models with those from the V-slope, ventilatory equivalent for oxygen uptake (VE/V̇O2), and ventilatory equivalent for carbon dioxide uptake (VE/V̇CO2) methods. Fourteen rock climbers underwent an upper-body incremental test on a cycle ergometer with increases of approximately 20 W·min⁻¹ until exhaustion at a cranking frequency of approximately 90 rpm. The VE data were smoothed to 10-second averages for VE-time plotting. The bisegmental and 3-segmental linear regression models were calculated from the 1 or 2 intercepts that best divided the VE curve into 2 or 3 linear segments. The ventilatory threshold(s) was determined mathematically from the intercept(s) obtained by the bisegmental and 3-segmental models and by the V-slope model, or visually by VE/V̇O2 and VE/V̇CO2. There was no difference in fit between the bisegmental (mean square error [MSE] = 35.3 ± 32.7 L·min⁻¹) and 3-segmental (MSE = 44.9 ± 47.8 L·min⁻¹) models. There was no difference between the ventilatory threshold identified by the bisegmental model (28.2 ± 6.8 ml·kg⁻¹·min⁻¹) and the second ventilatory threshold identified by the 3-segmental model (30.0 ± 5.1 ml·kg⁻¹·min⁻¹), VE/V̇O2 (28.8 ± 5.5 ml·kg⁻¹·min⁻¹), or V-slope (28.5 ± 5.6 ml·kg⁻¹·min⁻¹). However, the first ventilatory threshold identified by the 3-segmental model (23.1 ± 4.9 ml·kg⁻¹·min⁻¹) or by VE/V̇O2 (24.9 ± 4.4 ml·kg⁻¹·min⁻¹) differed from these four. VE behavior during upper-body exercise tends to show only one ventilatory threshold. These findings have practical implications because this point is frequently used for aerobic training prescription in healthy subjects, athletes, and elderly or diseased populations. The ventilatory threshold identified from the VE curve should be used for aerobic training prescription in healthy subjects and athletes.
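
The bisegmental model amounts to a breakpoint search: fit two connected straight lines at every candidate breakpoint and keep the one with the lowest residual sum of squares. A sketch on a synthetic VE curve (the slopes and the break at 28 ml·kg⁻¹·min⁻¹ are illustrative, not the study's data):

```python
import numpy as np

def bisegmental_fit(x, y):
    # Two connected lines via a hinge term max(x - x_k, 0); the breakpoint
    # x_k minimising the residual sum of squares is the candidate threshold.
    best_rss, best_bp = np.inf, None
    for k in range(2, len(x) - 2):
        A = np.column_stack([np.ones_like(x), x, np.maximum(x - x[k], 0.0)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((y - A @ coef) ** 2)
        if rss < best_rss:
            best_rss, best_bp = rss, x[k]
    return best_bp

x = np.linspace(10.0, 45.0, 70)                        # VO2-like axis
y = 1.2 * x + np.where(x > 28.0, 2.5 * (x - 28.0), 0.0)  # slope change at 28
bp = bisegmental_fit(x, y)
```

The 3-segmental version simply adds a second hinge term and searches over pairs of breakpoints.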

Resumo:

Chemical interesterification is an important technological option for the production of fats targeting commercial applications. Fat blends, formulated as binary blends of palm stearin and palm olein in different ratios, were subjected to chemical interesterification. The following determinations were made before and after the interesterification reactions: fatty acid composition, softening point, melting point, solid fat content and consistency. A multiple regression statistical model was applied to the analytical responses. This study has shown that blending and chemical interesterification are an effective way to modify the physical and chemical properties of palm stearin, palm olein and their blends. The mixtures and chemical interesterification allowed fats with various degrees of plasticity to be obtained, increasing the possibilities for the commercial use of palm stearin and palm olein. (C) 2009 Elsevier Ltd. All rights reserved.
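
A regression model of an analytical response against blend ratio is a small least-squares problem. A sketch with entirely hypothetical melting-point data (the numbers below are invented to show the fit, not the paper's measurements):

```python
import numpy as np

# Hypothetical blend data: palm stearin mass fraction vs melting point (degC)
frac = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
mp = np.array([24.0, 30.5, 36.0, 40.5, 44.0, 46.5])

# Quadratic multiple-regression model: mp = b0 + b1*frac + b2*frac^2
A = np.column_stack([np.ones_like(frac), frac, frac ** 2])
b, *_ = np.linalg.lstsq(A, mp, rcond=None)
pred = A @ b
r2 = 1.0 - np.sum((mp - pred) ** 2) / np.sum((mp - mp.mean()) ** 2)
```

The fitted curvature term (b2 < 0 here) captures the kind of non-additive blending behaviour that makes a plain linear mixing rule inadequate for consistency and solid fat content.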

Resumo:

In general, agricultural products are produced on a large scale, and this output grows in proportion to their consumption. However, another factor also grows proportionally: post-harvest losses, which suggests the use of technologies to increase the utilization of these products, mitigating waste and extending their shelf life, as well as offering the product during the off-season. In the present work, foam-mat drying technology was applied to carrot, beetroot, tomato and strawberry, products widely produced and consumed in Brazil. The four products were subjected to foam-mat drying in a circulated-air dryer at controlled temperatures of 40, 50, 60, 70 and 80 °C. The drying kinetics were described by fitting mathematical models for each drying-air temperature. In addition, a generalized mathematical model fitted by nonlinear regression was proposed. The Page model achieved the best fit to the drying data for all products tested, with a coefficient of determination (R²) above 98% at all temperatures evaluated. It was also possible to model the influence of air temperature on the Page model parameter k by means of an exponential model. The effective diffusion coefficient increased with temperature, with values between 10⁻⁸ and 10⁻⁷ m²·s⁻¹ for the process temperatures. The relationship between the effective diffusion coefficient and the drying temperature could be described by the Arrhenius equation.
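
The Page model MR = exp(-k tⁿ) can be fitted by log-linearisation, since ln(-ln MR) = ln k + n ln t is linear in ln t. A sketch on a synthetic moisture-ratio curve (the k, n and time grid are illustrative, not the study's estimates):

```python
import numpy as np

def fit_page(t, mr):
    # Page drying model MR = exp(-k * t**n), linearised as
    # ln(-ln MR) = ln k + n ln t and fitted by least squares.
    # (An Arrhenius fit of k vs temperature would follow the same pattern
    # with ln k regressed on 1/T.)
    Y = np.log(-np.log(mr))
    A = np.column_stack([np.ones_like(t), np.log(t)])
    (lnk, n), *_ = np.linalg.lstsq(A, Y, rcond=None)
    return np.exp(lnk), n

t = np.linspace(0.5, 8.0, 16)        # drying time, h (illustrative)
mr = np.exp(-0.15 * t ** 1.3)        # synthetic moisture ratio, k=0.15, n=1.3
k, n = fit_page(t, mr)
```

Repeating this fit at each drying-air temperature gives the per-temperature k values that the study then modelled as an exponential function of temperature.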

Resumo:

There is a general consensus that in a competitive business environment, firms' performance will depend on their capacity to innovate. To clarify how, when and to what extent innovation affects the market and financial performance of firms, the authors deploy a seemingly unrelated regression equations (SUR) model to examine innovation in over 500 Portuguese firms from 1998 to 2004. The results confirm, as theorists have frequently assumed, that innovation positively affects firms' performance; but they also suggest that the reverse is true, a result that is less intuitively obvious, given the complexity of the innovation process and of local, national and global competitive environments.

Resumo:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. 
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, which plays the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. 
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). 
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. 
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
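
The linear mixing model with Dirichlet abundances described above is easy to simulate, and with known signatures the unmixing step is per-pixel least squares. A toy sketch (signatures, band count and noise level are arbitrary; a fully constrained solver would additionally enforce nonnegativity and sum-to-one on the estimates):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical endmember signatures: 50 bands x 3 endmembers
M = rng.uniform(0.1, 1.0, size=(50, 3))

# Dirichlet abundances: nonnegative and summing to one (full additivity)
a = rng.dirichlet(np.ones(3), size=200)

# Linear mixing model: observed spectra = abundances * signatures' + noise
Y = a @ M.T + 0.001 * rng.standard_normal((200, 50))

# With known signatures, unmixing is unconstrained least squares per pixel
a_hat = np.linalg.lstsq(M, Y.T, rcond=None)[0].T
err = np.abs(a_hat - a).max()
```

Note how the sum-to-one constraint on `a` makes the components mutually dependent, which is exactly why plain ICA/IFA independence assumptions break down on such data.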

Resumo:

INTRODUCTION AND AIMS: Adult orthotopic liver transplantation (OLT) is associated with considerable blood product requirements. The aim of this study was to assess the ability of preoperative information to predict intraoperative red blood cell (RBC) transfusion requirements among adult liver recipients. METHODS: Preoperative variables with previously demonstrated relationships to intraoperative RBC transfusion were identified from the literature: sex, age, pathology, prothrombin time (PT), factor V, hemoglobin (Hb), and platelet count (Plt). These variables were then retrospectively collected from 758 consecutive adult patients undergoing OLT from 1997 to 2007. Relationships between these variables and intraoperative blood transfusion requirements were examined by both univariate analysis and multiple linear regression analysis. RESULTS: Univariate analysis confirmed significant associations between RBC transfusion and PT, factor V, Hb, Plt, pathology, and age (all P values < .001). However, stepwise backward multivariate analysis excluded Plt and factor V from the multiple linear regression model. The variables included in the final predictive model were PT, Hb, age, and pathology. Patients suffering from liver carcinoma required more blood products than those suffering from other pathologies. Yet the overall predictive power of the final model was limited (R(2) = .308; adjusted R(2) = .30). CONCLUSION: Preoperative variables have limited predictive power for intraoperative RBC transfusion requirements even when significant statistical associations exist, identifying only a small portion of the observed total transfusion variability. Preoperative PT, Hb, age, and liver pathology seem to be the most significant predictive factors, but other factors such as the severity of liver disease, surgical technique, medical experience in liver transplantation, and other noncontrollable human variables may play important roles in determining the final transfusion requirements.
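
The gap between the R² and adjusted R² reported above is a routine computation once the model is fitted. A sketch on simulated data, where the coefficients, sample size and noise level are invented and chosen only so that the fit is similarly modest:

```python
import numpy as np

def adjusted_r2(y, y_hat, p):
    # R^2 and adjusted R^2 for a fit with p predictors (intercept excluded);
    # the adjustment penalises model size: 1 - (1 - R^2)(n - 1)/(n - p - 1).
    n = len(y)
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return r2, 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

rng = np.random.default_rng(5)
n, p = 758, 4                                   # same n and predictor count as the study
X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])
beta = np.array([10.0, 1.0, 0.5, 0.3, 0.2])     # hypothetical effects
y = X @ beta + 2.0 * rng.standard_normal(n)     # noise dominates, as in the study
b, *_ = np.linalg.lstsq(X, y, rcond=None)
r2, adj = adjusted_r2(y, X @ b, p)
```

With a large n relative to p the two statistics are close, which is consistent with the small .308 vs .30 gap reported here.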