941 results for Log-linear model


Relevance:

90.00%

Publisher:

Abstract:

Ussing [1] considered the steady flux of a single chemical component diffusing through a membrane under the influence of chemical potentials and derived, from his linear model, an expression for the ratio of this flux to that of the complementary experiment in which the boundary conditions were interchanged. Here, an extension of Ussing's flux ratio theorem is obtained for n chemically interacting components governed by a linear system of diffusion-migration equations that may also incorporate linear temporary trapping reactions. The determinants of the output flux matrices for complementary experiments are shown to satisfy an Ussing flux ratio formula for steady-state conditions of the same form as for the well-known one-component case. (C) 2000 Elsevier Science Ltd. All rights reserved.
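
For orientation, the classical one-component case can be checked numerically: for steady diffusion-migration across a slab of thickness L with diffusivity D and drift velocity v, the fluxes of the two complementary experiments satisfy |J12|/|J21| = (c0/cL)·exp(vL/D). The finite-difference sketch below (Python; all parameter values and the grid are illustrative assumptions, not taken from the paper) verifies this ratio.

```python
import numpy as np

def steady_flux(c_left, c_right, D=1.0, v=0.5, L=1.0, n=401):
    """Steady flux J = v*c - D*dc/dx across a slab with Dirichlet boundary
    concentrations, from a finite-difference solve of D*c'' - v*c' = 0."""
    h = L / (n - 1)
    main = np.full(n - 2, -2.0 * D / h**2)
    upper = np.full(n - 3, D / h**2 - v / (2 * h))   # coefficient of c[i+1]
    lower = np.full(n - 3, D / h**2 + v / (2 * h))   # coefficient of c[i-1]
    A = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)
    b = np.zeros(n - 2)
    b[0] -= (D / h**2 + v / (2 * h)) * c_left        # move known boundary values to RHS
    b[-1] -= (D / h**2 - v / (2 * h)) * c_right
    c = np.concatenate(([c_left], np.linalg.solve(A, b), [c_right]))
    mid = n // 2
    dcdx = (c[mid + 1] - c[mid - 1]) / (2 * h)
    return v * c[mid] - D * dcdx

c0, cL, D, v, L = 2.0, 0.5, 1.0, 0.5, 1.0
J12 = steady_flux(c0, 0.0, D, v, L)   # experiment: source on side 0, sink on side L
J21 = steady_flux(0.0, cL, D, v, L)   # complementary experiment with sides swapped
print(abs(J12) / abs(J21), (c0 / cL) * np.exp(v * L / D))   # the two should agree
```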

Relevance:

90.00%

Publisher:

Abstract:

Two methods were compared for determining the concentration of penetrative biomass during growth of Rhizopus oligosporus on an artificial solid substrate consisting of an inert gel and starch as the sole source of carbon and energy. The first method was based on the use of a hand microtome to make sections of approximately 0.2- to 0.4-mm thickness parallel to the substrate surface and the determination of the glucosamine content in each slice. Use of glucosamine measurements to estimate biomass concentrations was shown to be problematic due to the large variations in glucosamine content with mycelial age. The second was a novel method based on the use of confocal scanning laser microscopy to estimate the fractional volume occupied by the biomass. Although it is not simple to translate fractional volumes into dry weights of hyphae due to the lack of experimentally determined conversion factors, the fractional volumes are in themselves useful for characterizing fungal penetration into the substrate. Penetrative biomass in the artificial model substrate showed two forms of growth: an indistinct mass in the region close to the substrate surface, and a few hyphae penetrating perpendicular to the surface in regions further away from it. The biomass profiles against depth obtained from the confocal microscopy showed two linear regions on log-linear plots, which are possibly related to different oxygen availability at different depths within the substrate. Confocal microscopy has the potential to be a powerful tool in the investigation of fungal growth mechanisms in solid-state fermentation. (C) 2003 Wiley Periodicals, Inc.
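
The two linear regions on a log-linear plot correspond to two exponential decay rates of biomass fraction with depth. A minimal sketch of how such a profile can be summarized (Python; the depths, fractions, and breakpoint below are synthetic illustrations, not the measured data):

```python
import numpy as np

# Synthetic depth profile (mm) and biomass volume fractions -- illustrative only.
rng = np.random.default_rng(0)
depth = np.linspace(0.0, 2.0, 21)
frac = np.where(depth < 0.8,
                0.30 * np.exp(-1.5 * depth),
                0.30 * np.exp(-1.5 * 0.8) * np.exp(-4.0 * (depth - 0.8)))
frac = frac * np.exp(0.05 * rng.normal(size=depth.size))   # multiplicative noise

def fit_two_regions(depth, frac, breakpoint):
    """Fit straight lines to log(fraction) on either side of a candidate
    breakpoint, i.e. the two slopes seen on a log-linear plot."""
    y = np.log(frac)
    slopes = []
    for mask in (depth <= breakpoint, depth > breakpoint):
        slope, _ = np.polyfit(depth[mask], y[mask], 1)
        slopes.append(slope)
    return slopes

near, deep = fit_two_regions(depth, frac, breakpoint=0.8)
print(f"decay rate near surface: {-near:.2f} /mm, deeper region: {-deep:.2f} /mm")
```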

Relevance:

90.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to ensure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented toward real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI, and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
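
As background for the unmixing step, a minimal sketch of linear spectral unmixing with known endmember signatures is shown below (Python). It uses nonnegative least squares followed by renormalization to impose the sum-to-one constraint; the spectra and abundances are synthetic, and this is not the VCA algorithm itself, only the linear mixing model it builds on.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
bands, p = 50, 3
M = rng.uniform(0.1, 0.9, size=(bands, p))          # endmember signatures (columns)
a_true = np.array([0.6, 0.3, 0.1])                   # abundance fractions, sum to one
pixel = M @ a_true + 0.005 * rng.normal(size=bands)  # observed mixed pixel

a_hat, _ = nnls(M, pixel)    # nonnegativity constraint
a_hat /= a_hat.sum()         # renormalise to impose the sum-to-one constraint
print(np.round(a_hat, 3))    # close to a_true
```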

Relevance:

90.00%

Publisher:

Abstract:

Log-linear models considerably enrich the analysis and interpretation of contingency tables. Although their importance was recognised at the theoretical level a long time ago, their practical application was recognised only relatively recently, owing above all to the computational difficulties inherent to these models, which were fully resolved only with the development of computers and suitable software. This work presents the basic methods of log-linear analysis for two-way and three-way contingency tables.
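
As an illustration of the basic two-way case, a log-linear independence model can be fitted as a Poisson GLM on the cell counts; the table below is synthetic and the code is a minimal sketch, assuming statsmodels is available.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative 2x3 contingency table of counts (synthetic values only).
data = pd.DataFrame({
    "count": [25, 40, 35, 15, 30, 55],
    "row":   ["A", "A", "A", "B", "B", "B"],
    "col":   ["x", "y", "z", "x", "y", "z"],
})

# Independence log-linear model: log(mu_ij) = lambda + lambda_i(row) + lambda_j(col)
indep = smf.glm("count ~ C(row) + C(col)", data=data,
                family=sm.families.Poisson()).fit()

# Residual deviance (G^2) against the saturated model tests independence
# on (I - 1)(J - 1) degrees of freedom.
print(round(indep.deviance, 2), int(indep.df_resid))
```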

Relevance:

90.00%

Publisher:

Abstract:

Thesis submitted in partial fulfillment of the requirements for the Degree of Doctor of Statistics and Information Management

Relevance:

90.00%

Publisher:

Abstract:

ABSTRACT - Problem statement: The Portuguese health system has reached such a level of inefficiency that it urgently needs to be restructured in order to become sustainable. To reach this level of sustainability, a number of solutions can be considered, among which we highlight integrated care. This concept requires the different levels of care to follow a single path, working in a coordinated and continuous manner. Integrated care can be implemented through several typologies, most notably clinical integration, which in turn comprises continuity of care. Thus, by measuring continuity of care, the integration of care is to a certain extent quantified. Objectives: To assess the impact of continuity of care on costs. Methods: The data were analysed with descriptive statistics to check their degree of normality. Student's t-tests were then applied to test for statistically significant differences between the means of the different variables. The degree of association between variables was then studied using Spearman's correlation. Finally, a log-linear regression model was used to test for a relationship between the various cost categories and the continuity indices. Based on this model, two scenarios were simulated to estimate the impact of maximising continuity of care on the various cost categories. Conclusions: Overall, only a very slight relationship between continuity of care and costs is observed. More specifically, a longer-lasting physician-patient relationship results in cost savings, regardless of the cost category. Regarding the density of the relationship, a positive association is observed between it and total costs and the cost of complementary diagnostic and therapeutic procedures (MCDT). However, density shows a negative association with drug costs and staff costs. When the impact of continuity of care on costs is analysed, it is concluded that only the duration of the physician-patient relationship has a negative impact on all cost categories except drug costs. The density of care has a negative impact only on staff costs, positively influencing the other cost categories. Extrapolating to the national level, if the density of a relationship were maximised, there would be a saving of 0.18 euros per year in staff costs.
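
A minimal sketch of the kind of log-linear cost regression described above (Python; the variable names, continuity indices, and data are hypothetical stand-ins, not the study's data): the coefficients on the continuity indices approximate relative changes in cost.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "duration_index": rng.uniform(0, 1, n),   # hypothetical continuity-of-care indices
    "density_index":  rng.uniform(0, 1, n),
})
df["total_cost"] = np.exp(5.0 - 0.3 * df["duration_index"]
                          + 0.1 * df["density_index"]
                          + rng.normal(0, 0.4, n))

print(spearmanr(df["duration_index"], df["total_cost"]).correlation)

# Log-linear cost regression: each coefficient approximates the relative change
# in cost per unit change in the corresponding continuity index.
fit = smf.ols("np.log(total_cost) ~ duration_index + density_index", data=df).fit()
print(fit.params)
```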

Relevance:

90.00%

Publisher:

Abstract:

This paper reports on one of the first empirical attempts to investigate small firm growth and survival, and their determinants, in the People's Republic of China. The work is based on fieldwork evidence gathered from a sample of 83 Chinese private firms (mainly SMEs) collected initially by face-to-face interviews, and subsequently by follow-up telephone interviews a year later. We extend the models of Gibrat (1931) and Jovanovic (1982), which traditionally focus on size and age alone (e.g. Brock and Evans, 1986), to a ‘comprehensive’ growth model with two types of additional explanatory variables: firm-specific (e.g. business planning); and environmental (e.g. choice of location). We estimate two econometric models: a ‘basic’ age-size-growth model; and a ‘comprehensive’ growth model, using Heckman’s two-step regression procedure. Estimation is by log-linear regression on cross-section data, with corrections for sample selection bias and heteroskedasticity. Our results refute a pure Gibrat model (but support a more general variant) and support the learning model, as regards the consequences of size and age for growth; and our extension to a comprehensive model highlights the importance of location choice and customer orientation for the growth of Chinese private firms. In the latter model, growth is explained by variables such as planning, R&D orientation, market competition, and elasticity of demand, as well as by control variables. Our work on small firm growth achieves two things. First, it upholds the validity of ‘basic’ size-age-growth models, and successfully applies them to the Chinese economy. Second, it extends the compass of such models to a ‘comprehensive’ growth model incorporating firm-specific and environmental variables.
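
A minimal sketch of the two-step idea with a log-linear growth equation (Python; all data and variable names are synthetic illustrations, not the authors' sample): a probit survival equation supplies the inverse Mills ratio, which then enters the growth regression estimated by OLS with heteroskedasticity-robust standard errors.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 400
size0 = np.exp(rng.normal(2.0, 0.8, n))                 # initial firm size (synthetic)
age = rng.integers(1, 15, n).astype(float)
z = 0.5 + 0.3 * np.log(size0) - 0.05 * age + rng.normal(0, 1, n)
survived = (z > 0).astype(int)                          # firm observed at follow-up

# Step 1: probit survival equation, then the inverse Mills ratio
X_sel = sm.add_constant(np.column_stack([np.log(size0), age]))
probit = sm.Probit(survived, X_sel).fit(disp=0)
xb = X_sel @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)

# Step 2: log-linear growth equation on survivors, Mills ratio as a regressor,
# with heteroskedasticity-robust (HC1) standard errors
log_growth = -0.2 * np.log(size0) - 0.03 * age + rng.normal(0, 0.3, n)
keep = survived == 1
X_out = sm.add_constant(np.column_stack([np.log(size0), age, mills])[keep])
ols = sm.OLS(log_growth[keep], X_out).fit(cov_type="HC1")
print(ols.params)   # a negative size coefficient is evidence against pure Gibrat growth
```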

Relevance:

90.00%

Publisher:

Abstract:

This study addresses the effect of the presence of a unit root on growth rate estimation by the least-squares approach. We argue that when the log of a variable contains a unit root, i.e., it is not stationary, then the growth rate estimate from the log-linear trend model is not a valid representation of the actual growth of the series. In fact, under such a situation, we show that the growth of the series is the cumulative impact of a stochastic process. As such, the growth estimate from such a model is just a spurious representation of the actual growth of the series, which we refer to as a “pseudo growth rate”. Hence such an estimate should be interpreted with caution. On the other hand, we highlight that the statistical representation of a series as containing a unit root is not easy to separate from an alternative description which represents the series as fundamentally deterministic (no unit root) but containing a structural break. In search of a way around this, our study presents a survey of both the theoretical and empirical literature on unit root tests that take into account possible structural breaks. We show that when a series is trend-stationary with breaks, it is possible to use the log-linear trend model to obtain well-defined estimates of growth rates for sub-periods which are valid representations of the actual growth of the series. Finally, to highlight the above issues, we carry out an empirical application whereby we estimate meaningful growth rates of real wages per worker for 51 industries from the organised manufacturing sector in India for the period 1973-2003, which are not only unbiased but also asymptotically efficient. We use these growth rate estimates to highlight the evolving inter-industry wage structure in India.
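
A minimal sketch of the recommended practice (Python; the series is simulated): test for a unit root before reading the slope of a log-linear trend regression as a growth rate.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
t = np.arange(31)                                        # e.g. 31 annual observations
log_w = 4.0 + 0.02 * t + rng.normal(0, 0.03, t.size)     # trend-stationary log wages

# Guard: the slope of a log-linear trend is a meaningful growth rate only if the
# series is (trend-)stationary; with a unit root it is a "pseudo growth rate".
adf_stat, pvalue, *_ = adfuller(log_w, regression="ct")
print(f"ADF p-value: {pvalue:.3f}")

fit = sm.OLS(log_w, sm.add_constant(t)).fit()
print(f"estimated annual growth rate: {fit.params[1]:.3%}")
```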

Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVE: The purpose of this study was to compare the use of different variables to measure the clinical wear of two denture tooth materials in two analysis centers. METHODS: Twelve edentulous patients were provided with full dentures. Two different denture tooth materials (experimental material and control) were placed randomly in accordance with the split-mouth design. For wear measurements, impressions were made after an adjustment phase of 1-2 weeks and after 6, 12, 18, and 24 months. The occlusal wear of the posterior denture teeth of 11 subjects was assessed in two study centers by use of plaster replicas and 3D laser-scanning methods. In both centers sequential scans of the occlusal surfaces were digitized and superimposed. Wear was described by use of four different variables. Statistical analysis was performed after log-transformation of the wear data by use of the Pearson and Lin correlation and by use of a mixed linear model. RESULTS: Mean occlusal vertical wear of the denture teeth after 24 months was between 120 μm and 212 μm, depending on wear variable and material. For three of the four variables, wear of the experimental material was statistically significantly less than that of the control. Comparison of the two study centers, however, revealed that the correlation of the wear variables was only moderate, whereas strong correlation was observed among the different wear variables evaluated by each center. SIGNIFICANCE: Moderate correlation was observed for clinical wear measurements by optical 3D laser scanning in two different study centers. For the two denture tooth materials, wear measurements limited to the attrition zones led to the same qualitative assessment.
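
Between-center agreement of this kind is often summarized with Lin's concordance correlation coefficient; a minimal sketch on synthetic log-transformed wear values (Python; not the study's measurements):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2.0 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

rng = np.random.default_rng(5)
true_log_wear = np.log(rng.uniform(80, 250, 11))        # 11 subjects, synthetic values
center_a = true_log_wear + rng.normal(0.00, 0.15, 11)
center_b = true_log_wear + rng.normal(0.05, 0.15, 11)   # slight systematic offset
print(f"Lin CCC between centers: {lin_ccc(center_a, center_b):.2f}")
```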

Relevance:

90.00%

Publisher:

Abstract:

This paper introduces local distance-based generalized linear models. These models extend (weighted) distance-based linear models, first with the generalized linear model concept and then by localizing. Distances between individuals are the only predictor information needed to fit these models. They are therefore applicable to mixed (qualitative and quantitative) explanatory variables or when the regressor is of functional type. Models can be fitted and analysed with the R package dbstats, which implements several distance-based prediction methods.
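
The models in the abstract are fitted with the R package dbstats; purely as an illustration of the underlying idea (in Python, with synthetic data), a distance-based linear model can be sketched by converting the distance matrix to principal coordinates and regressing the response on the leading coordinates:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def db_lm_predictions(D, y, k=2):
    """Distance-based linear model sketch: double-centre the squared distances,
    extract k principal coordinates, and fit ordinary least squares on them."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J                      # Gower-centred inner products
    vals, vecs = np.linalg.eigh(G)
    order = np.argsort(vals)[::-1][:k]               # leading principal coordinates
    X = vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))
    X = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

rng = np.random.default_rng(6)
Z = rng.normal(size=(40, 3))                         # predictors enter only through D
D = squareform(pdist(Z))
y = Z[:, 0] - 0.5 * Z[:, 1] + rng.normal(0, 0.2, 40)
print(np.corrcoef(y, db_lm_predictions(D, y, k=3))[0, 1])
```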

Relevance:

90.00%

Publisher:

Abstract:

Patterns of cigarette smoking in Switzerland were analyzed on the basis of sales data (available since 1924) and national health surveys conducted in the last decade. There was a steady and substantial increase in cigarettes sales up to the early 1970s. Thereafter, the curve tended to level off around an average value of 3,000 cigarettes per adult per year. According to the 1981-1983 National Health Survey, 37% of Swiss men were current smokers, 25% were ex-smokers, and 39% were never smokers. Corresponding proportions in women were 22, 11, and 67%. Among men, smoking prevalence was higher in lower social classes, and some moderate decline was apparent from survey data over the period 1975-1981, mostly in later middle-age. Trends in lung cancer death certification rates over the period 1950-1984 were analyzed using standard cross-sectional methods and a log-linear Poisson model to isolate the effects of age, birth cohort, and year of death. Mortality from lung cancer increased substantially among Swiss men between the early 1950s and the late 1970s, and levelled off (around a value of 70/100,000 men) thereafter. Among women, there has been a steady upward trend which started in the mid-1960s, and continues to climb steadily, although lung cancer mortality is still considerably lower in absolute terms (around 8/100,000 women) than in several North European countries or in North America. Cohort analyses indicate that the peak rates in men were reached by the generation born around 1910 and mortality stabilized for subsequent generations up to the 1930 birth cohort. Among females, marked increases were observed in each subsequent birth cohort. This pattern of trends is consistent with available information on smoking prevalence in successive generations, showing a peak among men for the 1910 cohort, but steady upward trends among females. Over the period 1980-1984, about 90% of lung cancer deaths among Swiss men and about 40% of those among women could be attributed to smoking (overall proportion, 85%).
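
A minimal sketch of a log-linear Poisson model with age and cohort terms (Python; counts and person-years are simulated, and the usual caveat applies that age, period, and cohort are linearly dependent, so not all three linear effects are identifiable):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
ages = np.arange(40, 85, 5)
periods = np.arange(1950, 1985, 5)
grid = pd.DataFrame([(a, p) for a in ages for p in periods], columns=["age", "period"])
grid["cohort"] = grid["period"] - grid["age"]
grid["pyears"] = 1e5                                          # person-years at risk
grid["deaths"] = rng.poisson(grid["pyears"] * 1e-4 * np.exp(0.05 * (grid["age"] - 60)))

# Log-linear Poisson model with a person-years exposure; the categorical age and
# cohort terms correspond to the age and birth-cohort effects discussed above.
fit = smf.glm("deaths ~ C(age) + C(cohort)", data=grid,
              family=sm.families.Poisson(), exposure=grid["pyears"]).fit()
print(round(fit.deviance, 1), int(fit.df_resid))
```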

Relevance:

90.00%

Publisher:

Abstract:

An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling.
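
For readers new to GLMs in this setting, a minimal species-distribution-style example (Python; presences and environmental predictors are simulated) fits a binomial GLM with a quadratic term for a unimodal response:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 300
env = pd.DataFrame({"elevation": rng.uniform(200, 2500, n),
                    "rainfall": rng.uniform(400, 2000, n)})
# Hypothetical unimodal response to elevation and positive response to rainfall
eta = (-4 + 0.004 * env["elevation"] - 1.2e-6 * env["elevation"] ** 2
       + 0.002 * env["rainfall"])
env["present"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# Binomial GLM (logistic link); the quadratic term lets the response peak at an
# intermediate elevation, a common shape in species distribution modelling.
glm = smf.glm("present ~ elevation + I(elevation**2) + rainfall", data=env,
              family=sm.families.Binomial()).fit()
print(glm.params)
```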

Relevance:

90.00%

Publisher:

Abstract:

The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based solely on two GB/SA parameters, was tested on two different external sets of molecules. On the Martel druglike test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to that of six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and tridimensional molecular graphics capability lays the foundations for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
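
A minimal sketch of the model class used here, a multiple linear regression on two solvation descriptors evaluated by cross-validation with r, MAE, and RMSE (Python; the descriptor values are synthetic stand-ins for the GB/SA terms, not the paper's data set):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(9)
n = 1000
# Hypothetical stand-ins for the two GB/SA solvation descriptors
X = np.column_stack([rng.normal(-10, 3, n), rng.normal(50, 15, n)])
logp = 0.25 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(0, 0.6, n)

pred = cross_val_predict(LinearRegression(), X, logp, cv=5)   # 5-fold cross-validation
r = np.corrcoef(logp, pred)[0, 1]
mae = mean_absolute_error(logp, pred)
rmse = mean_squared_error(logp, pred) ** 0.5
print(f"r={r:.2f}  MAE={mae:.2f}  RMSE={rmse:.2f}")
```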

Relevance:

90.00%

Publisher:

Abstract:

Objective: Health status measures usually have an asymmetric distribution and present a high percentage of respondents with the best possible score (ceiling effect), especially when they are assessed in the overall population. Different methods to model this type of variable have been proposed that take into account the ceiling effect: the tobit models, the Censored Least Absolute Deviations (CLAD) models or the two-part models, among others. The objective of this work was to describe the tobit model and compare it with the Ordinary Least Squares (OLS) model, which ignores the ceiling effect. Methods: Two different data sets were used to compare both models: a) real data coming from the European Study of Mental Disorders (ESEMeD), in order to model the EQ-5D index, one of the utility measures most commonly used for the evaluation of health status; and b) data obtained from simulation. Cross-validation was used to compare the predicted values of the tobit and OLS models. The following estimators were compared: the percentage of absolute error (R1), the percentage of squared error (R2), the Mean Squared Error (MSE) and the Mean Absolute Prediction Error (MAPE). Different datasets were created for different values of the error variance and different percentages of individuals with ceiling effect. The estimates of the coefficients, the percentage of explained variance and the plots of residuals versus predicted values obtained under each model were compared. Results: With regard to the results of the ESEMeD study, the predicted values obtained with the OLS model and those obtained with the tobit model were very similar. The regression coefficients of the linear model were consistently smaller than those from the tobit model. In the simulation study, we observed that when the error variance was small (s=1), the tobit model presented unbiased estimates of the coefficients and accurate predicted values, especially when the percentage of individuals with the highest possible score was small. However, when the error variance was greater (s=10 or s=20), the percentage of explained variance for the tobit model and the predicted values were more similar to those obtained with an OLS model. Conclusions: The proportion of variability accounted for by the models and the percentage of individuals with the highest possible score have an important effect on the performance of the tobit model in comparison with the linear model.
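
A minimal sketch contrasting the two estimators on simulated data with a ceiling (Python; the censoring point, coefficients, and sample are illustrative assumptions): the tobit log-likelihood treats ceiling observations as right-censored, while OLS ignores the censoring and attenuates the slope.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(10)
n, ceiling = 1000, 1.0
x = rng.normal(size=n)
y_star = 0.7 + 0.2 * x + rng.normal(0, 0.25, n)   # latent health index
y = np.minimum(y_star, ceiling)                   # observed index with ceiling effect

def negloglik(theta, x, y, c):
    """Negative log-likelihood of a tobit model right-censored at c."""
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * x
    censored = y >= c
    ll = np.where(censored,
                  norm.logcdf((mu - c) / sigma),          # P(y* >= c)
                  norm.logpdf(y, loc=mu, scale=sigma))    # density of uncensored y
    return -ll.sum()

res = minimize(negloglik, x0=np.zeros(3), args=(x, y, ceiling), method="BFGS")
tobit_slope = res.x[1]
ols_slope = np.polyfit(x, y, 1)[0]                # OLS ignores the ceiling
print(f"tobit slope: {tobit_slope:.3f}  OLS slope: {ols_slope:.3f}  (true value 0.2)")
```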