907 results for Generalized linear mixed model


Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^⌊d/2⌋+1), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to ensure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm, termed vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
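The core geometric step described above — iteratively projecting the data onto a direction orthogonal to the subspace spanned by the endmembers found so far and taking the extreme of the projection — can be sketched in a few lines of NumPy. This is a simplified illustration of the idea, not the published VCA implementation; the function name, the random choice of each projection direction, and the omission of the SNR-dependent preprocessing are all simplifications.

```python
import numpy as np

def extract_endmembers(Y, p, seed=0):
    """Toy pure-pixel endmember extraction by iterative orthogonal projection.

    Y : (d, n) array of spectral vectors (d bands or subspace dimensions, n pixels),
        assumed to follow the linear mixing model with at least one pure pixel
        per endmember.
    p : number of endmembers to extract.
    Returns the column indices of the selected (purest) pixels.
    """
    rng = np.random.default_rng(seed)
    d, _ = Y.shape
    E = np.zeros((d, p))                      # endmember signatures found so far
    indices = []
    for i in range(p):
        w = rng.standard_normal(d)            # random direction
        if i > 0:
            # Project the direction onto the orthogonal complement of the
            # subspace spanned by the endmembers already determined.
            A = E[:, :i]
            w = w - A @ (np.linalg.pinv(A) @ w)
        # The pixel with the largest absolute projection is the extreme of the
        # projection and is taken as the new endmember.
        k = int(np.argmax(np.abs(w @ Y)))
        E[:, i] = Y[:, k]
        indices.append(k)
    return indices
```

Under the pure-pixel assumption stated above, the returned indices point at candidate endmember spectra in the original image.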

Abstract:

Hyperspectral unmixing methods aim at the decomposition of a hyperspectral image into a collection of endmember signatures, i.e., the radiance or reflectance of the materials present in the scene, and the corresponding abundance fractions at each pixel in the image. This paper introduces a new unmixing method termed dependent component analysis (DECA). This method is blind and fully automatic, and it overcomes the limitations of unmixing methods based on independent component analysis (ICA) and on geometry-based approaches. DECA is based on the linear mixture model, i.e., each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the non-negativity and constant-sum constraints imposed by the acquisition process. The endmember signatures are inferred by a generalized expectation-maximization (GEM) type algorithm. The paper illustrates the effectiveness of DECA on synthetic and real hyperspectral images.
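As a concrete illustration of the generative model DECA assumes — a linear mixture of endmember signatures with abundance fractions drawn from Dirichlet densities, so that non-negativity and the constant-sum constraint hold by construction — the following sketch simulates such data; the endmember matrix, Dirichlet parameters, and noise level are made up for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical endmember signatures: 3 materials observed in 50 spectral bands.
M = rng.uniform(0.1, 0.9, size=(50, 3))

# Abundance fractions drawn from a Dirichlet density: non-negative and summing
# to one, exactly the constraints imposed by the acquisition process.
alpha = np.array([2.0, 1.0, 0.5])          # illustrative Dirichlet parameters
S = rng.dirichlet(alpha, size=1000)        # (pixels, endmembers)

# Linear mixture model with additive noise: each pixel is a mixture M @ s plus noise.
X = S @ M.T + rng.normal(scale=0.01, size=(1000, 50))
```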

Abstract:

Integrated master's dissertation in Psychology

Abstract:

Observational and experimental studies have shown that increased concealment of bird nests reduces nest predation rates. The objective of the present study was to evaluate differences in predation rates between two experimental manipulations of artificial ground nests (i.e., clearing an area around the artificial nest or leaving it as natural as possible), and to test whether environmental variables also affected nest predation in an undisturbed area of Amazonian forest in eastern Brazil. A generalized linear model was used to examine the influence of five variables (manipulation type, perpendicular distance from the main trail, total basal area of trees surrounding the nest site, understorey density, and liana quantity) on nest predation rates. Model results showed that manipulation type was the only variable that significantly affected nest predation rates. Thus, to avoid systematic biases, the influence of nest site manipulation must be taken into consideration when conducting experiments with artificial nests.
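A generalized linear model of this kind can be fitted, for example, as a binomial GLM in Python's statsmodels; the data file and column names below are hypothetical stand-ins for the five predictors listed in the abstract, not the study's actual variables.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per artificial nest; column names are illustrative.
nests = pd.read_csv("artificial_nests.csv")

# Binomial GLM (logit link) relating nest predation to the five predictors.
fit = smf.glm(
    "depredated ~ C(manipulation) + trail_distance + basal_area"
    " + understorey_density + liana_quantity",
    data=nests,
    family=sm.families.Binomial(),
).fit()
print(fit.summary())
```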

Abstract:

Doctoral thesis in Sciences (specialization in Mathematics)

Abstract:

Objective: To investigate the relation between gait parameters and cognitive impairments in subjects with Parkinson's disease (PD) and Alzheimer's disease (AD) during the performance of dual tasks. Methods: This was a cross-sectional study involving 126 subjects divided into three groups: Parkinson group (n = 43), Alzheimer group (n = 38), and control group (n = 45). The subjects were evaluated using the Timed Up and Go test administered with motor and cognitive distracters. Gait analyses consisted of cadence and speed measurements, with cognitive functions being assessed by the Brief Cognitive Screening Battery and the Clock Drawing Test. Statistical procedures included mixed-design analyses of variance to compare gait patterns between groups and tasks, and a linear regression model to investigate the influence of cognitive functions in this process. A 5% significance level was adopted. Results: Regarding speed, the data show a significant group vs. task interaction (p = 0.009), with worse performance of subjects with PD in the motor dual task and of subjects with AD in the cognitive dual task. With respect to cadence, no statistically significant group vs. task interaction was seen (p = 0.105), indicating little interference of the clinical conditions on this parameter. The linear regression model showed that up to 45.79% of the variance in gait can be explained by the interference of cognitive processes. Conclusion: Dual-task activities affect gait pattern in subjects with PD and AD. Differences between groups reflect peculiarities of each disease and show a direct interference of cognitive processes on complex tasks.
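The regression step reported above (cognitive measures explaining up to 45.79% of the variance in gait) corresponds to an ordinary least-squares fit whose R² is read off directly; the variable names in the sketch below are hypothetical, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant with dual-task gait speed and
# cognitive scores (column names are illustrative).
gait = pd.read_csv("dual_task_gait.csv")

# Linear regression of dual-task gait speed on cognitive measures; R-squared
# plays the role of the "variance in gait explained by cognitive processes".
fit = smf.ols("dual_task_speed ~ bcsb_score + clock_drawing", data=gait).fit()
print(fit.rsquared)
```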

Abstract:

Curcumin and caffeine (used as lipophilic and hydrophilic model compounds, respectively) were successfully encapsulated in lactoferrin-glycomacropeptide (Lf-GMP) nanohydrogels by thermal gelation, showing high encapsulation efficiencies (>90 %). FTIR spectroscopy confirmed the encapsulation of the bioactive compounds in Lf-GMP nanohydrogels and revealed that different interactions occur with the nanohydrogel matrix depending on the encapsulated compound. The successful encapsulation of the bioactive compounds in Lf-GMP nanohydrogels was also confirmed by fluorescence measurements and confocal laser scanning microscopy. TEM images showed that the loaded nanohydrogels maintain their spherical shape, with sizes of 112 and 126 nm for curcumin and caffeine encapsulated in Lf-GMP nanohydrogels, respectively; in both cases a polydispersity of 0.2 was obtained. The release mechanisms of the bioactive compounds through Lf-GMP nanohydrogels were evaluated at pH 2 and pH 7 by fitting the Linear Superimposition Model to the experimental data. The release of the bioactive compounds was found to be pH-dependent: at pH 2, relaxation is the governing phenomenon for both curcumin and caffeine, while at pH 7 Fickian diffusion is the main mechanism of caffeine release, whereas curcumin was not released through Lf-GMP nanohydrogels.
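Fitting the Linear Superimposition Model to release data amounts to a nonlinear least-squares fit of a curve that superimposes a Fickian-diffusion term and a polymer-relaxation term. The simplified two-exponential form and the data points below are illustrative only, not the measurements reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def lsm(t, x_f, k_f, k_r):
    """Simplified two-term Linear Superimposition Model: the fraction released is
    a weighted sum of a Fickian-like term and a polymer-relaxation term."""
    return x_f * (1 - np.exp(-k_f * t)) + (1 - x_f) * (1 - np.exp(-k_r * t))

# Hypothetical release data: time (h) and cumulative fraction released.
t = np.array([0.5, 1, 2, 4, 8, 12, 24])
frac = np.array([0.10, 0.18, 0.31, 0.47, 0.66, 0.78, 0.92])

params, _ = curve_fit(lsm, t, frac, p0=[0.5, 0.5, 0.05], bounds=(0, [1, 10, 10]))
print(params)  # fitted Fickian fraction and rate constants
```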

Abstract:

experimental design, mixed model, random coefficient regression model, population pharmacokinetics, approximate design

Abstract:

The determination of characteristic cardiac parameters, such as displacement, stress, and strain distribution, is essential for an understanding of the mechanics of the heart. The calculation of these parameters has until recently been limited by the use of idealised mathematical representations of biventricular geometries and by the application of simple material laws. On the basis of 20 short-axis heart slices, and considering both linear and nonlinear material behaviour, we have developed an FE model with about 100,000 degrees of freedom. Marching Cubes and Phong's incremental shading technique were used to visualise the three-dimensional geometry. In a quasistatic FE analysis, continuous distributions of regional stress and strain corresponding to the end-systolic state were calculated. Substantial regional variation of the von Mises stress and the total strain energy was observed at all levels of the heart model. The results of the linear elastic model and of the model with a nonlinear material description (Mooney-Rivlin) were compared. While the stress distribution and peak stress values were found to be comparable, the displacement vectors obtained with the nonlinear model were generally larger than in the linear elastic case, indicating the need to include nonlinear effects.
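The Mooney-Rivlin material mentioned above refers to the standard two-parameter hyperelastic strain energy W = c10(I1 − 3) + c01(I2 − 3). The snippet below evaluates it for an illustrative incompressible uniaxial stretch, with made-up constants rather than the values used in the heart model.

```python
import numpy as np

def mooney_rivlin_energy(F, c10, c01):
    """Two-parameter incompressible Mooney-Rivlin strain energy density
    W = c10 * (I1 - 3) + c01 * (I2 - 3), where I1, I2 are invariants of C = F^T F."""
    C = F.T @ F                                  # right Cauchy-Green tensor
    I1 = np.trace(C)
    I2 = 0.5 * (I1**2 - np.trace(C @ C))
    return c10 * (I1 - 3) + c01 * (I2 - 3)

# Illustrative volume-preserving uniaxial stretch of 10% with made-up constants.
lam = 1.10
F = np.diag([lam, lam**-0.5, lam**-0.5])
print(mooney_rivlin_energy(F, c10=2.0, c01=1.0))
```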

Abstract:

1. Model-based approaches have been used increasingly in conservation biology over recent years. Species presence data used for predictive species distribution modelling are abundant in natural history collections, whereas reliable absence data are sparse, most notably for vagrant species such as butterflies and snakes. As predictive methods such as generalized linear models (GLM) require absence data, various strategies have been proposed to select pseudo-absence data. However, only a few studies exist that compare different approaches to generating these pseudo-absence data.
2. Natural history collection data are usually available for long periods of time (decades or even centuries), thus allowing historical considerations. However, this historical dimension has rarely been assessed in studies of species distribution, although there is great potential for understanding current patterns, i.e. the past is the key to the present.
3. We used GLM to model the distributions of three 'target' butterfly species, Melitaea didyma, Coenonympha tullia and Maculinea teleius, in Switzerland. We developed and compared four strategies for defining pools of pseudo-absence data and applied them to natural history collection data from the last 10, 30 and 100 years. Pools included: (i) sites without target species records; (ii) sites where butterfly species other than the target species were present; (iii) sites without butterfly species but with habitat characteristics similar to those required by the target species; and (iv) a combination of the second and third strategies. Models were evaluated and compared by the total deviance explained, the maximized Kappa and the area under the curve (AUC).
4. Among the four strategies, model performance was best for strategy 3. Contrary to expectations, strategy 2 resulted in even lower model performance than models with pseudo-absence data simulated totally at random (strategy 1).
5. Independent of the strategy, model performance was enhanced when sites with historical species presence data were not considered as pseudo-absence data. Therefore, the combination of strategy 3 with species records from the last 100 years achieved the highest model performance.
6. Synthesis and applications. The protection of suitable habitat for species survival or reintroduction in rapidly changing landscapes is a high priority among conservationists. Model-based approaches offer planning authorities the possibility of delimiting priority areas for species detection or habitat protection. The performance of these models can be enhanced by fitting them with pseudo-absence data relying on large archives of natural history collection species presence data rather than using randomly sampled pseudo-absence data.
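In practice, a presence/pseudo-absence GLM of the kind compared above is a binomial regression fitted to presences labelled 1 and pseudo-absences labelled 0, evaluated for instance by AUC. The file names and environmental predictors in this sketch are hypothetical, not the covariates used for the three butterfly species.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

# Hypothetical tables: presences from collection records and pseudo-absences
# drawn from one of the strategies above (column names are illustrative).
presences = pd.read_csv("presences.csv").assign(occ=1)
pseudo_abs = pd.read_csv("pseudo_absences_strategy3.csv").assign(occ=0)
data = pd.concat([presences, pseudo_abs], ignore_index=True)

# Binomial GLM of occurrence against environmental predictors.
fit = smf.glm("occ ~ temperature + precipitation + wetland_cover",
              data=data, family=sm.families.Binomial()).fit()

# AUC is one of the evaluation criteria mentioned in the abstract.
print(roc_auc_score(data["occ"], fit.predict(data)))
```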

Abstract:

Besides CYP2B6, other polymorphic enzymes contribute to efavirenz (EFV) interindividual variability. This study was aimed at quantifying the impact of multiple alleles on EFV disposition. Plasma samples from 169 human immunodeficiency virus (HIV) patients characterized for CYP2B6, CYP2A6, and CYP3A4/5 allelic diversity were used to build up a population pharmacokinetic model using NONMEM (non-linear mixed effects modeling), the aim being to seek a general approach combining genetic and demographic covariates. Average clearance (CL) was 11.3 l/h with a 65% interindividual variability that was explained largely by CYP2B6 genetic variation (31%). CYP2A6 and CYP3A4 had a prominent influence on CL, mostly when CYP2B6 was impaired. Pharmacogenetics fully accounted for ethnicity, leaving body weight as the only significant demographic factor influencing CL. Square roots of the numbers of functional alleles best described the influence of each gene, without interaction. Functional genetic variations in both principal and accessory metabolic pathways demonstrate a joint impact on EFV disposition. Therefore, dosage adjustment in accordance with the type of polymorphism (CYP2B6, CYP2A6, or CYP3A4) is required in order to maintain EFV within the therapeutic target levels.
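A covariate model of the form suggested by these results — additive (no-interaction) effects of the square roots of the numbers of functional alleles per gene, with body-weight scaling — might be sketched as follows; all coefficients are placeholders, not the published NONMEM estimates.

```python
import math

def efv_clearance(n_2b6, n_2a6, n_3a4, weight_kg,
                  theta0=2.0, th_2b6=3.0, th_2a6=1.5, th_3a4=1.5, wt_exp=0.75):
    """Illustrative efavirenz clearance model: additive effects of the square
    roots of the numbers of functional alleles of each gene, with allometric
    body-weight scaling. All thetas are made-up placeholders."""
    genetic = (theta0
               + th_2b6 * math.sqrt(n_2b6)
               + th_2a6 * math.sqrt(n_2a6)
               + th_3a4 * math.sqrt(n_3a4))
    return genetic * (weight_kg / 70.0) ** wt_exp

# A carrier of one functional CYP2B6 allele, fully functional CYP2A6/CYP3A4, 65 kg.
print(efv_clearance(n_2b6=1, n_2a6=2, n_3a4=2, weight_kg=65))
```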

Abstract:

This paper extends the Nelson-Siegel linear factor model by developing a flexible macro-finance framework for modeling and forecasting the term structure of US interest rates. Our approach is robust to parameter uncertainty and structural change, as we consider instabilities in parameters and volatilities, and our model averaging method allows for investors' model uncertainty over time. Our time-varying parameter Nelson-Siegel Dynamic Model Averaging (NS-DMA) predicts yields better than standard benchmarks and successfully captures plausible time-varying term premia in real time. The proposed model has significant in-sample and out-of-sample predictability for excess bond returns, and the predictability is of economic value.
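For reference, the Nelson-Siegel factor model underlying this framework expresses the yield at maturity τ as a level factor plus slope and curvature factors with exponential loadings. The sketch below evaluates the standard (Diebold-Li) parameterization with illustrative factor values, not estimates from the paper.

```python
import numpy as np

def nelson_siegel_yield(tau, beta1, beta2, beta3, lam):
    """Nelson-Siegel yield curve: level, slope and curvature factors with
    exponential loadings governed by the decay parameter lam."""
    x = lam * tau
    slope_loading = (1 - np.exp(-x)) / x
    curvature_loading = slope_loading - np.exp(-x)
    return beta1 + beta2 * slope_loading + beta3 * curvature_loading

# Illustrative factor values (not estimates from the paper).
maturities = np.array([0.25, 1, 2, 5, 10, 30])   # years
print(nelson_siegel_yield(maturities, beta1=0.05, beta2=-0.02, beta3=0.01, lam=0.6))
```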

Abstract:

Western European landscapes have changed drastically since the 1950s, with agricultural intensification and the spread of urban settlements considered the most important drivers of this land-use/land-cover change. Losses of habitat for fauna and flora have been a direct consequence of this development. In the present study, we relate butterfly occurrence to land-use/land-cover changes over five decades, between 1951 and 2000. The study area covers the entire Swiss territory. The 10 explanatory variables originate from agricultural statistics and censuses. Both states and rates of change were used as explanatory variables. Species distribution data were obtained from natural history collections. We selected eight butterfly species: four species occur on wetlands and four on dry grasslands. We used cluster analysis to track land-use/land-cover changes and to group communes based on similar trajectories of change. Generalized linear models were applied to identify factors that were significantly correlated with the persistence or disappearance of butterfly species. Results showed that decreasing agricultural areas and densities of farms with more than 10 ha of cultivated land are significantly related to wetland species decline, and increasing densities of livestock seem to have favored the disappearance of dry grassland species. Moreover, we show that species declines depend not only on land-use/land-cover states but also on the rates of change; that is, the higher the transformation rate from small to large farms, the higher the loss of dry grassland species. We suggest that more attention should be paid to the rates of landscape change as feasible drivers of species change and derive some management suggestions.

Abstract:

Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence-environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence-environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building 'underfit' models, with insufficient flexibility to describe observed occurrence-environment relationships, we risk misunderstanding the factors shaping species distributions. By building 'overfit' models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing underfitting with overfitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM-building approaches best advances our knowledge of current and future species ranges.

Abstract:

Background: The imatinib trough plasma concentration (C(min)) correlates with clinical response in cancer patients. Therapeutic drug monitoring (TDM) of plasma C(min) is therefore suggested. In practice, however, blood sampling for TDM is often not performed at trough. The corresponding measurement is thus only remotely informative about C(min) exposure. Objectives: The objectives of this study were to improve the interpretation of randomly measured concentrations by using a Bayesian approach for the prediction of C(min), incorporating correlation between pharmacokinetic parameters, and to compare the predictive performance of this method with alternative approaches, by comparing predictions with actual measured trough levels, and with predictions obtained by a reference method, respectively. Methods: A Bayesian maximum a posteriori (MAP) estimation method accounting for correlation (MAP-ρ) between pharmacokinetic parameters was developed on the basis of a population pharmacokinetic model, which was validated on external data. Thirty-one paired random and trough levels, observed in gastrointestinal stromal tumour patients, were then used for the evaluation of the Bayesian MAP-ρ method: individual C(min) predictions, derived from single random observations, were compared with actual measured trough levels for assessment of predictive performance (accuracy and precision). The method was also compared with alternative approaches: classical Bayesian MAP estimation assuming uncorrelated pharmacokinetic parameters, linear extrapolation along the typical elimination constant of imatinib, and non-linear mixed-effects modelling (NONMEM) first-order conditional estimation (FOCE) with interaction. Predictions of all methods were finally compared with 'best-possible' predictions obtained by a reference method (NONMEM FOCE, using both random and trough observations for individual C(min) prediction). Results: The developed Bayesian MAP-ρ method accounting for correlation between pharmacokinetic parameters allowed unbiased prediction of imatinib C(min) with a precision of ±30.7%. This predictive performance was similar for the alternative methods that were applied. The range of relative prediction errors was, however, smallest for the Bayesian MAP-ρ method and largest for the linear extrapolation method. When compared with the reference method, predictive performance was comparable for all methods. The time interval between random and trough sampling did not influence the precision of the Bayesian MAP-ρ predictions. Conclusion: Clinical interpretation of randomly measured imatinib plasma concentrations can be assisted by Bayesian TDM. Classical Bayesian MAP estimation can be applied even without consideration of the correlation between pharmacokinetic parameters. Individual C(min) predictions are expected to vary less with Bayesian TDM than with linear extrapolation. Bayesian TDM could be developed in the future for other targeted anticancer drugs and for the prediction of other pharmacokinetic parameters that have been correlated with clinical outcomes.
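The Bayesian MAP step described above amounts to minimizing a penalized least-squares objective: a residual term for the observed concentration plus a quadratic prior term whose covariance matrix carries the correlation between pharmacokinetic parameters (the 'ρ' in MAP-ρ). The sketch below illustrates this with a one-compartment oral model and made-up priors, dose, and observation; none of these numbers are the study's population estimates.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative one-compartment oral absorption model.
def conc(t, dose, cl, v, ka):
    ke = cl / v
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

mu = np.log([14.0, 250.0, 0.6])            # prior means of CL (L/h), V (L), ka (1/h)
omega = np.array([[0.10, 0.04, 0.00],      # prior covariance of the log-parameters;
                  [0.04, 0.12, 0.00],      # the off-diagonal terms encode the
                  [0.00, 0.00, 0.20]])     # correlation exploited by MAP-rho
omega_inv = np.linalg.inv(omega)
sigma = 0.25                               # proportional residual error

def neg_log_posterior(log_p, t_obs, c_obs, dose):
    cl, v, ka = np.exp(log_p)
    pred = conc(t_obs, dose, cl, v, ka)
    resid = np.sum(((c_obs - pred) / (sigma * pred)) ** 2)      # likelihood part
    prior = (log_p - mu) @ omega_inv @ (log_p - mu)             # correlated prior part
    return 0.5 * (resid + prior)

# Single random concentration (mg/L) observed 6 h post-dose (illustrative value).
res = minimize(neg_log_posterior, mu, args=(np.array([6.0]), np.array([1.8]), 400.0))
cl, v, ka = np.exp(res.x)
print("predicted trough (24 h):", conc(24.0, 400.0, cl, v, ka))
```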