907 results for Generalized linear mixed model
Abstract:
Imbalance and weakness of the serratus anterior (SA) and upper trapezius (UT) force couple have been described in patients with shoulder dysfunction. There is interest in identifying exercises that selectively activate these muscles and in including them in rehabilitation protocols. This study aims to verify the UT/SA electromyographic (EMG) amplitude ratio during different upper limb exercises performed on two bases of support. Twelve healthy men were tested (mean age = 22.8 ± 3.1 years), and surface EMG (sEMG) was recorded from the upper trapezius and serratus anterior using single differential surface electrodes. Volunteers performed isometric contractions over a stable base of support and on a Swiss ball during the wall push-up (WP), bench press (BP), and push-up (PU) exercises. All sEMG data are reported as a percentage of the root mean square or of the integral of the linear envelope relative to the maximal value obtained in one of three maximal voluntary contractions for each muscle studied. A linear mixed-effects model was used to compare UT/SA ratio values. The WP, BP, and PU exercises showed UT/SA ratio mean ± SD values of 0.69 ± 0.72, 0.14 ± 0.12, and 0.39 ± 0.37 on the stable surface, respectively, whereas on the unstable surface the values were 0.73 ± 0.67, 0.43 ± 0.39, and 0.32 ± 0.30. The results demonstrate that the UT/SA ratio was influenced both by the exercise and by the upper limb base of support. The practical implication is that BP on a stable surface is preferable to WP and PU on either surface for serratus anterior muscle training in patients with imbalance of the UT/SA force couple or serratus anterior weakness.
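As a hedged illustration of the analysis step, a linear mixed-effects model of this kind could be fitted in Python with statsmodels; the file name and column names below are hypothetical, and this is a sketch of the approach, not the authors' code.

```python
# A minimal sketch of the comparison described above, using statsmodels'
# MixedLM; data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per trial: normalized UT/SA EMG ratio, exercise (WP/BP/PU),
# surface (stable/swiss_ball), and a subject identifier.
df = pd.read_csv("emg_ratios.csv")

# Fixed effects for exercise, surface and their interaction; a random
# intercept per subject handles the repeated measures on each volunteer.
model = smf.mixedlm("ut_sa_ratio ~ exercise * surface", df, groups=df["subject"])
result = model.fit(reml=True)
print(result.summary())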
Abstract:
Background. Mucogingival alterations are inherent to clefts and may be worsened by the several plastic surgeries these individuals require. Objective. The aim of this study was to evaluate the prevalence, severity, and some possible etiologic factors of gingival recessions in teeth adjacent to the cleft. Study design. A total of 641 teeth (maxillary canines and central incisors) of 193 individuals with cleft lip and/or palate were examined. A generalized linear model was fitted, and the Wilcoxon test was used to compare recession across cleft types. Results. Comparison among cleft types as to the presence of recession revealed a statistically significant positive relationship for the maxillary right and left central incisors only in the group with left cleft lip, alveolus, and palate (P = .034). The most frequently affected tooth was the right maxillary canine (26.16%). Conclusion. The prevalence of recession in teeth close to the cleft was higher, although the recessions were not very severe. (Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2010;109:37-45)
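The abstract does not say which form of the Wilcoxon test was used; assuming the rank-sum (Mann-Whitney) variant for comparing independent cleft-type groups, the comparison step could look like the following sketch, with entirely hypothetical data.

```python
# A minimal sketch of the group comparison step, assuming the rank-sum
# (Mann-Whitney/Wilcoxon) form of the test; values are hypothetical.
from scipy.stats import mannwhitneyu

# Recession depth (mm) in teeth adjacent to two cleft types.
recession_cleft_a = [0.0, 1.0, 0.5, 2.0, 0.0, 1.5]
recession_cleft_b = [0.0, 0.0, 0.5, 1.0, 0.0, 0.0]

stat, p_value = mannwhitneyu(recession_cleft_a, recession_cleft_b,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```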
Abstract:
Historically, few articles have addressed the use of district-level mill production data for analysing the effect of varietal change on sugarcane productivity trends. This appears to be due to a lack of compiled district data sets and of appropriate methods by which to analyse such data. Recently, varietal data on tonnes of sugarcane per hectare (TCH), sugar content (CCS), and their product, tonnes of sugar content per hectare (TSH), have been compiled on a district basis. This study was conducted to develop a methodology for regular analysis of such data from mill districts to assess productivity trends over time, accounting for variety and variety × environment interaction effects, for 3 mill districts (Mulgrave, Babinda, and Tully) from 1958 to 1995. Restricted maximum likelihood (REML) methodology was used to analyse the district-level data, and best linear unbiased predictors (BLUPs) for random effects and best linear unbiased estimates (BLUEs) for fixed effects were computed in a mixed model analysis. In the combined analysis over districts, Q124 was the top-ranking variety for TCH, and Q120 was top-ranking for both CCS and TSH. Overall production for TCH increased over the 38-year period investigated. Some of this increase can be attributed to varietal improvement, although the predictors for TCH have shown little progress since the introduction of Q99 in 1976. Although smaller gains have been made in varietal improvement for CCS, overall production for CCS decreased over the 38 years due to non-varietal factors. Varietal improvement in TSH appears to have peaked in the mid-1980s. Overall production for TSH remained stable over time due to the varietal increase in TCH and the non-varietal decrease in CCS.
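For reference, the mixed model underlying this kind of REML analysis can be written in its standard textbook matrix form (not reproduced from the paper), with variety effects treated as random (hence BLUPs) and non-varietal effects as fixed (hence BLUEs):

```latex
\[
\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{u} + \mathbf{e},
\qquad
\mathbf{u} \sim N(\mathbf{0},\mathbf{G}), \quad
\mathbf{e} \sim N(\mathbf{0},\mathbf{R}),
\]
\[
\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\top}\mathbf{V}^{-1}\mathbf{X})^{-}\,\mathbf{X}^{\top}\mathbf{V}^{-1}\mathbf{y}
\ \text{(BLUE)},
\qquad
\hat{\mathbf{u}} = \mathbf{G}\mathbf{Z}^{\top}\mathbf{V}^{-1}(\mathbf{y}-\mathbf{X}\hat{\boldsymbol{\beta}})
\ \text{(BLUP)},
\]
```

where V = ZGZ' + R, and the variance components in G and R are estimated by REML.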
Abstract:
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, the last a relatively new idea that allows individual assessment of predictions. The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, in which non-parametric methods such as decision trees and generalized additive models are used to identify important variables and their relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. The paper is motivated by a medical problem in which interest centres on developing a risk stratification system for the morbidity of 1,710 cardiac patients, given a suite of demographic, clinical and preoperative variables. Although the methods are applied specifically to this case study, they can be applied in any field, irrespective of the type of response.
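The two-step modelling idea could be sketched as below: a non-parametric screen followed by an interpretable parametric model. This is a hedged illustration, not the authors' procedure; the file name, column names and thresholds are hypothetical, and predictors are assumed numeric.

```python
# A minimal sketch of the two-step modelling process described above:
# a decision-tree screen, then a parametric predictive model.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("cardiac.csv")            # hypothetical dataset
X, y = df.drop(columns="morbidity"), df["morbidity"]

# Step 1: flag candidate explanatory variables with a shallow tree.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
ranked = sorted(zip(tree.feature_importances_, X.columns), reverse=True)
selected = [name for importance, name in ranked if importance > 0.0]

# Step 2: fit an interpretable parametric model on the screened variables.
final_model = LogisticRegression(max_iter=1000).fit(X[selected], y)
print(selected, final_model.coef_)
```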
Abstract:
The management of energy resources for islanded operation is of crucial importance for the successful use of renewable energy sources. A Virtual Power Producer (VPP) can operate the resources optimally, taking maintenance, operation and load control into account and considering all the costs involved. This paper presents a methodological approach to formulate and solve the problem of determining the optimal resource allocation, applied to a real case study at Budapest Tech. The problem is formulated as a mixed-integer linear programming (MILP) model and solved by a deterministic CPLEX-based optimization technique implemented in the General Algebraic Modeling System (GAMS). The problem has also been solved by Evolutionary Particle Swarm Optimization (EPSO). The obtained results are presented and compared.
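To make the shape of such a formulation concrete, here is a deliberately small MILP sketch in Python with PuLP (the paper itself uses GAMS/CPLEX); the generators, costs, capacities and single-period structure are all hypothetical.

```python
# A minimal single-period resource-allocation MILP in PuLP.
import pulp

generators = {"wind": (0.02, 50), "pv": (0.01, 30), "diesel": (0.12, 100)}
demand = 120  # kW to be supplied in this period (hypothetical)

prob = pulp.LpProblem("resource_allocation", pulp.LpMinimize)
power = {g: pulp.LpVariable(f"p_{g}", 0, cap) for g, (_, cap) in generators.items()}
on = {g: pulp.LpVariable(f"u_{g}", cat="Binary") for g in generators}

# Objective: total generation cost.
prob += pulp.lpSum(cost * power[g] for g, (cost, _) in generators.items())
# Meet demand; couple dispatch to the binary commitment variables.
prob += pulp.lpSum(power.values()) == demand
for g, (_, cap) in generators.items():
    prob += power[g] <= cap * on[g]

prob.solve()
print({g: power[g].value() for g in generators})
```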
Abstract:
In the energy management of a small power system, the scheduling of the generation units is a crucial problem for which adequate methodologies can maximize the performance of the energy supply. This paper proposes an innovative methodology for distributed energy resources management. The optimal operation of distributed generation (DG), demand response and storage resources is formulated as a mixed-integer linear programming (MILP) model and solved by a deterministic CPLEX-based optimization technique implemented in the General Algebraic Modeling System (GAMS). The paper sets out a vision for the grids of the future, focusing on conceptual and operational aspects of electrical grids characterized by an intensive penetration of DG, in the scope of competitive environments, and using artificial intelligence methodologies to attain the envisaged goals. These concepts are implemented in a computational framework which includes both grid and market simulation.
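The storage part of such a MILP is the piece that links periods together. The sketch below shows one common way to express the state-of-charge balance, again in PuLP rather than GAMS; the horizon, tariff, efficiency and capacity figures are hypothetical.

```python
# A minimal storage-scheduling sketch with a state-of-charge balance.
import pulp

T = range(24)                  # hourly periods (hypothetical horizon)
capacity, p_max, eff = 10.0, 3.0, 0.9

prob = pulp.LpProblem("storage_schedule", pulp.LpMinimize)
charge = pulp.LpVariable.dicts("charge", T, 0, p_max)
discharge = pulp.LpVariable.dicts("discharge", T, 0, p_max)
soc = pulp.LpVariable.dicts("soc", T, 0, capacity)

price = [0.05 if t < 7 else 0.15 for t in T]  # hypothetical tariff
# Objective: cost of charging minus revenue from discharging.
prob += pulp.lpSum(price[t] * (charge[t] - discharge[t]) for t in T)

# State-of-charge balance linking consecutive periods. (A binary variable
# could forbid simultaneous charge/discharge; omitted for brevity.)
for t in T:
    prev = soc[t - 1] if t > 0 else 0.5 * capacity  # assumed initial SOC
    prob += soc[t] == prev + eff * charge[t] - discharge[t] / eff

prob.solve()
```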
Abstract:
Master's dissertation, Integrated Ocean Studies, 25 March 2013, Universidade dos Açores.
Abstract:
Environmental pollution continues to be an emerging field of study, as there are thousands of anthropogenic compounds mixed in the environment whose possible mechanisms of toxicity and physiological outcomes are of great concern. Developing methods to assess and prioritize the screening of these compounds at trace levels, in order to support regulatory efforts, is therefore very important. A methodology based on solid-phase extraction followed by derivatization and gas chromatography-mass spectrometry (GC-MS) analysis was developed for the assessment of four endocrine disrupting compounds (EDCs) in water matrices: bisphenol A, estrone, 17β-estradiol and 17α-ethinylestradiol. The study was performed simultaneously by two different laboratories in order to evaluate the robustness of the method and to increase the quality control over its application in routine analysis. Validation was done according to the International Conference on Harmonisation recommendations and other international guidelines with specifications for the GC-MS methodology. Matrix-induced chromatographic response enhancement was avoided by using matrix-matched calibration solutions, and heteroscedasticity was overcome by applying a weighted least squares linear regression model. Consistent evaluation of key analytical parameters such as extraction efficiency, sensitivity, specificity, linearity, limits of detection and quantification, precision, accuracy and robustness was done in accordance with established acceptance standards. Finally, the application of the optimized method to the assessment of the selected analytes in environmental samples suggested that it is an expeditious methodology for routine analysis of EDC residues in water matrices.
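A weighted least squares calibration fit of the kind mentioned could look like the sketch below; the 1/x² weighting shown is one common choice for heteroscedastic chromatographic data and is an assumption here, as are the concentration and peak-area values.

```python
# A minimal weighted least squares calibration sketch with statsmodels.
import numpy as np
import statsmodels.api as sm

conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0])        # standards, ng/mL
area = np.array([0.011, 0.021, 0.098, 0.20, 1.01, 2.05])  # peak-area ratios

X = sm.add_constant(conc)
# Down-weight the high-concentration standards so low levels fit well.
wls = sm.WLS(area, X, weights=1.0 / conc**2).fit()
print(wls.params)  # intercept and slope of the calibration line
```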
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This work is based on a real case study of planning storage operations in a rural grain silo and falls within the class of warehouse planning and scheduling problems. Schedulers face the daily problem of finding the best transfer plan between storage cells, trying to maximize the number of empty cells so as to have more capacity to receive new lots, while respecting the reception and dispatch constraints and the capacity constraints of the transport lines. A mixed-integer linear programming mathematical model was developed, together with an Excel application using VBA for its implementation. This implementation covered the whole process of the activity in question, from data collection, treatment and analysis through to the final solution for distributing the various products among the cells. The results obtained show that the model optimizes the number of empty cells, taking into account both the products already stored and those to be received and dispatched, in computation times under 60 seconds, thus constituting an important asset for the company in question.
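The core cell-emptying objective can be illustrated with the following deliberately simplified PuLP sketch (the dissertation's model is richer, with reception/dispatch and conveyor-line constraints); the lots, cells and capacities are hypothetical, and each lot is assumed to go whole into a single cell.

```python
# A minimal sketch of maximizing empty cells via a binary assignment MILP.
import pulp

lots = {"wheat_1": 40, "corn_1": 70, "corn_2": 30}   # tonnes per lot
cells = {"c1": 80, "c2": 80, "c3": 80}               # cell capacities

prob = pulp.LpProblem("empty_cells", pulp.LpMaximize)
assign = pulp.LpVariable.dicts("x", (lots, cells), cat="Binary")
empty = pulp.LpVariable.dicts("empty", cells, cat="Binary")

prob += pulp.lpSum(empty.values())  # maximize the number of empty cells
for l in lots:   # every lot goes to exactly one cell
    prob += pulp.lpSum(assign[l][c] for c in cells) == 1
for c in cells:  # capacity; a cell holding any lot cannot count as empty
    prob += pulp.lpSum(lots[l] * assign[l][c] for l in lots) <= cells[c]
    for l in lots:
        prob += empty[c] <= 1 - assign[l][c]

prob.solve()
print([c for c in cells if empty[c].value() == 1])
```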
Abstract:
Proceedings of International Conference - SPIE 7477, Image and Signal Processing for Remote Sensing XV - 28 September 2009
Abstract:
OBJECTIVE: To evaluate the growth parameters of infants born to HIV-1-infected mothers. METHODS: The study was a longitudinal evaluation of the z-scores for weight-for-age (WAZ), weight-for-length (WLZ) and length-for-age (LAZ) collected from a cohort. A total of 97 non-infected and 33 HIV-infected infants born to HIV-1-infected mothers in Belo Horizonte, Southeastern Brazil, between 1995 and 2003 were studied. The average follow-up period for the infected and non-infected children was 15.8 months (range: 6.8 to 18.0 months) and 14.3 months (range: 6.3 to 18.6 months), respectively. A mixed-effects linear regression model was used and was fitted by restricted maximum likelihood (REML). RESULTS: A decrease over time in the WAZ, LAZ and WLZ was observed among the infected infants. At six months of age, the mean differences in the WAZ, LAZ and WLZ between the HIV-infected and non-infected infants were 1.02, 0.59, and 0.63 standard deviations, respectively. At 12 months, the mean differences were 1.15, 1.01, and 0.87 standard deviations, respectively. CONCLUSIONS: The early and increasing deterioration of the HIV-infected infants' anthropometric indicators demonstrates the importance of the early identification of HIV-infected infants at nutritional risk and of the continuous assessment of nutritional interventions for these infants.
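In generic form, a random-coefficients model of this kind, written here for the WAZ of child i at age t_ij in our own notation (the authors' exact specification is not given in the abstract), would be:

```latex
\[
\mathrm{WAZ}_{ij} = \beta_0 + \beta_1 t_{ij} + \beta_2\,\mathrm{HIV}_i
+ \beta_3\,(t_{ij} \times \mathrm{HIV}_i) + b_{0i} + b_{1i} t_{ij} + \varepsilon_{ij},
\]
```

with child-specific random intercepts and slopes (b0i, b1i) whose variance components are estimated by REML; the interaction coefficient β3 captures the widening gap between infected and non-infected infants over time.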
Abstract:
OBJECTIVE: To analyze the coverage of a cervical cancer screening program in a city with a high incidence of the disease, as well as the factors associated with non-adherence to the current preventive program. METHODS: A cross-sectional study based on household surveys was conducted. The sample was composed of women between 25 and 59 years of age in the city of Boa Vista, RR, Northern Brazil, covered by the cervical cancer screening program. The cluster sampling method was used. The dependent variable was participation in a women's health program, defined as undergoing at least one Pap smear in the 36 months prior to the interview; the explanatory variables were extracted from individual data. A generalized linear model was used. RESULTS: 603 women were analyzed, with a mean age of 38.2 years (SD = 10.2). Five hundred and seventeen women underwent the screening test, and the prevalence of adherence in the last three years was 85.7% (95%CI 82.5;88.5). In the multivariate analysis, a high per capita household income and a recent medical consultation were associated with a lower rate of not being tested. Lack of knowledge of the disease, its causes, and its prevention methods was associated with higher odds of non-adherence to the screening system; 20.0% of the women reported having undergone opportunistic rather than routine screening. CONCLUSIONS: The reported level of coverage is high, exceeding the level recommended for the control of cervical cancer. The preventive program appears to be opportunistic in nature, particularly for the most vulnerable women (those with low income and little information on the disease). Studies on the diagnostic quality of cervicovaginal cytology and on therapeutic schedules for positive cases are necessary for understanding the barriers to the control of cervical cancer.
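A generalized linear model for a binary adherence outcome of this kind could be fitted as in the sketch below, using a binomial GLM with logit link in statsmodels; the file name and variable names are hypothetical, and the abstract does not state which link function the authors used.

```python
# A minimal binomial GLM sketch for the non-adherence outcome.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # one row per woman (hypothetical file)

# Outcome: no Pap smear in the 36 months prior to the interview.
model = smf.glm(
    "non_adherent ~ income_per_capita + recent_consultation + knows_prevention",
    data=df,
    family=sm.families.Binomial(),
)
print(model.fit().summary())
```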
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of the observed data into statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
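The linear mixing model and the orthogonal subspace projection idea can be condensed into a few lines of numpy, as in the sketch below; the number of bands, the number of endmembers and the random signatures are hypothetical placeholders.

```python
# A minimal numpy sketch of the linear mixing model and orthogonal
# subspace projection (OSP).
import numpy as np

rng = np.random.default_rng(0)
L, p = 224, 3                      # spectral bands, endmembers (hypothetical)
M = rng.random((L, p))             # endmember signature matrix (columns)

a = rng.dirichlet(np.ones(p))      # abundances: nonnegative, sum to one
y = M @ a + 0.01 * rng.standard_normal(L)   # observed pixel spectrum

# Project out the "undesired" signatures (all but the first endmember),
# then correlate the residual with the signature of interest.
U = M[:, 1:]                               # undesired signatures
P = np.eye(L) - U @ np.linalg.pinv(U)      # projector onto their orthogonal complement
detector = M[:, 0] @ P @ y                 # large when endmember 0 is present
print(detector)
```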
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, the source densities and noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises IFA performance, as in the ICA case.

Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
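The kind of simulated experiment discussed here, ICA applied to linear mixtures whose sources violate the independence assumption through the sum-to-one constraint, can be reproduced in a few lines with scikit-learn's FastICA; this is an illustrative setup, not the chapter's own code, and the scene dimensions are hypothetical.

```python
# A minimal sketch: FastICA applied to simulated sum-constrained mixtures.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_pixels, L, p = 5000, 50, 3
M = rng.random((L, p))                         # endmember signatures
A = rng.dirichlet(np.ones(p), size=n_pixels)   # abundances sum to one per pixel
Y = A @ M.T + 0.01 * rng.standard_normal((n_pixels, L))

# ICA treats the p abundance fractions as independent sources; the
# sum-to-one constraint violates that assumption, which is exactly the
# limitation the chapter analyzes.
ica = FastICA(n_components=p, random_state=0)
S = ica.fit_transform(Y)   # estimated "sources" (abundance estimates)
```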
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology for blindly unmixing hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
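In generic terms, and in our own notation, the blind unmixing model sketched above can be summarized as:

```latex
\[
\mathbf{y} = \mathbf{M}\boldsymbol{\alpha} + \mathbf{n},
\qquad
\boldsymbol{\alpha} \sim \sum_{k} \epsilon_k\,\mathrm{Dir}(\boldsymbol{\alpha} \mid \boldsymbol{\theta}_k),
\qquad
\alpha_i \ge 0, \quad \sum_{i} \alpha_i = 1,
\]
```

so the Dirichlet mixture prior on the abundance vector enforces positivity and full additivity by construction, while the mixing matrix M is estimated with the EM-type algorithm mentioned above.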