907 results for General Linear Methods
Abstract:
In this paper an algorithm for calculating the root locus of fractional linear systems is presented. The proposed algorithm takes advantage of present-day computational resources and processes the characteristic equation directly, avoiding the limitations of standard methods. The results demonstrate good performance for different types of expressions.
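The core idea of processing the characteristic equation directly can be sketched as follows. This is a toy illustration, not the paper's algorithm: it sweeps the gain of a hypothetical integer-order loop 1 + K/(s(s + 2)) = 0, whereas the paper handles fractional-order expressions.

```python
import cmath

# Root-locus sketch: sweep the gain K and solve the closed-loop
# characteristic equation directly for its roots. Example loop
# 1 + K/(s*(s + 2)) = 0, i.e. s**2 + 2*s + K = 0 (hypothetical plant).

def root_locus_points(gains):
    points = []
    for K in gains:
        disc = cmath.sqrt(4 - 4 * K)  # discriminant of s**2 + 2*s + K
        points.append(((-2 + disc) / 2, (-2 - disc) / 2))
    return points

locus = root_locus_points([0.0, 1.0, 2.0])
# K = 0 gives the open-loop poles 0 and -2; K = 1 a double root at -1;
# K = 2 the complex pair -1 +/- 1j.
```

For a fractional characteristic equation one would substitute a root-finding step able to handle non-integer powers of s, which is precisely the limitation of polynomial-based standard methods that the paper addresses.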
Abstract:
OBJECTIVE To analyze whether the association between tooth loss and obesity among older adults differs by gender. METHODS We analyzed data on 1,704 older adults (60 years and over) from the baseline of a prospective cohort study conducted in Florianopolis, SC, Southern Brazil. Multivariable logistic regression models were used to assess the association between tooth loss and general and central obesity after adjustment for confounders (age, gender, skin color, educational attainment, income, smoking, physical activity, use of dentures, hypertension, and diabetes). Linear regression models were also fitted with body mass index and waist circumference as continuous outcomes. Interaction between gender and tooth loss was further assessed. RESULTS Overall mean body mass index was 28.0 kg/m2. Mean waist circumference was 96.8 cm for males and 92.6 cm for females. Increasing tooth loss was positively associated with increased body mass index and waist circumference after adjustment for confounders. Edentates had 1.4 (95%CI 1.1;1.9) times higher odds of being centrally obese than individuals with a higher number of teeth; however, the association lost significance after adjustment for confounders. In comparison with edentate males, edentate females presented a twofold higher adjusted prevalence of general and central obesity. In the joint effects model, edentate females had 3.8 (95%CI 2.2;6.6) times higher odds of being centrally obese in comparison with males with more than 10 teeth present in both arches. Similarly, females with fewer than 10 teeth in at least one arch had 2.7 (95%CI 1.6;4.4) times higher odds of having central obesity in comparison with males with more than 10 teeth present in both arches. CONCLUSIONS Central obesity was more prevalent than general obesity among the older adults. We did not observe any association between general obesity and tooth loss. The association between central obesity and tooth loss depends on gender: females with tooth loss had a greater probability of being obese.
Abstract:
OBJECTIVE To analyze whether maternal mental health is associated with infant nutritional status at six months of age. METHODS A cross-sectional study of 228 six-month-old infants who used primary health care units of the city of Rio de Janeiro, Southeastern Brazil. Mean weight-for-length and mean weight-for-age were expressed in z-scores based on the 2006 World Health Organization reference curves. Maternal mental health was measured by the 12-item General Health Questionnaire. The following cutoff points were used: ≥ 3 for common mental disorders, ≥ 5 for more severe mental disorders, and ≥ 9 for depression. The statistical analysis employed adjusted linear regression models. RESULTS The prevalence of common mental disorders, more severe mental disorders, and depression was 39.9%, 23.7%, and 8.3%, respectively. Children of women with more severe mental disorders had, on average, a weight-for-length 0.37 z-scores lower than children of women without these disorders (p = 0.026). We also observed that the weight-for-length indicator of children of depressed mothers was, on average, 0.67 z-scores lower than that of children of nondepressed women (p = 0.010). Maternal depression was also associated with lower mean weight-for-age z-scores (p = 0.041). CONCLUSIONS Poorer maternal mental health is associated with poorer nutritional status of infants at six months of age.
Abstract:
The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing: the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation method that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. The first approach faces two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which does not hold for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, the source densities and noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises IFA performance, as in the ICA case. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information may be very far from the true one. Nevertheless, some abundance fractions may be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometry-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
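As a concrete illustration of the linear problem described above: under the linear mixing model a pixel spectrum is y = Ma + n, with abundances a nonnegative and summing to one. A minimal sketch (hypothetical two-endmember signatures, not data from the chapter) solves the sum-to-one constrained least-squares problem in closed form:

```python
# Linear mixing model sketch: y = a1*m1 + (1 - a1)*m2 for two endmembers,
# with abundances nonnegative and summing to one. The signatures m1, m2
# below are hypothetical 4-band spectra, purely for illustration.

def unmix_two(y, m1, m2):
    """Sum-to-one least squares for two endmembers: since
    y - m2 = a1*(m1 - m2) + n, project y - m2 onto m1 - m2."""
    d = [a - b for a, b in zip(m1, m2)]
    r = [a - b for a, b in zip(y, m2)]
    a1 = sum(di * ri for di, ri in zip(d, r)) / sum(di * di for di in d)
    a1 = min(1.0, max(0.0, a1))  # enforce nonnegativity of both abundances
    return a1, 1.0 - a1

m1 = [0.9, 0.8, 0.2, 0.1]  # hypothetical endmember spectra (4 bands)
m2 = [0.1, 0.2, 0.7, 0.9]
y = [0.7 * a + 0.3 * b for a, b in zip(m1, m2)]  # noise-free mixture
a1, a2 = unmix_two(y, m1, m2)  # recovers abundances 0.7 and 0.3
```

With more endmembers the same idea becomes a constrained least-squares problem over the simplex, which is where the geometric, ICA/IFA, and Dirichlet-based approaches discussed in this chapter come in.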
Abstract:
Dissertation presented to obtain the degree of Doutor in Biochemistry from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia. This dissertation was prepared under the existing bilateral agreement between the Universidade Nova de Lisboa and the Universidade de Vigo.
Abstract:
Publicationes Mathematicae Debrecen
Abstract:
Submitted in partial fulfillment of the requirements for the degree of PhD in Mathematics, in the speciality of Statistics, at the Faculdade de Ciências e Tecnologia
Public sector size and economic growth: a nonlinear relationship in the European Union of 15?
Abstract:
The Member States of the European Union have sought to reduce the size of public administration in the economy while making it much more efficient, so as to promote economic growth. This article analyzes the relationship between public expenditure and economic growth in 14 Member States of the European Union of 15, with the objective of determining the optimal size of government, using the Armey Curve as the theoretical basis. The results, for the period 1965-2007, suggest a growth-maximizing public sector size of 47.37% and 22.17% of GDP, when measured by total public expenditure and by public consumption, respectively.
Abstract:
This work measures and compares the antioxidant capacity (AC) of 50 commercial beverages of different kinds: 6 wines, 12 beers, 18 soft drinks, and 14 flavoured waters. Because there is no established reference procedure for this purpose, three different optical methods were used to analyse the samples: total radical-trapping antioxidant parameter (TRAP), Trolox equivalent antioxidant capacity (TEAC), and ferric ion reducing antioxidant parameter (FRAP). These methods differ in their chemical background and the nature of the redox system: the TRAP method involves the transfer of hydrogen atoms, while TEAC and FRAP involve electron transfer reactions. The AC was also assessed against three reference antioxidants: ascorbic acid (AA), gallic acid (GA), and 6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid (Trolox). The results were analyzed statistically. One-way ANOVA tests applied to all results suggested that methods and standards exhibited statistically significant differences. The possible effect on the AC of sample features such as gas, flavours, food colouring, sweeteners, acidity regulators, preservatives, stabilizers, vitamins, juice percentage, alcohol percentage, added antioxidants, and colour was also investigated. The AC levels seemed to change with brand, kind of antioxidants added, and kind of flavour, depending on the sample. In general, the highest AC values were obtained with the FRAP method and for beer among the kinds of sample, and GA was the standard expressing the smallest AC values.
Abstract:
Background: Little is known about the risk of progression to hazardous alcohol use in people currently drinking within safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. Methods: A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6,193 European and 2,462 Chilean attendees recorded AUDIT scores below 8 in men and below 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking, defined by an AUDIT score >= 8 in men and >= 5 in women. Results: 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome, and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and a Hedges' g of 0.68 (95% CI 0.57, 0.78). Conclusions: The predictAL risk model for the development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in the prevention of alcohol misuse.
Abstract:
New arguments are presented proving that successive (repeated) measurements have a memory and actually remember each other. Recognition of this peculiarity can essentially change the existing paradigm associated with conventional observation of the behavior of different complex systems and leads towards the application of an intermediate model (IM). This IM can provide a very accurate fit of the measured data in terms of Prony's decomposition. This decomposition, in turn, contains a small set of fitting parameters relative to the number of initial data points and allows comparison of measured data in cases where a "best fit" model based on some specific physical principles is absent. As an example, we consider two X-ray diffractometers (referred to in the paper as A ("cheap") and B ("expensive")) that are used, after proper calibration, for measuring the same substance (corundum, α-Al2O3). The amplitude-frequency response (AFR) obtained within Prony's decomposition can be used to compare the spectra recorded from the (A) and (B) X-ray diffractometers (XRDs) for calibration and other practical purposes. We also prove that the Fourier decomposition corresponds to an "ideal" experiment without memory, while Prony's decomposition corresponds to a real measurement and can, in this case, be fitted within the IM. New statistical parameters describing the properties of experimental equipment (irrespective of its internal "filling") are found. The suggested approach is rather general and can be used for the calibration and comparison of different complex dynamical systems for practical purposes.
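The Prony decomposition underlying the IM represents a signal as a sum of exponential modes. A toy two-mode illustration (not the authors' implementation, which fits many terms to real diffractometer data) recovers the modes by linear prediction and then solves for the amplitudes:

```python
import cmath

# Toy Prony decomposition: recover two exponential modes
#   y[k] = c1*r1**k + c2*r2**k
# from four samples via linear prediction. Purely illustrative.

def prony2(y):
    # Linear prediction y[k+2] = a1*y[k+1] + a2*y[k]; solve the 2x2 system
    # given by k = 0 and k = 1.
    det = y[1] * y[1] - y[0] * y[2]
    a1 = (y[2] * y[1] - y[0] * y[3]) / det
    a2 = (y[1] * y[3] - y[2] * y[2]) / det
    # The modes are the roots of r**2 - a1*r - a2 = 0.
    disc = cmath.sqrt(a1 * a1 + 4 * a2)
    r1, r2 = (a1 + disc) / 2, (a1 - disc) / 2
    # Amplitudes from y[0] = c1 + c2 and y[1] = c1*r1 + c2*r2.
    c1 = (y[1] - y[0] * r2) / (r1 - r2)
    return (c1, r1), (y[0] - c1, r2)

samples = [2 * 0.9**k + 1 * 0.5**k for k in range(4)]
(c1, r1), (c2, r2) = prony2(samples)  # recovers modes 0.9 and 0.5
```

With complex modes r = exp(λ + iω), the same machinery yields damped oscillations, which is what makes the decomposition suitable for the amplitude-frequency response comparisons described above.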
Abstract:
OBJECTIVE: The objective of the study was to develop a model for estimating patient 28-day in-hospital mortality using 2 different statistical approaches. DESIGN: The study was designed to develop an outcome prediction model for 28-day in-hospital mortality using (a) logistic regression with random effects and (b) a multilevel Cox proportional hazards model. SETTING: The study involved 305 intensive care units (ICUs) from the basic Simplified Acute Physiology Score (SAPS) 3 cohort. PATIENTS AND PARTICIPANTS: Patients (n = 17138) were from the SAPS 3 database with follow-up data pertaining to the first 28 days in hospital after ICU admission. INTERVENTIONS: None. MEASUREMENTS AND RESULTS: The database was divided randomly into 5 roughly equal-sized parts (at the ICU level). It was thus possible to run the model-building procedure 5 times, each time taking four fifths of the sample as a development set and the remaining fifth as the validation set. At 28 days after ICU admission, 19.98% of the patients were still in the hospital. Because of the different sampling space and outcome variables, both models presented a better fit in this sample than did the SAPS 3 admission score calibrated to vital status at hospital discharge, both on the general population and in major subgroups. CONCLUSIONS: Both statistical methods can be used to model the 28-day in-hospital mortality better than the SAPS 3 admission model. However, because the logistic regression approach is specifically designed to forecast 28-day mortality, and given the high uncertainty associated with the assumption of the proportionality of risks in the Cox model, the logistic regression approach proved to be superior.
Abstract:
Dissertation to obtain the degree of Mestre in Electrical and Computer Engineering
Abstract:
INTRODUCTION: Methicillin-resistant Staphylococcus aureus (MRSA) presenting reduced susceptibility to vancomycin has been associated with therapeutic failure. Some methods used by clinical laboratories may not be sufficiently accurate to detect this phenotype, compromising results and the outcome for the patient. OBJECTIVES: To evaluate the performance of methods for detecting vancomycin MIC values among clinical isolates of MRSA. MATERIAL AND METHODS: The vancomycin minimal inhibitory concentration (MIC) was determined for 75 MRSA isolates from inpatients of Mãe de Deus Hospital, Porto Alegre, Brazil. Broth microdilution (BM) was used as the gold-standard technique, along with the following methods: E-test® strips (BioMérieux), M.I.C.E® strips (Oxoid), the PROBAC® commercial panel, and the automated system MicroScan® (Siemens). In addition, the agar screening test was carried out with 3 µg/mL of vancomycin. RESULTS: All isolates presented MIC ≤ 2 µg/mL by BM. E-test® had the highest global agreement (40%) with the gold standard, and there was no statistically significant difference between E-test® and broth microdilution results. PROBAC® panels presented MICs generally lower than the gold standard (58.66% major errors), while M.I.C.E.® MICs were higher (67.99% minor errors). CONCLUSIONS: For the MRSA population in question, E-test® presented the best performance, although with heterogeneous accuracy depending on MIC values.
Abstract:
Dissertation presented to obtain the degree of Doutor in Environmental Engineering from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia