959 results for generalized canonical correlation analysis
Abstract:
This paper reports on the analysis of tidal breathing patterns measured during noninvasive forced oscillation lung function tests in six subject groups. The three adult groups were healthy, with prediagnosed chronic obstructive pulmonary disease, and with prediagnosed kyphoscoliosis, respectively. The three children's groups were healthy, with prediagnosed asthma, and with prediagnosed cystic fibrosis, respectively. The analysis is applied to the pressure–volume curves and the pseudophase-plane loop by means of the box-counting method, which gives a measure of the area within each loop. The objective was to verify whether a link exists between the area of the loops, power-law patterns, and alterations in the respiratory structure with disease. We obtained statistically significant variations between the data sets corresponding to the six groups of patients, also showing the existence of power-law patterns. Our findings support the idea that the respiratory system changes with disease in terms of airway geometry and tissue parameters, leading, in turn, to variations in the fractal dimension of the respiratory tree and its dynamics.
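A minimal sketch of the box-counting measurement described above, applied to a synthetic stand-in loop (illustrative only, not the authors' implementation; the helper name is hypothetical):

```python
# Approximate the area inside a closed loop by counting grid boxes
# whose centres fall inside the polygon traced by the loop.
import numpy as np
from matplotlib.path import Path

def loop_area_boxcount(loop_pts, box_size):
    """Count box_size x box_size boxes with centres inside the loop."""
    path = Path(loop_pts)
    xmin, ymin = loop_pts.min(axis=0)
    xmax, ymax = loop_pts.max(axis=0)
    xs = np.arange(xmin, xmax, box_size) + box_size / 2
    ys = np.arange(ymin, ymax, box_size) + box_size / 2
    grid = np.array([(x, y) for x in xs for y in ys])
    return path.contains_points(grid).sum() * box_size ** 2

# Ellipse standing in for a pressure-volume loop; true area = 2*pi.
t = np.linspace(0, 2 * np.pi, 2000)
loop = np.column_stack([2 * np.cos(t), np.sin(t)])
print(loop_area_boxcount(loop, 0.02))   # ~6.28
```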
Abstract:
We propose a graphical method to visualize possible time-varying correlations between fifteen stock market values. The method is useful for observing stable or emerging clusters of stock markets with similar behaviour. The graphs, obtained by applying multidimensional scaling (MDS) techniques, may also guide the construction of multivariate econometric models.
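A minimal sketch of the idea, assuming daily returns as input; the data below are random placeholders, and the correlation-to-distance mapping d = sqrt(2(1 - rho)) is one common choice:

```python
# Embed 15 markets in the plane with MDS from a correlation-based distance.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
returns = rng.normal(size=(250, 15))        # 250 days x 15 markets (placeholder)
rho = np.corrcoef(returns, rowvar=False)    # 15 x 15 correlation matrix
dist = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)            # one 2-D point per market
print(coords.shape)                         # (15, 2); nearby points = similar markets
```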
Abstract:
This paper presents the pseudo phase plane (PPP) method for detecting the existence of a nanofilm on the nitroazobenzene-modified glassy carbon electrode (NAB-GC) system. The modified electrode systems and the nitroazobenzene nanofilm were prepared by the electrochemical reduction of the diazonium salt of NAB at glassy carbon electrodes (GCE) in nonaqueous media. The IR spectra of the bare GCE, the NAB-GC electrode system, and the organic NAB film were recorded. The IR data of the bare GC, NAB-GC, and NAB film were categorized into four series consisting of FILM1, GC-NAB1, GC1; FILM2, GC-NAB2, GC2; FILM3, GC-NAB3, GC3; and FILM4, GC-NAB4, GC4, respectively. The PPP approach was applied to each group of data from the unmodified and nanofilm-modified electrode systems. The results provided by the PPP method show the existence of the NAB film on the modified GC electrode.
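A minimal sketch of how a pseudo phase plane is constructed, assuming a sampled signal (the signal and delay here are illustrative):

```python
# Pseudo phase plane: plot a signal against a delayed copy of itself.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 2000)
x = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

tau = 50   # delay in samples; often chosen from the first minimum
           # of the autocorrelation or mutual information
plt.plot(x[:-tau], x[tau:], lw=0.5)
plt.xlabel("x(t)")
plt.ylabel("x(t + tau)")
plt.title("Pseudo phase plane")
plt.show()
```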
Abstract:
This paper studies the impact of energy and stock markets upon electricity markets using multidimensional scaling (MDS). Historical values from major energy, stock, and electricity markets are adopted. To analyze the data, several graphs produced by MDS are presented and discussed. The method provides deeper insight into the behavior and correlation of the markets. The results may also guide the construction of models, helping electricity market agents hedge against Market Clearing Price (MCP) volatility and, simultaneously, achieve better financial results.
Abstract:
OBJECTIVE: To evaluate the individual and contextual determinants of the use of health care services in the metropolitan region of Sao Paulo. METHODS: Data from the Sao Paulo Megacity study, the Brazilian version of the World Mental Health Survey multicenter study, were used. A total of 3,588 adults living in 69 neighborhoods in the metropolitan region of Sao Paulo, SP, Southeastern Brazil, including 38 municipalities and 31 neighboring districts, were selected using multistratified sampling of the non-institutionalized population. Multilevel Bayesian logistic models were fitted to identify the individual and contextual determinants of the use of health care services in the past 12 months and the presence of a regular physician for routine care. RESULTS: The contextual characteristics of the place of residence (income inequality, violence, and median income) showed no significant correlation (p > 0.05) with the use of health care services or with the presence of a regular physician for routine care. The only exception was the negative correlation between living in areas with high income inequality and the presence of a regular physician (OR = 0.77; 95%CI 0.60–0.99) after controlling for individual characteristics. The study revealed a strong and consistent correlation between individual characteristics (mainly education and possession of health insurance), use of health care services, and presence of a regular physician. Presence of chronic and mental illnesses was strongly correlated with the use of health care services in the past year (regardless of individual characteristics) but not with the presence of a regular physician. CONCLUSIONS: Individual characteristics, including higher education and possession of health insurance, were important determinants of the use of health care services in the metropolitan area of Sao Paulo. A better understanding of these determinants is essential for the development of public policies that promote equitable use of health care services.
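A minimal sketch of a multilevel Bayesian logistic model of the kind described, with individual-level fixed effects and a neighborhood-level random intercept; the file and column names are hypothetical, not the study's data:

```python
# Multilevel (mixed-effects) Bayesian logistic regression sketch.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("megacity_survey.csv")   # hypothetical file

# Fixed effects: education and health insurance (individual level);
# variance component: random intercept per neighborhood (contextual level).
model = BinomialBayesMixedGLM.from_formula(
    "used_services ~ education + has_insurance",
    {"neigh": "0 + C(neighborhood)"},
    df,
)
result = model.fit_vb()   # variational Bayes fit
print(result.summary())
```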
Abstract:
This work extends a recent comparative study covering four different courses lectured at the Polytechnic of Porto - School of Engineering, with respect to the usage of a particular Learning Management System, i.e. Moodle, and its impact on students' results. A fifth course, which includes a number of resources especially supporting laboratory classes, is now added to the analysis. This particular course includes a number of remote experiments, made available through VISIR (Virtual Instrument Systems in Reality) and directly accessible through links included in the Moodle course page. We analyzed the students' behavior in following these links and in effectively running experiments in VISIR (and also in using other lab-related resources in Moodle). These data were correlated with students' classifications in the lab component and in the exam, each weighted at 50% of the final mark. We aimed to compare students' performance in a richly Moodle-supported environment (with a lab component) and in a poorly Moodle-supported environment (with only a theoretical component). This question followed from conclusions drawn in the comparative study referred to above, where it was shown that even though a positive correlation factor existed between the number of Moodle accesses and the final exam grade obtained by each student, the explanation behind it was not straightforward, as the quality of the resources was preponderant over their quantity.
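A minimal sketch of the access-grade correlation at the heart of the comparison; the numbers are placeholders, and, as the abstract notes, a positive r by itself does not explain the mechanism:

```python
# Pearson correlation between Moodle access counts and exam grades.
import numpy as np
from scipy.stats import pearsonr

accesses = np.array([12, 40, 33, 7, 55, 21, 48, 15])              # per student
grades = np.array([9.5, 14.0, 12.5, 8.0, 16.5, 11.0, 15.0, 10.0])

r, p = pearsonr(accesses, grades)
print(f"r = {r:.2f}, p = {p:.3f}")
```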
Abstract:
A dynamical approach to study the behaviour of generalized population growth models from Beta(p, 2) densities, with strong Allee effect, is presented. The dynamical analysis of the respective unimodal maps is performed using symbolic dynamics techniques. The complexity of the corresponding discrete dynamical systems is measured in terms of topological entropy. Different population dynamics regimes are obtained when the intrinsic growth rates are modified: extinction, bistability, chaotic semistability, and essential extinction.
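A minimal sketch of the regime sweep, using a generic unimodal map with a Beta(p, 2)-shaped kernel, f(x) = r x^(p-1) (1 - x); this is an illustration under that assumption, not the authors' exact model:

```python
# Sweep the intrinsic growth rate r and classify long-run behaviour.
import numpy as np

def f(x, r, p=3.0):
    return r * x ** (p - 1) * (1.0 - x)   # Beta(p, 2)-shaped growth map

for r in (2.0, 5.0, 6.5):
    x = 0.5
    for _ in range(2000):                 # discard the transient
        x = f(x, r)
    orbit = []
    for _ in range(50):                   # record the attractor
        x = f(x, r)
        orbit.append(x)
    tag = "extinction" if max(orbit) < 1e-6 else "persistence"
    print(f"r = {r}: {tag}, last orbit point {orbit[-1]:.4f}")
```

Below the Allee threshold the orbit collapses to zero (extinction); at intermediate r a stable positive fixed point coexists with extinction (bistability); at higher r the positive regime becomes chaotic.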
Abstract:
Measurements in civil engineering load tests usually require considerable time and complex procedures. Therefore, measurements are usually constrained by the number of sensors, resulting in a restricted monitored area. Image processing analysis is an alternative that enables measurement of the complete area of interest with a simple and effective setup. In this article, photo sequences taken during load-displacement tests were captured by a digital camera and processed with image correlation algorithms. Three different image processing algorithms were used with real images taken from tests using specimens of PVC and Plexiglas. The data obtained from the image processing algorithms were also compared with the data from physical sensors. Complete displacement and strain maps were obtained. Results show that the accuracy of the measurements obtained by photogrammetry is equivalent to that of the physical sensors, but with much less equipment and fewer setup requirements. © 2015 Computer-Aided Civil and Infrastructure Engineering.
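A minimal sketch of the displacement-by-correlation idea (template matching with normalized cross-correlation); the images are synthetic stand-ins, not the test photos:

```python
# Estimate in-plane displacement between two frames by template matching.
import cv2
import numpy as np

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, size=(240, 320)).astype(np.uint8)   # frame 1
cur = np.roll(ref, shift=(5, 3), axis=(0, 1))                 # frame 2: shifted copy

y, x, h, w = 100, 100, 64, 64            # patch tracked on the specimen
template = ref[y:y + h, x:x + w]

scores = cv2.matchTemplate(cur, template, cv2.TM_CCOEFF_NORMED)
_, score, _, (bx, by) = cv2.minMaxLoc(scores)
print(f"du = {bx - x} px, dv = {by - y} px (correlation {score:.3f})")
```

Repeating this over a grid of patches yields the full displacement map from which strain fields are derived.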
Abstract:
This paper describes a high-resolution stratigraphic correlation scheme for the early to middle Miocene Lagos-Portimão Formation of central Algarve, southern Portugal. The Lagos-Portimão Formation of central Algarve is a 60 m thick package of horizontally bedded siliciclastics and carbonates. The bryozoan- and mollusc-dominated biofacies is typical of a shallow marine, warm-temperate climatic environment. We define four stratigraphic marker beds based on biofacies, lithology, and gamma-ray signatures. Marker bed 1 is a reddish shell bed composed predominantly of bivalve shells in various stages of fragmentation. Marker bed 2 is a fossiliferous sandstone / sandy rudstone characterized by bryozoan masses. Marker bed 3 is also a fossiliferous sandstone, with abundant larger foraminifers and foliate bryozoans. Marker bed 4 is composed of three distinct layers: two fossiliferous sandstones with an intercalated shell bed. The upper sandstone unit displays thickets of the bryozoan Celleporaria palmata associated with the coral Culicia parasitica. This stratigraphic framework allows isolated outcrops to be correlated within the stratigraphic context of the Lagos-Portimão Formation and supports high-resolution chronostratigraphic Sr-isotope dating.
Abstract:
The solubilities of two C-tetraalkylcalix[4]resorcinarenes, namely C-tetramethylcalix[4]resorcinarene and C-tetrapentylcalix[4]resorcinarene, in supercritical carbon dioxide (SC-CO2) were measured in a flow-type apparatus over the temperature range (313.2 to 333.2) K and at pressures from (12.0 to 35.0) MPa. The C-tetraalkylcalix[4]resorcinarenes were synthesized using our optimized procedure and fully characterized by means of gel permeation chromatography, infrared and nuclear magnetic resonance spectroscopy. The solubilities of the C-tetraalkylcalix[4]resorcinarenes in SC-CO2 were determined by analysis of the extracts using an HPLC method with ultraviolet (UV) detection adapted by our team. Four semiempirical density-based models and the Soave-Redlich-Kwong cubic equation of state (SRK CEoS) with classical mixing rules were applied to correlate the solubility of the calix[4]resorcinarenes in SC-CO2. The physical properties required for the modeling were estimated and reported.
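As a minimal sketch of a density-based correlation, here is a fit of Chrastil's model, ln S = k ln(rho) + a/T + b; Chrastil's is a typical choice of semiempirical density-based model, but the abstract does not name the four models actually used, and the data below are placeholders:

```python
# Fit Chrastil's semiempirical density-based solubility model.
import numpy as np
from scipy.optimize import curve_fit

def chrastil(X, k, a, b):
    rho, T = X
    return k * np.log(rho) + a / T + b    # returns ln S

rho = np.array([600.0, 700.0, 800.0, 650.0, 750.0])   # CO2 density, kg/m3
T = np.array([313.2, 313.2, 313.2, 333.2, 333.2])     # temperature, K
lnS = np.array([-9.1, -8.4, -7.8, -9.0, -8.2])        # ln(solubility), placeholder

(k, a, b), _ = curve_fit(chrastil, (rho, T), lnS)
print(f"k = {k:.2f}, a = {a:.0f}, b = {b:.2f}")
```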
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
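A minimal sketch of the pure-pixel extraction step just described (in the spirit of VCA, not the published implementation; noise handling, subspace estimation, and the affine step are omitted):

```python
# Iteratively project onto a direction orthogonal to the endmembers found
# so far; the extreme of each projection is taken as the next endmember.
import numpy as np

def extract_endmembers(R, p, seed=0):
    """R: (bands, pixels) data matrix; p: number of endmembers."""
    rng = np.random.default_rng(seed)
    bands = R.shape[0]
    E = np.zeros((bands, p))
    for i in range(p):
        d = rng.normal(size=bands)
        if i > 0:
            A = E[:, :i]
            d -= A @ np.linalg.pinv(A) @ d      # remove the span(A) component
        idx = np.argmax(np.abs(d @ R))          # extreme of the projection
        E[:, i] = R[:, idx]
    return E

# Tiny synthetic check: 3 signatures, Dirichlet abundances, no noise.
rng = np.random.default_rng(1)
M = rng.uniform(size=(50, 3))                       # 50 bands, 3 endmembers
A = rng.dirichlet(np.ones(3) * 0.2, size=2000).T    # near-pure pixels occur
print(extract_endmembers(M @ A, 3).shape)           # (50, 3)
```

The small Dirichlet concentration parameter makes near-pure pixels likely, so the pure-pixel assumption the text mentions approximately holds in this toy data.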
Abstract:
This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes hyperspectral images into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on independent component analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
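A minimal sketch of the data model DECA assumes: abundances drawn from a mixture of Dirichlet densities (non-negative, summing to one) and pixels formed as linear mixtures. This generates synthetic data only; the GEM inference itself is not shown, and all parameter values are illustrative:

```python
# Generate pixels under a Dirichlet-mixture abundance model.
import numpy as np

rng = np.random.default_rng(0)
bands, p, n = 100, 3, 5000
M = rng.uniform(0.1, 0.9, size=(bands, p))     # endmember signatures

weights = np.array([0.6, 0.4])                 # Dirichlet mixture weights
alphas = np.array([[9.0, 2.0, 1.0],            # component 1 parameters
                   [1.0, 4.0, 8.0]])           # component 2 parameters
comp = rng.choice(2, size=n, p=weights)
A = np.vstack([rng.dirichlet(alphas[c]) for c in comp]).T   # (p, n)

X = M @ A + 0.001 * rng.normal(size=(bands, n))             # noisy mixtures
print(X.shape, A.sum(axis=0)[:5])              # abundances sum to one
```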
Abstract:
Presented at Faculdade de Ciências e Tecnologias, Universidade de Lisboa, to obtain the Master's degree in Conservation and Restoration of Textiles.
Abstract:
Three recombinant GST fusion antigens of Treponema pallidum, described as GST-rTp47, GST-rTp17, and GST-rTp15, were analyzed by Western blotting. We tested 53 serum samples: 25 from patients at different clinical stages of syphilis, all of them presenting anti-treponemal antibodies; 25 from healthy blood donors; and three from patients with sexually transmitted diseases (STD) other than syphilis. Almost all samples from patients with syphilis presented strong reactivity with the GST-rTp17 antigen. Some samples were non-reactive or showed a weak reaction with GST-rTp47 and/or GST-rTp15, and there was apparently no correlation with the stage of the disease. There was no seropositivity among blood donors. No sample reacted with purified GST. We conclude that, owing to their specificity, these recombinant antigens can be used as GST fusion proteins for the development of syphilis diagnostic assays.