944 results for GIS data and services


Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To estimate the prevalence of, and analyze the risk factors associated with, osteoporosis and low-trauma fracture in women. METHODS: Cross-sectional study including a total of 4,332 women older than 40 years attending primary care services in Greater São Paulo, Southeastern Brazil, between 2004 and 2007. Anthropometric and gynecological data and information about lifestyle habits, previous fracture, medical history, food intake and physical activity were obtained through individual quantitative interviews. Low-trauma fracture was defined as one resulting from a fall from standing height or less in individuals aged 50 years or older. Multiple logistic regression models were designed with osteoporotic fracture and bone mineral density (BMD) as the dependent variables and all other parameters as independent variables. The significance level was set at p<0.05. RESULTS: The prevalence of osteoporosis and osteoporotic fractures was 33% and 11.5%, respectively. The main risk factors associated with low bone mass were age (OR=1.07; 95% CI: 1.06;1.08), time since menopause (OR=2.16; 95% CI: 1.49;3.14), previous fracture (OR=2.62; 95% CI: 2.08;3.29) and current smoking (OR=1.45; 95% CI: 1.13;1.85). BMI (OR=0.88; 95% CI: 0.86;0.89), regular physical activity (OR=0.78; 95% CI: 0.65;0.94) and hormone replacement therapy (OR=0.43; 95% CI: 0.33;0.56) had a protective effect on bone mass. Risk factors significantly associated with osteoporotic fractures were age (OR=1.05; 95% CI: 1.04;1.06), time since menopause (OR=4.12; 95% CI: 1.79;9.48), family history of hip fracture (OR=3.59; 95% CI: 2.88;4.47) and low BMD (OR=2.28; 95% CI: 1.85;2.82). CONCLUSIONS: Advanced age, menopause, low-trauma fracture and current smoking are major risk factors associated with low BMD and osteoporotic fracture. The clinical use of these parameters to identify women at higher risk of fracture might be a reasonable strategy for improving the management of osteoporosis.
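
The odds ratios above come from multiple logistic regression models. As a minimal, hedged illustration of how such estimates are produced (synthetic data and invented variable names, not the study's dataset):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic cohort, illustrative only; all coefficients are assumptions.
rng = np.random.default_rng(0)
n = 4332
age = rng.uniform(40, 90, n)
smoker = rng.integers(0, 2, n).astype(float)
bmi = rng.normal(27.0, 4.0, n)

# Assumed data-generating process for the binary outcome (low BMD).
logit = -6.0 + 0.07 * age + 0.37 * smoker - 0.13 * (bmi - 27.0)
low_bmd = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Fit the logistic model and report odds ratios with 95% CIs:
# exponentiating a coefficient (and its CI bounds) yields the OR.
X = sm.add_constant(np.column_stack([age, smoker, bmi]))
fit = sm.Logit(low_bmd, X).fit(disp=False)
for name, coef, (lo, hi) in zip(["age", "smoker", "BMI"],
                                fit.params[1:], fit.conf_int()[1:]):
    print(f"{name}: OR={np.exp(coef):.2f} "
          f"(95% CI {np.exp(lo):.2f};{np.exp(hi):.2f})")
```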

Relevance:

100.00%

Publisher:

Abstract:

The goal of this paper is to show that the DGPS data Internet service we designed and developed provides campus-wide real-time access to Differential GPS (DGPS) data and thus supports precise outdoor navigation. First we describe the distributed system in terms of architecture (a three-tier client/server application), services provided (real-time DGPS data transport from remote DGPS sources and campus-wide data dissemination) and transmission modes implemented (raw and frame mode over TCP and UDP). Then we present and discuss the results obtained and, finally, draw some conclusions.
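
A minimal sketch of the middle tier's raw mode over TCP: a relay that pulls bytes from a remote DGPS source and fans them out to campus clients. Host names, ports, and structure are assumptions for illustration; the paper's actual implementation is not reproduced here.

```python
import socket
import threading

# Hypothetical endpoints; the paper does not publish its addresses.
DGPS_SOURCE = ("dgps-source.example.org", 2101)
LISTEN_ADDR = ("0.0.0.0", 5000)

clients = []
lock = threading.Lock()

def relay_from_source():
    # Pull raw DGPS correction bytes from the remote source and
    # rebroadcast them verbatim to every connected client (raw mode).
    src = socket.create_connection(DGPS_SOURCE)
    while data := src.recv(4096):
        with lock:
            for c in clients[:]:
                try:
                    c.sendall(data)
                except OSError:
                    clients.remove(c)

def main():
    threading.Thread(target=relay_from_source, daemon=True).start()
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LISTEN_ADDR)
    srv.listen()
    while True:
        conn, _ = srv.accept()      # campus client subscribes
        with lock:
            clients.append(conn)

if __name__ == "__main__":
    main()
```

A frame mode would differ only in wrapping each DGPS message in a small header before transmission; the UDP variants would replace the per-client TCP connections with datagrams.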

Relevance:

100.00%

Publisher:

Abstract:

The growth of technologies available on the Web has favoured the emergence of many forms of information, resources and services. This growth, combined with people's constant need for training and development, both personal and professional, has encouraged the field of adaptive educational hypermedia systems (AEHS). These systems are able to adapt instruction according to the student model: personal characteristics, needs, and other aspects. AEHS have changed the way teaching is delivered, moving from traditional instruction, restricted to the use of textbooks, to computer tools that make didactic material available over the Internet and favour individualized teaching. AEHS generate a large volume of data: the information contained in the student model and all the data on each student's learning process. These data are easily ignored, without the careful analysis that would improve our knowledge of student behaviour during the teaching process, adjust the form of learning to each student and thus improve the results obtained. The objective of this work was to select and apply Data Mining techniques to an AEHS, PCMAT - Mathematics Collaborative Educational System. The application of these techniques produced data models that transform the data into useful and understandable information, essential for generating new student profiles, student behaviour patterns, and adaptation and pedagogical rules. In this work several data models were created using the classification Data Mining technique, covering different algorithms. The results obtained will make it possible to define new adaptation rules and student behaviour patterns, and may improve the learning process available in an AEHS.
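
The abstract names classification as the Data Mining technique applied. A minimal sketch, assuming hypothetical student-model features (the real PCMAT attributes are not listed above), of comparing two classification algorithms by cross-validation:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Hypothetical student-model features: time on task (min),
# exercises attempted, success rate, hint requests.
rng = np.random.default_rng(42)
X = rng.random((200, 4)) * np.array([60.0, 30.0, 1.0, 10.0])

# Hypothetical label: whether the learning objective was mastered.
y = (0.5 * X[:, 2] + 0.01 * X[:, 1] - 0.02 * X[:, 3]
     + rng.normal(0, 0.1, 200) > 0.3).astype(int)

# Cross-validate each candidate model, as one would before deriving
# new student profiles and adaptation rules from the chosen one.
for clf in (DecisionTreeClassifier(max_depth=4), GaussianNB()):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, round(scores.mean(), 3))
```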

Relevance:

100.00%

Publisher:

Abstract:

Over the past few decades there has been some discussion concerning the increase of the natural background radiation caused by coal-fired power plants, due to the uranium and thorium content of combustion ashes. The radioactive decay products of uranium and thorium, such as radium, radon, polonium, bismuth and lead, are also released, in addition to a significant amount of 40K. Since the measurement of radioactive elements released in the gaseous emissions of coal power plants is not compulsory, there is a gap of information concerning this situation. Consequently, the prediction of the dispersion and mobility of these elements in the environment after their release is based on limited data, and the radiological impact of exposure to these radioactive elements is unknown. This paper describes the methodology being developed to assess the radiological impact of the rise in the natural background radiation level caused by the release and dispersion of the emitted radionuclides. The current investigation is part of a research project ongoing in the vicinity of the Sines coal-fired power plant (south of Portugal) until 2013. Data from the preliminary stages are already available and open to interpretation.
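
The abstract does not name the dispersion model used. For stack releases, a common first-order screening tool is the Gaussian plume; the sketch below uses it with invented parameters purely to illustrate the kind of calculation involved, not the project's actual methodology.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Gaussian plume concentration (Bq/m^3) at crosswind offset y
    and height z, for emission rate q (Bq/s), wind speed u (m/s),
    effective stack height h (m) and dispersion parameters sigma_y,
    sigma_z (m) evaluated at the downwind distance of interest."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground reflection
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Assumed illustrative values; not measurements from Sines.
c = gaussian_plume(q=1e6, u=5.0, y=0.0, z=1.5, h=150.0,
                   sigma_y=160.0, sigma_z=80.0)
print(f"Ground-level concentration downwind: {c:.2e} Bq/m^3")
```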

Relevance:

100.00%

Publisher:

Abstract:

The conjugate margins system of the Gulf of Lion and West Sardinia (GLWS) represents a unique natural laboratory for addressing fundamental questions about rifting, due to its landlocked situation, its youth, its thick sedimentary layers, including prominent palaeo-markers such as the MSC event, and the amount of available data and multidisciplinary studies. The main goals of the SARDINIA experiment were to (i) investigate the deep structure of the entire system within the two conjugate margins: the Gulf of Lion and West Sardinia, (ii) characterize the nature of the crust, and (iii) define the geometry of the basin and provide important constraints on its genesis. This paper presents the results of P-wave velocity modelling on three coincident near-vertical reflection multi-channel seismic (MCS) and wide-angle seismic profiles acquired in the Gulf of Lion, down to a depth of 35 km. A companion paper [part II, Afilhado et al., 2015] addresses the results of two other SARDINIA profiles located on the eastern conjugate West Sardinian margin. Forward wide-angle modelling of both data sets confirms that the margin is characterised by three distinct domains following the onshore unthinned, 33 km-thick continental crust: Domain I is bounded by two necking zones, where the crust thins from 30 to 20 km and from 20 to 7 km, respectively, over a width of about 170 km; the outermost necking is imprinted by the well-known T-reflector at its crustal base. Domain II is characterised by a 7 km-thick crust with anomalous velocities ranging from 6 to 7.5 km/s; it represents the transition between the thinned continental crust (Domain I) and a very thin (only 4-5 km) "atypical" oceanic crust (Domain III). In Domain II, the hypothesis of the presence of exhumed mantle is falsified by our results: this domain likely consists of a thin exhumed lower continental crust overlying a heterogeneous, intruded lower layer. Moreover, despite the difference in their magnetic signatures, Domains II and III present very similar seismic velocity profiles, and we discuss the possibility of a connection between these two different domains.

Relevance:

100.00%

Publisher:

Abstract:

The rapidly increasing computing power, available storage and communication capabilities of mobile devices make it possible to start processing and storing data locally, rather than offloading it to remote servers, allowing scenarios of mobile clouds without infrastructure dependency. We can now aim at connecting neighboring mobile devices, creating a local mobile cloud that provides storage and computing services on locally generated data. In this paper, we present an early overview of a distributed mobile system that allows accessing and processing of data distributed across mobile devices without an external communication infrastructure.
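
As a hedged sketch of the idea (the abstract does not describe the system's protocol), a scatter-gather computation over neighboring devices might look like this, with peer addresses and the wire format invented for illustration:

```python
import json
import socket
from concurrent.futures import ThreadPoolExecutor

# Hypothetical neighboring devices reachable on the local network.
PEERS = [("192.168.1.11", 7000), ("192.168.1.12", 7000)]

def query_peer(addr, request):
    # Scatter: ask one device to run the operation on its local data.
    with socket.create_connection(addr, timeout=5) as s:
        s.sendall(json.dumps(request).encode() + b"\n")
        return json.loads(s.makefile().readline())

def distributed_count(keyword):
    # Gather: combine the partial results computed on each device,
    # with no external infrastructure involved.
    request = {"op": "count", "keyword": keyword}
    with ThreadPoolExecutor() as pool:
        partials = pool.map(lambda p: query_peer(p, request), PEERS)
    return sum(r["count"] for r in partials)
```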

Relevance:

100.00%

Publisher:

Abstract:

Presentation given at the OH&S Forum 2011 - International Forum on Occupational Health and Safety: Policies, profiles and services, held in Finland, 20-22 June 2011.

Relevance:

100.00%

Publisher:

Abstract:

Dissertation submitted to obtain the Master's degree in Civil Engineering, specialization area of Roads and Transportation.

Relevance:

100.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located in that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data into statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises IFA performance, as in the ICA case. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and full additivity (constant sum) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
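
Under the linear mixing model with known signatures, unmixing reduces to constrained least squares, one of the approaches listed above. A minimal sketch with synthetic data (assumed signatures, Dirichlet abundances, and penalty weight; not the chapter's experiments), where the sum-to-one constraint is folded into a nonnegative least-squares solve by row augmentation:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
bands, p, pixels = 50, 3, 1000

# Assumed endmember signatures (columns of M) and Dirichlet-distributed
# abundances, which satisfy positivity and the constant-sum constraint.
M = rng.random((bands, p))
a_true = rng.dirichlet(np.ones(p), size=pixels)
Y = a_true @ M.T + 0.01 * rng.normal(size=(pixels, bands))

# Fully constrained unmixing per pixel: augment the system with a
# weighted all-ones row so each solution approximately sums to one.
delta = 10.0
M_aug = np.vstack([M, delta * np.ones((1, p))])
a_hat = np.array([nnls(M_aug, np.append(y, delta))[0] for y in Y])

print("mean abs abundance error:", np.abs(a_hat - a_true).mean().round(4))
```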

Relevance:

100.00%

Publisher:

Abstract:

Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra, called endmember signatures, and a set of corresponding abundance fractions for the respective spatial coverage. This paper introduces vertex component analysis (VCA), an unsupervised algorithm to unmix linear mixtures of hyperspectral data. VCA exploits the fact that endmembers occupy the vertices of a simplex, and assumes the presence of pure pixels in the data. VCA performance is illustrated using simulated and real data. VCA competes with state-of-the-art methods at a much lower computational complexity.
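
A hedged sketch of the simplex-vertex idea behind VCA (not the authors' reference implementation): repeatedly pick the pixel most extreme along a direction orthogonal to the endmembers already found, assuming pure pixels exist in the data.

```python
import numpy as np

def vca_sketch(Y, p, seed=0):
    """Extract p candidate endmember pixel indices from Y (pixels x bands)."""
    rng = np.random.default_rng(seed)
    E = np.zeros((Y.shape[1], p))   # signatures found so far (columns)
    idx = []
    for i in range(p):
        w = rng.normal(size=Y.shape[1])
        if i > 0:
            # Remove the component of w lying in the span of the
            # endmembers already found, to explore a new direction.
            Q, _ = np.linalg.qr(E[:, :i])
            w -= Q @ (Q.T @ w)
        f = w / np.linalg.norm(w)
        j = int(np.argmax(np.abs(Y @ f)))  # extreme pixel = simplex vertex
        idx.append(j)
        E[:, i] = Y[j]
    return idx

# Tiny demo with pure pixels present (rows 0-2 are the endmembers).
rng = np.random.default_rng(1)
M = rng.random((3, 20))                  # 3 endmembers, 20 bands
A = rng.dirichlet(np.ones(3), size=500)  # mixed-pixel abundances
Y = np.vstack([M, A @ M])                # pure pixels first
print(sorted(vca_sketch(Y, 3)))          # typically recovers [0, 1, 2]
```

Because every mixed pixel is a convex combination of the endmembers, the maximum of |Yf| over the data is attained at a vertex, which is why the pure-pixel assumption matters.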

Relevance:

100.00%

Publisher:

Abstract:

Dissertation presented as a partial requirement for obtaining the Master's degree in Geographic Information Systems and Science.

Relevance:

100.00%

Publisher:

Abstract:

This paper studies the statistical distributions of worldwide earthquakes from 1963 to 2012. A Cartesian grid dividing the Earth into geographic regions is considered. Entropy and the Jensen–Shannon divergence are used to analyze and compare the real-world data. Hierarchical clustering and multidimensional scaling techniques are adopted for data visualization. Entropy-based indices have the advantage of reducing the relationships between the seismic data to a single parameter. Classical and generalized (fractional) entropy and the Jensen–Shannon divergence are tested. The generalized measures lead to a clear identification of patterns embedded in the data and contribute to a better understanding of earthquake distributions.
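
As a brief illustration of the entropy and Jensen–Shannon machinery (with invented histograms, not the paper's gridded catalogue):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical magnitude histograms for two geographic grid cells.
p = np.array([410, 220, 95, 40, 15, 6, 2, 1, 1], dtype=float)
q = np.array([380, 250, 110, 30, 10, 4, 3, 1, 0], dtype=float)
p /= p.sum()
q /= q.sum()

def shannon_entropy(d):
    # Classical (Shannon) entropy in nats; zero bins contribute nothing.
    d = d[d > 0]
    return -np.sum(d * np.log(d))

print("entropies:", shannon_entropy(p).round(3), shannon_entropy(q).round(3))

# SciPy's jensenshannon returns the square root of the divergence,
# so square it to obtain the Jensen-Shannon divergence itself.
print("JS divergence:", round(jensenshannon(p, q) ** 2, 5))
```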

Relevance:

100.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

100.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

100.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.