27 results for Spatial Research


Relevance: 20.00%

Abstract:

This paper presents a spatial econometrics analysis of the number of road accidents with victims in the smallest administrative divisions of Lisbon, taking as a baseline a log-Poisson model on environmental factors. Spatial correlation is investigated both in the data alone and in the residuals of the baseline model, without and with spatially autocorrelated and spatially lagged terms. In all cases, no spatial autocorrelation was detected.
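As a rough illustration of the kind of test involved, the sketch below fits a log-Poisson baseline and computes Moran's I on the raw counts and on the model residuals. The GeoDataFrame, its column names, the queen-contiguity weights and the use of deviance residuals are assumptions of this sketch; the abstract does not state which software or spatial weight matrix the authors used.

```python
# Sketch: log-Poisson baseline plus Moran's I on counts and on residuals.
# The GeoDataFrame, column names and weight choice are illustrative only.
import numpy as np
import statsmodels.api as sm
from libpysal.weights import Queen
from esda.moran import Moran

def moran_on_poisson_residuals(gdf, count_col, covariate_cols):
    X = sm.add_constant(gdf[covariate_cols])
    model = sm.GLM(gdf[count_col], X, family=sm.families.Poisson()).fit()  # log link

    w = Queen.from_dataframe(gdf)   # contiguity-based spatial weights
    w.transform = "r"               # row-standardise

    mi_counts = Moran(np.asarray(gdf[count_col]), w)        # data alone
    mi_resid = Moran(np.asarray(model.resid_deviance), w)   # baseline-model residuals
    return {"I_counts": mi_counts.I, "p_counts": mi_counts.p_sim,
            "I_resid": mi_resid.I, "p_resid": mi_resid.p_sim}
```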

Relevance: 20.00%

Abstract:

Introduction: The purpose of this review is to gather and analyse current research publications to evaluate Sinogram-Affirmed Iterative Reconstruction (SAFIRE). The aim is to investigate whether this algorithm can reduce the dose delivered during CT imaging while maintaining image quality. Recent research shows that children carry a greater risk per unit dose, owing to increased radiosensitivity and longer life expectancies, which makes it particularly important to reduce the radiation dose received by children. Discussion: Recent publications suggest that SAFIRE can reduce image noise in CT images, thereby creating the potential to reduce dose. Some publications suggest that a dose reduction of up to 64% compared with filtered back projection can be accomplished without a change in image quality. However, the literature suggests that using a higher SAFIRE strength may alter the image texture, creating an overly ‘smoothed’ image that lacks contrast. Some literature reports that SAFIRE decreases low-contrast detectability as well as spatial resolution. Publications tend to agree that SAFIRE strength three is optimal for an acceptable level of visual image quality, but more research is required. The importance of striking a balance between dose reduction and image quality is stressed. Most of the publications in this literature review were completed using adults or phantoms, and a distinct lack of literature on paediatric patients is noted. Conclusion: It is necessary to find an optimal way to balance dose reduction and image quality. More research relating SAFIRE to paediatric patients is required to fully investigate the dose reduction potential in this population for a range of different SAFIRE strengths.

Relevance: 20.00%

Abstract:

Coastal low-level jets (CLLJ) are a lower-tropospheric wind feature driven by the pressure gradient produced by a sharp contrast between high temperatures over land and lower temperatures over the sea. In summer, this contrast between the cold ocean and the warm land is intensified by the effect of the coast-parallel winds on the ocean, which generate upwelling currents, sharpening the temperature gradient close to the coast and giving rise to strong baroclinic structures at the coast. During summertime, the Iberian Peninsula is often under the effect of the Azores High and of a thermal low pressure system inland, leading to a seasonal wind along the west coast called the Nortada (northerly wind). This study presents a regional climatology of the CLLJ off the west coast of the Iberian Peninsula, based on a 9 km resolution downscaling dataset produced with the Weather Research and Forecasting (WRF) mesoscale model, forced by 19 years of ERA-Interim reanalysis (1989-2007). The simulation results show that the hourly frequency of occurrence of the jet in summer is above 30% and decreases to about 10% during spring and autumn. The monthly frequencies of occurrence can reach higher values, around 40% in summer months, and reveal large inter-annual variability in all three seasons. On a daily basis, the CLLJ is present on almost 70% of summer days. The CLLJ wind direction is mostly north-northeasterly, and the jet occurs most persistently in three areas where the interaction of the jet flow with local capes and headlands is more pronounced. The coastal jets in this area occur at heights between 300 and 400 m, with a mean speed of around 15 m/s and maximum speeds of 25 m/s.
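For illustration, the kind of frequency-of-occurrence statistics quoted above can be computed from an hourly jet-detection flag as sketched below. The detection criterion itself, and the boolean series it produces, are assumptions; the abstract does not describe the detection algorithm.

```python
# Sketch: seasonal hourly and daily frequencies of occurrence from an hourly
# boolean jet-detection series on a DatetimeIndex. Only the bookkeeping is
# shown; the jet-detection criterion is not specified in the abstract.
import pandas as pd

_SEASON = {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM", 5: "MAM",
           6: "JJA", 7: "JJA", 8: "JJA", 9: "SON", 10: "SON", 11: "SON"}

def jet_frequencies(jet_detected: pd.Series) -> pd.DataFrame:
    """jet_detected: boolean Series indexed by hourly timestamps."""
    hourly = jet_detected.groupby(jet_detected.index.month.map(_SEASON)).mean()

    daily_any = jet_detected.resample("D").max()    # jet present at any hour of the day?
    daily = daily_any.groupby(daily_any.index.month.map(_SEASON)).mean()

    return pd.DataFrame({"hourly_frequency": hourly, "daily_frequency": daily})
```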

Relevance: 20.00%

Abstract:

This paper presents recent research results on the development of an Observed Time Difference (OTD) based geolocation algorithm that uses network trace data from a real Universal Mobile Telecommunications System (UMTS) network. The initial results were published in [1]; the current paper focuses on increasing the sample convergence rate and on introducing a new filtering approach, based on a moving-average spatial filter, to increase accuracy. Field tests have been carried out for two radio environments (urban and suburban) in the Lisbon area, Portugal. The new enhancements produced geopositioning success rates of 47% and 31%, and median accuracies of 151 m and 337 m, for the urban and suburban environments, respectively. The implemented filter produced a 16% and 20% increase in accuracy when compared with the raw geopositioned data. The results obtained are promising in terms of both accuracy and geolocation success rate. OTD positioning smoothed by moving-average spatial filtering proves to be a strong approach for positioning trace-extracted events, which is vital for boosting Self-Organizing Networks (SON) over a 3G network.
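The abstract does not detail the filter design; as a rough illustration, one plausible moving-average spatial filter over successive position fixes is sketched below (the window length and the planar-coordinate convention are assumptions).

```python
# Sketch: smoothing a sequence of geolocation fixes with a simple
# moving-average spatial filter (sliding-window centroid).
import numpy as np

def moving_average_spatial_filter(positions: np.ndarray, window: int = 5) -> np.ndarray:
    """positions: (N, 2) array of planar x/y fixes in metres, in time order."""
    half = window // 2
    smoothed = np.empty_like(positions, dtype=float)
    for i in range(len(positions)):
        lo, hi = max(0, i - half), min(len(positions), i + half + 1)
        smoothed[i] = positions[lo:hi].mean(axis=0)   # centroid of the window
    return smoothed
```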

Relevance: 20.00%

Abstract:

Renewable energy sources (RES) have unique characteristics that grant them preference in energy and environmental policies. However, because renewable resources are barely controllable and sometimes unpredictable, integrating high shares of renewable sources into power systems poses several challenges. In order to mitigate this problem, this paper presents a decision-making methodology for renewable investments. The model computes the optimal renewable generation mix from the available technologies (hydro, wind and photovoltaic) that integrates a given share of renewable sources while minimizing residual demand variability, therefore stabilizing thermal power generation. The model also includes a spatial optimization of wind farms in order to identify the best distribution of wind capacity. The methodology is applied to the Portuguese power system.
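As a rough sketch of this kind of formulation (not the paper's actual model, which is not specified in the abstract), the capacities of hydro, wind and PV can be chosen to minimise the variance of residual demand subject to a renewable energy-share target; the profile data, variable names and solver choice below are assumptions.

```python
# Sketch: capacities that minimise residual-demand variance subject to a
# renewable energy-share constraint. `demand` is an hourly series and
# `profiles` holds per-MW hourly generation for each technology.
import numpy as np
from scipy.optimize import minimize

def optimal_mix(demand: np.ndarray, profiles: np.ndarray, share: float) -> np.ndarray:
    """profiles: (n_tech, T) per-MW profiles; returns installed capacity per technology (MW)."""
    def residual_variance(c):
        return np.var(demand - c @ profiles)        # variability left for thermal plants

    constraint = {"type": "ineq",                   # renewables cover >= share of demand
                  "fun": lambda c: (c @ profiles).sum() - share * demand.sum()}
    c0 = np.full(profiles.shape[0], demand.mean())  # crude starting point
    result = minimize(residual_variance, c0, method="SLSQP",
                      bounds=[(0, None)] * profiles.shape[0], constraints=[constraint])
    return result.x
```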

Relevance: 20.00%

Abstract:

A new data set of daily gridded observations of precipitation, computed from over 400 stations in Portugal, is used to assess the performance of 12 regional climate models at 25 km resolution from the ENSEMBLES set, all forced by ERA-40 boundary conditions, for the 1961-2000 period. Standard point error statistics, calculated from grid-point and basin-aggregated data, and precipitation-related climate indices are used to analyze the performance of the different models in representing the main spatial and temporal features of the regional climate and its extreme events. As a whole, the ENSEMBLES models are found to achieve a good representation of those features, with high spatial correlations with observations. There is a small but relevant negative bias in precipitation, especially in the driest months, leading to systematic errors in related climate indices. The underprediction of precipitation occurs in most percentiles, although this deficiency is partially corrected at the basin level. Interestingly, some of the conclusions concerning the performance of the models differ from what has been found for the contiguous territory of Spain; in particular, ENSEMBLES models appear too dry over Portugal and too wet over Spain. Finally, models behave quite differently in the simulation of some important aspects of local climate, from the mean climatology to high-precipitation regimes in localized mountain ranges and in the adjacent drier regions.
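For illustration only, the simplest of those point error statistics (bias and spatial correlation) could be computed as sketched below, assuming the observations and one model field are 2-D arrays on a common grid (names and masking are assumptions).

```python
# Sketch: bias and spatial correlation between gridded observations and one
# RCM field on the same grid, skipping missing (e.g. ocean) cells.
import numpy as np

def bias_and_spatial_correlation(obs: np.ndarray, model: np.ndarray):
    obs, model = obs.ravel(), model.ravel()
    valid = ~(np.isnan(obs) | np.isnan(model))
    bias = (model[valid] - obs[valid]).mean()           # mean error
    corr = np.corrcoef(model[valid], obs[valid])[0, 1]  # pattern correlation
    return bias, corr
```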

Relevance: 20.00%

Abstract:

Team sports represent complex systems: players interact continuously during a game and exhibit intricate patterns of interaction, which can be identified and investigated at both individual and collective levels. We used Voronoi diagrams to identify and investigate the spatial dynamics of players' behavior in Futsal. Using this tool, we examined 19 plays of a sub-phase of a Futsal game played in a reduced area (20 m²), from which we extracted the trajectories of all players. Results obtained from a comparative analysis of players' Voronoi areas (dominant regions) and nearest-teammate distances revealed different patterns of interaction between attackers and defenders, both at the level of individual players and of teams. We found that larger dominant regions were associated with attackers than with defenders. Furthermore, these regions were more variable in size among players from the same team but, at the player level, the attackers' dominant regions were more regular than those associated with each of the defenders. These findings support a formal description of the dynamic spatial interaction of the players, at least during the particular sub-phase of Futsal investigated. The adopted approach may be extended to other team behaviors in which the actions taken at any instant by each of the involved agents are associated with the space they occupy at that particular time.
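For illustration, a player's dominant region on a bounded playing area can be approximated on a grid, which avoids the unbounded cells of a raw Voronoi diagram; the 5 m x 4 m area (20 m²), the grid resolution and the function name below are assumptions, as the abstract does not describe the computation.

```python
# Sketch: grid approximation of each player's dominant (Voronoi) region on a
# bounded area; returns the area in square metres owned by each player.
import numpy as np
from scipy.spatial import cKDTree

def dominant_region_areas(players_xy: np.ndarray, width=5.0, height=4.0, res=0.05):
    """players_xy: (n_players, 2) positions in metres within the bounded area."""
    xs = np.arange(0.0, width, res) + res / 2
    ys = np.arange(0.0, height, res) + res / 2
    grid = np.array([(x, y) for x in xs for y in ys])
    _, owner = cKDTree(players_xy).query(grid)      # nearest player for every cell
    cell_area = res * res
    return np.bincount(owner, minlength=len(players_xy)) * cell_area
```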

Relevance: 20.00%

Abstract:

Why this project? Knowledge, skills, and competences; development of competences (instrumental, interpersonal, systemic); a contribution to a multidimensional conception of the student, preparing them to grasp the importance of research and preparing them for a health profession; new technological solutions, together with innovation and change in healthcare and in work processes, constitute today's and tomorrow's challenges for health professionals [the development of technology and informatics, evidence-based education and practice (research), changes in work processes, teamwork, the international dimension]; research and its impact on the profession (development of the professions' own body of knowledge, autonomy, a positive impact on professional practice, benefit to the patient); research helps to define the parameters of a profession (no profession will develop in a sustained way without the contribution of research; it is through research that an evidence-based body of knowledge enabling good practice is built); investing in a research-based project, with a scientific basis that contributes to better education and professional practice, aiming to ensure the credibility of the profession; student-centred learning; adoption of teaching methodologies that promote autonomy, reasoning, critical thinking, and problem solving.

Relevance: 20.00%

Abstract:

Background - Medical image perception research relies on visual data to study the diagnostic relationship between observers and medical images. A consistent method to assess visual function for participants in medical imaging research has not been developed, which represents a significant gap in existing research. Methods - Three visual assessment factors appropriate to observer studies were identified: visual acuity, contrast sensitivity, and stereopsis. A test was designed for each, and 30 radiography observers (mean age 31.6 years) participated in each test. Results - Mean binocular visual acuity for distance was 20/14 across all observers. The difference between observers who did and did not use corrective lenses was not statistically significant (P = .12). All subjects had normal values for near visual acuity and stereoacuity. Contrast sensitivity was better than population norms. Conclusion - All observers had normal visual function and could participate in medical imaging visual analysis studies. Evaluation protocols and population norms are provided. Further studies are necessary to fully understand the relationship between visual performance on tests and diagnostic accuracy in practice.

Relevance: 20.00%

Abstract:

Dissertation submitted in order to obtain the Master's degree in Civil Engineering, in the specialization area of Roads and Transport.

Relevance: 20.00%

Abstract:

This paper presents the main results obtained within the scope of an ongoing project that aims to contribute to the valorization, in construction materials, of a waste generated by the Portuguese oil company. This waste is an aluminosilicate with high pozzolanic reactivity. Several different technological applications have already been tested successfully, both in terms of properties and of compliance with the corresponding standard specifications. Namely, the project results have already demonstrated that this waste can be used in traditional concrete, self-compacting concrete, mortars (renders, masonry mortars, concrete repair mortars), as a main cement constituent, as well as in alkali-activated binders.

Relevance: 20.00%

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or the mixtures are intimate) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27].

Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift-wrapping algorithm [38] computes the convex hull of $n$ data points in a $d$-dimensional space with a computational complexity of $O(n^{\lfloor d/2 \rfloor + 1})$, where $\lfloor x \rfloor$ is the largest integer less than or equal to $x$ and $n$ is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a $\log(\cdot)$ law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of the most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented toward real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR.

The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
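The orthogonal-projection idea described above can be illustrated with a short, self-contained sketch. This is a didactic simplification (random initial direction, orthogonalization against the endmembers already found, extreme of the projection), not the published VCA algorithm: the signal-subspace estimation and noise-handling steps are omitted, and the function name and data layout are assumptions.

```python
# Sketch: pick, one at a time, the pixel that is extreme along a direction
# orthogonal to the endmembers found so far (pure pixels assumed present).
import numpy as np

def extract_endmembers(Y: np.ndarray, p: int, seed: int = 0) -> np.ndarray:
    """Y: (bands, pixels) data matrix; returns indices of p endmember pixels."""
    rng = np.random.default_rng(seed)
    bands = Y.shape[0]
    indices = []
    for _ in range(p):
        d = rng.standard_normal(bands)            # candidate direction
        if indices:
            E = Y[:, indices]                     # endmembers found so far
            d = d - E @ np.linalg.pinv(E) @ d     # subtract projection onto span(E)
        d /= np.linalg.norm(d)
        scores = d @ Y                            # projection of every pixel onto d
        indices.append(int(np.argmax(np.abs(scores))))
    return np.array(indices)
```

In the full method, the data would first be projected onto the estimated signal subspace, as outlined above, before this extraction loop is run.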