955 results for Urban Spatial Transformation
Abstract:
Hepatitis B markers were determined in 397 individuals from Niterói and 680 from Nova Iguaçu, and prevalences of 9.1% (1.0% HBsAg and 8.1% anti-HBs) and 11.1% (1.8% HBsAg and 9.3% anti-HBs) were found, respectively. The comparative prevalence of both markers in relation to age showed a higher prevalence of HBsAg in the 21-50-year-old group. For the anti-HBs antibody, a gradual increase with age was demonstrated, reaching 14.9% in Niterói and 29.1% in Nova Iguaçu in individuals over 51 years old. For hepatitis A, in 259 samples from Niterói, equally distributed by age group, an overall prevalence of 74.5% of anti-HAV antibodies was found. This prevalence increased gradually, reaching 90.0% above thirty years of age. In 254 samples from Nova Iguaçu analysed with the same criteria for distributing the samples, an overall antibody prevalence of 90.5% was found; prevalence reached 90.0% already in individuals over ten years old. The tests were performed by enzyme immunoassay with reagents prepared in our laboratory.
Abstract:
A thesis submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Information Systems
Abstract:
This paper describes the methodology adopted to assess the local air quality impact in the vicinity of a coal-fired power plant located in the south of Portugal. Two sampling areas were selected to assess the deposition flux of dust fallout and its potential spatial heterogeneity. The sampling area was divided into two subareas: the inner one, with higher sampling density and urban and suburban characteristics, inside a 6-km circle centred on the stacks, and an outer, mainly rural subarea with lower sampling density within a radius of 20 km. Particulate matter deposition was studied in the vicinity of the coal-fired power plant during three seasonal sampling campaigns. For the first one, the average annual flux of dust fallout was 22.51 g/(m2 yr), ranging from 4.20 to 65.94 g/(m2 yr); for the second one it was 9.47 g/(m2 yr), ranging from 0.78 to 32.72 g/(m2 yr); and for the last one it was 38.42 g/(m2 yr), ranging from 1.41 to 117.48 g/(m2 yr). The fallout during the second campaign turned out to be much lower than for the others. This was in part due to local meteorological patterns, but mostly due to the fact that the power plant was not working at full power during the second sampling campaign.
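The fluxes quoted above are annualized deposition rates. A minimal sketch of how a dust-fallout flux in g/(m2 yr) can be derived from the mass collected in a deposit gauge over a known exposure period, and how a campaign mean and range are then summarized; the collector area, exposure time and masses below are illustrative assumptions, not the study's data.

```python
# Minimal sketch: convert collected dust mass into an annualized deposition
# flux, then summarize a campaign (all numbers below are illustrative).

def deposition_flux_g_m2_yr(mass_g: float, collector_area_m2: float,
                            exposure_days: float) -> float:
    """Annualized dust-fallout flux in g/(m2 yr)."""
    daily_flux = mass_g / (collector_area_m2 * exposure_days)  # g/(m2 day)
    return daily_flux * 365.25                                 # g/(m2 yr)

# Hypothetical campaign records: (collected mass in g, collector area in m2, days exposed)
samples = [(0.12, 0.05, 30), (0.35, 0.05, 30), (0.08, 0.05, 30)]
fluxes = [deposition_flux_g_m2_yr(m, a, d) for m, a, d in samples]

print(f"mean: {sum(fluxes) / len(fluxes):.2f} g/(m2 yr), "
      f"range: {min(fluxes):.2f}-{max(fluxes):.2f} g/(m2 yr)")
```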
Abstract:
The aim of this work was to assess the influence of meteorological conditions on the dispersion of particulate matter from an industrial zone into urban and suburban areas. The particulate matter concentration was related to the most important meteorological variables, such as wind direction, velocity and frequency. A coal-fired power plant with two stacks of 225 m height was considered to be the main emission source. A middle point between the two stacks was taken as the centre of two concentric circles with 6 and 20 km radius delimiting the sampling area. About 40 sampling collectors were placed within this area. Meteorological data were obtained from a portable meteorological station placed approximately 1.7 km to the SE of the stacks. Additional data were obtained from the electrical company that runs the coal power plant. These data cover the years from 2006 to the present. A detailed statistical analysis was performed to identify the most frequent meteorological conditions, concerning mainly wind speed and direction. This analysis revealed that the most frequent winds blow from Northwest and North, and the strongest winds blow from Northwest. Particulate matter deposition was obtained in two sampling campaigns carried out in summer and in spring. For the first campaign the monthly average flux deposition was 1.90 g/m2 and for the second campaign this value was 0.79 g/m2. Wind dispersion occurred predominantly from North to South, away from the nearest residential area, located about 6 km to the Northwest of the stacks. Nevertheless, the higher deposition fluxes occurred in the NW/N and NE/E quadrants. This study considered only the contribution of particulate matter from coal combustion; however, other sources, such as road traffic, may be present as well. Additional chemical analyses and microanalysis are needed to identify the source linkage to the flux deposition levels.
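As a rough illustration of the kind of statistical wind analysis described above, the sketch below bins wind records into eight compass sectors to identify the most frequent and the strongest wind directions; the sector scheme and the sample records are assumptions for illustration, not the station's data.

```python
# Sketch: summarize wind records (direction in degrees, speed in m/s) into
# 8 compass sectors to find the most frequent and the strongest winds.
from collections import defaultdict

SECTORS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def sector(direction_deg: float) -> str:
    """Map a wind direction in degrees to one of 8 compass sectors."""
    return SECTORS[int(((direction_deg + 22.5) % 360) // 45)]

# Hypothetical hourly records: (direction_deg, speed_m_s)
records = [(330, 6.1), (350, 7.4), (10, 3.2), (300, 8.9), (120, 2.0), (315, 5.5)]

counts, max_speed = defaultdict(int), defaultdict(float)
for d, s in records:
    sec = sector(d)
    counts[sec] += 1
    max_speed[sec] = max(max_speed[sec], s)

print(f"most frequent sector: {max(counts, key=counts.get)}, "
      f"strongest winds from: {max(max_speed, key=max_speed.get)}")
```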
Abstract:
Algebra Colloquium, 15 (2008), pp. 581–588
Abstract:
The Tagus estuary is bordered by the largest metropolitan area in Portugal, which includes the capital city of Lisbon. It has suffered the impact of several major tsunamis in the past, as shown by a recent revision of the catalogue of tsunamis that struck the Portuguese coast over the past two millennia. Hence, the exposure of populations and infrastructure established along the riverfront is a critical concern for the civil protection services. The main objectives of this work are to determine critical inundation areas in Lisbon and to quantify the associated severity through a simple index derived from the local maximum of the momentum flux per unit mass and width. The methodology is based on the mathematical modelling of a tsunami propagating along the estuary, resembling the one that occurred on 1 November 1755 following the 8.5 Mw Great Lisbon Earthquake. The simulation tool employed was STAV-2D, a shallow-flow solver coupled with conservation equations for fine solid phases, now featuring the novelty of discrete Lagrangian tracking of large debris. Different sets of initial conditions were studied, combining distinct tidal, atmospheric and fluvial scenarios, so that the civil protection services were provided with comprehensive information to devise public warning and alert systems and post-event mitigation interventions. For the most severe scenario, the results show a maximum inundation extent of 1.29 km at the Alcântara valley and water depths reaching nearly 10 m across Lisbon's riverfront.
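A hedged sketch of how a severity index of the kind described above could be evaluated from model output: taking the momentum flux per unit mass and width at a point as h·u² (water depth times flow speed squared) and retaining its local maximum over the simulated event. The classification thresholds and time series below are illustrative assumptions, not those of STAV-2D or of the study.

```python
# Sketch: severity at one grid point from a time series of modelled depth h (m)
# and flow speed u (m/s), using the maximum over time of h*u^2 (momentum flux
# per unit mass and width, m^3/s^2). Thresholds are purely illustrative.

def severity_index(h_series, u_series):
    """Local maximum of h*u^2 over the simulated event."""
    return max(h * u * u for h, u in zip(h_series, u_series))

def severity_class(index, thresholds=(1.0, 10.0, 100.0)):
    """Map the index to an ordinal class (assumed thresholds)."""
    return sum(index >= t for t in thresholds)

h = [0.0, 0.8, 2.5, 4.0, 1.2]   # hypothetical depths at a riverfront point
u = [0.0, 1.0, 2.5, 3.0, 0.5]   # hypothetical speeds at the same times

idx = severity_index(h, u)
print(f"max momentum flux per unit mass and width: {idx:.1f} m^3/s^2, "
      f"severity class: {severity_class(idx)}")
```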
Abstract:
Due to their detrimental effects on human health, scientific interest in ultrafine particles (UFP) has been increasing, but the available information is far from comprehensive. Compared to the rest of the population, the elderly are potentially highly susceptible to the effects of outdoor air pollution. Thus, this study aimed to (1) determine the levels of outdoor pollutants in an urban area with emphasis on UFP concentrations and (2) estimate the respective dose rates of exposure for elderly populations. UFP were continuously measured over 3 weeks at 3 sites in north Portugal: 2 urban (U1 and U2) and 1 rural site used as reference (R1). Meteorological parameters and outdoor pollutants including particulate matter (PM10), ozone (O3), nitric oxide (NO), and nitrogen dioxide (NO2) were also measured. The dose rates of inhalation exposure to UFP were estimated for three elderly age categories: 64–70, 71–80, and >81 years. Over the sampling period, levels of PM10, O3 and NO2 were in compliance with European legislation. Mean UFP levels were 1.7 × 10^4 and 1.2 × 10^4 particles/cm3 at U1 and U2, respectively, whereas levels at the rural site were 20–70% lower (mean of 1 × 10^4 particles/cm3). Vehicular traffic and local emissions were the predominant identified sources of UFP at the urban sites. In addition, correlation analysis showed that UFP levels were meteorologically dependent. Exposure dose rates were 1.2- to 1.4-fold higher at the urban sites than at the reference site, with the highest levels noted for adults aged 71–80 years, attributed mainly to higher inhalation rates.
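As a rough illustration of the exposure estimate described above, the sketch below combines a measured UFP number concentration with age-specific inhalation rates to give an inhaled dose rate. The inhalation-rate values and the single-concentration assumption are illustrative placeholders, not the study's exposure parameters.

```python
# Sketch: inhaled UFP dose rate = number concentration x inhalation rate.
# Concentrations in particles/cm3; inhalation rates in m3/h (assumed values).

CM3_PER_M3 = 1e6

# Hypothetical inhalation rates for the three elderly age categories (m3/h)
inhalation_rate = {"64-70": 0.50, "71-80": 0.55, ">81": 0.45}

def dose_rate_particles_per_hour(conc_particles_cm3: float, age_group: str) -> float:
    """Inhaled dose rate in particles/h for a given age category."""
    return conc_particles_cm3 * CM3_PER_M3 * inhalation_rate[age_group]

urban_mean_conc = 1.7e4  # mean UFP level reported above for site U1

for group in inhalation_rate:
    rate = dose_rate_particles_per_hour(urban_mean_conc, group)
    print(f"{group} years: ~{rate:.2e} particles/h at the urban mean concentration")
```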
Abstract:
This reflection aims to contextualize the creative process of the practical component of the Project Work, the object conferring the degree of Master in Theatre, specialization in Performing Arts – Interpretation, which consists of the solo Como polir uma montanha, staged at the Lavadouro Público de Carnide and hosted by Teatro do Silêncio in May 2014. The objective was to create a performance based on cleaning actions, using everyday gestures as the catalyst of the creative process. I sought, in this way, to mirror some day-to-day routines in the abstract field of the performing arts, deconstructing private space and using the body as the primary tool for communicating these actions through movement. The premise for this solo started from cleaning as an action capable of generating transformation in space and time, and also in body and mind. Initially I thought that cleaning was fundamental to the organization of these elements; however, the result led me to understand that cleaning and organization can rest on very different assumptions. I grounded my research by observing everyday cleaning actions in diverse contexts, such as rural life, urban life, and at the level of memory, my own and that of other individuals, of how we process certain behaviours observed since childhood. During the creation process of Como polir uma montanha, several concerns arose that also led me to reflect on the problem of being a solo creator and performer, and on how this question is transversal to the notion of the self before the other. This notion leads me to question the threshold that separates, but also unites, the stage and the audience. For this reason I integrated the audience into the scenic action, through elements that proposed a participatory observation of the performance.
Abstract:
Concerns about metals in urban wastewater treatment plants (WWTPs) are mainly related to their content in discharges to the environment, namely in the final effluent and in the sludge produced. In the near future, more restrictive limits will be imposed on final effluents, due to the recent guidelines of the European Water Framework Directive (EUWFD). Concerning the sludge, at least seven metals (Cd, Cr, Cu, Hg, Ni, Pb and Zn) have been regulated in different countries, four of which were classified by the EUWFD as priority substances and two of which were also classified as hazardous substances. Although WWTPs are not designed to remove metals, the study of metal behaviour in these systems is a crucial issue for developing predictive models that can more effectively support the regulation of pre-treatment requirements and contribute to optimizing the systems to achieve more acceptable metal concentrations in their discharges. Relevant data concerning the occurrence, fate and behaviour of metals in WWTPs have been published in recent decades. However, the information is dispersed and not standardized in terms of parameters for comparing results. This work provides a critical review of this issue through a careful systematization, in tables and graphs, of the results reported in the literature, which allows their comparison and analysis in order to draw conclusions about the state of the art in this field. A summary of the main points of consensus, divergences and constraints found, as well as some recommendations, is presented in the conclusions, aiming to contribute to a more concerted course of future research. © 2015, Islamic Azad University (IAU).
Abstract:
To obtain baseline data on the incidence, duration, clinical characteristics and etiology of acute respiratory infections (ARI), 276 children from deprived families living in Montevideo were followed for 32 months. The target population was divided into two groups for the analysis of the results: children aged less than 12 months and those older than this age. During the follow-up period 1,056 ARI episodes were recorded. ARI incidence was 5.2 episodes per child/year. It was 87% higher in infants than in the older group, as was the duration of the episodes. Most of the illnesses were mild. Tachypnea and retractions were seldom observed, but 12 children were referred to hospital, and 2 infants died. A viral etiology was identified in 15.3% of the episodes. RSV was the predominant agent, producing annual outbreaks. Moderate to heavy colonization of the upper respiratory tract by Streptococcus pneumoniae (32.3%) and Haemophilus sp. (18.9%) was recorded during ARI episodes. This community-based study furnishes original data on ARI in Uruguay and enabled an assessment of the impact of these infections in childhood.
Abstract:
Treatment with dexamethasone (DMS) in the early phases of experimental Schistosoma mansoni infection causes an indirect effect on the cercaria-schistosomulum transformation process. This is observed when naive albino mice are treated with the drug (50 mg/kg, subcutaneously) and infected intraperitoneally 1 hour later with about 500 S. mansoni cercariae (LE strain). An inhibition of host cell adhesion to the larvae, with a simultaneous delay in the cercaria-schistosomulum transformation, is observed. This effect is probably due to a blockade of neutrophil migration to the peritoneal cavity of the mice, through an impairment of the release of chemotactic substances. Such a delay probably favors the killing of S. mansoni larvae, still in the transformation process, by the vertebrate host defenses, such as the complement system.
Abstract:
The prevalence of rubella antibodies was evaluated through a random seroepidemiological survey of 1,400 blood samples from 2-14-year-old children and 329 samples of umbilical cord serum. Rubella IgG antibodies were detected by ELISA, and the sera were collected in 1987, five years before the mass vaccination campaign with the measles-mumps-rubella vaccine carried out in the city of São Paulo in 1992. A significant increase in the prevalence of rubella infection was observed after 6 years of age, and 77% of the individuals aged 15 to 19 years had detectable rubella antibodies. However, the seroprevalence rose to 90.5% (171/189) in cord serum samples from children whose mothers were 20 to 29 years old, and reached 95.6% in newborns of mothers who were 30 to 34 years old, indicating that a large number of women are infected during the childbearing years. This study confirms that rubella infection represents an important public health problem in the city of São Paulo. The data on the seroprevalence of rubella antibodies before the mass vaccination campaign reflect the baseline immunological status of this population before any intervention and should be used to design an adequate vaccination strategy and to assess the seroepidemiological impact of this intervention.
Abstract:
Project work presented as a partial requirement for obtaining the degree of Master in Geographic Information Science and Systems.
Abstract:
Dissertation submitted to obtain a Ph.D. (Doutoramento) degree in Biology at the Instituto de Tecnologia Química e Biológica da Universidade Nova de Lisboa
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is estimated using multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
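To make the linear mixing model discussed above concrete, the following sketch simulates mixed pixels as nonnegative, sum-to-one combinations of endmember signatures plus noise, and recovers the abundance fractions by nonnegativity-constrained least squares with a soft sum-to-one constraint. This is a generic illustrative formulation, not any of the specific estimators cited in the text; the dimensions, noise level and constraint weight are assumptions.

```python
# Sketch of the linear mixing model x = M a + n and abundance recovery by
# nonnegative least squares with a soft sum-to-one constraint (illustrative).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

L, p, n_pix = 50, 3, 200           # bands, endmembers, pixels (assumed)
M = rng.uniform(0.1, 0.9, (L, p))  # synthetic endmember signatures (columns)

a_true = rng.dirichlet(np.ones(p), size=n_pix).T          # >= 0 and sum to 1
X = M @ a_true + 0.005 * rng.standard_normal((L, n_pix))  # mixed pixels + noise

delta = 10.0  # weight of the soft sum-to-one constraint
M_aug = np.vstack([M, delta * np.ones((1, p))])

a_est = np.zeros((p, n_pix))
for j in range(n_pix):
    x_aug = np.append(X[:, j], delta)    # augmented pixel enforces sum ~ 1
    a_est[:, j], _ = nnls(M_aug, x_aug)  # nonnegative least squares

print("mean absolute abundance error:", np.abs(a_est - a_true).mean())
```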
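The pixel purity index idea summarized above can be sketched in a few lines: project every spectral vector onto many random "skewers" and count how often each pixel turns up as an extreme of a projection. The skewer count and the commented usage are illustrative assumptions, not the published PPI implementation (in particular, the MNF preprocessing step is omitted).

```python
# Sketch of the PPI idea: pixels that are extremes of many random projections
# ("skewers") are candidate pure pixels.
import numpy as np

def ppi_scores(X: np.ndarray, n_skewers: int = 1000, seed: int = 0) -> np.ndarray:
    """X: (bands, pixels). Returns, per pixel, how often it was an extreme."""
    rng = np.random.default_rng(seed)
    L, n = X.shape
    scores = np.zeros(n, dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(L)
        proj = skewer @ X              # projection of every pixel onto the skewer
        scores[np.argmin(proj)] += 1   # record both extremes of this skewer
        scores[np.argmax(proj)] += 1
    return scores

# Usage with the synthetic mixture X from the previous sketch (hypothetical):
# purest = np.argsort(ppi_scores(X))[-10:]   # ten highest-scoring pixels
```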
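Finally, a minimal sketch of the iterative orthogonal-projection step described above for the proposed algorithm: at each iteration the data are projected onto a direction orthogonal to the subspace spanned by the endmembers already found, and the pixel at the extreme of that projection becomes the next endmember. This is a simplified reading of the description in the text, not the published VCA implementation; the subspace estimation, dimensionality reduction and noise handling steps are omitted.

```python
# Simplified sketch: repeatedly pick the pixel that is extreme along a
# direction orthogonal to the span of the endmembers already found.
import numpy as np

def extract_endmembers(X: np.ndarray, p: int, seed: int = 0) -> np.ndarray:
    """X: (bands, pixels). Returns indices of p candidate pure pixels."""
    rng = np.random.default_rng(seed)
    L, n = X.shape
    E = np.zeros((L, 0))               # endmember signatures found so far
    indices = []
    for _ in range(p):
        w = rng.standard_normal(L)     # random direction
        if E.shape[1] > 0:             # remove the component lying in span(E)
            Q, _ = np.linalg.qr(E)
            w = w - Q @ (Q.T @ w)
        proj = np.abs(w @ X)
        idx = int(np.argmax(proj))     # extreme of the projection
        indices.append(idx)
        E = np.column_stack([E, X[:, idx]])
    return np.array(indices)

# Usage with the synthetic data X from the mixing-model sketch (hypothetical):
# idx = extract_endmembers(X, p=3); M_hat = X[:, idx]
```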