11 results for Islamic law--Turkey--Sources
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
This paper is an elaboration of the DECA algorithm [1] to blindly unmix hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, like DECA, is tailored to highly mixed data in which geometric approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We therefore resort to a statistical framework, in which the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation maximization (GEM) algorithm we adopt to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
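For reference, the linear mixing model and the abundance constraints mentioned in this abstract can be written as follows (the symbols are illustrative choices of ours; the paper's own notation may differ):

```latex
% Linear mixing model for an observed pixel y with p endmembers.
% Notation is assumed for illustration only.
\[
  \mathbf{y} = \mathbf{M}\boldsymbol{\alpha} + \mathbf{n},
  \qquad
  \alpha_i \ge 0 \quad (i = 1,\dots,p),
  \qquad
  \sum_{i=1}^{p} \alpha_i = 1,
\]
% M = [m_1, ..., m_p] collects the endmember signatures, alpha the
% abundance fractions (modeled as a mixture of Dirichlet densities),
% and n the acquisition noise.
```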
Abstract:
Master's degree in Radiações Aplicadas às Tecnologias da Saúde (Radiation Applied to Health Technologies), specialization area: Proteção Contra as Radiações (Radiation Protection).
Abstract:
Although the adverse health consequences of ingesting food contaminated with aflatoxin B1 (AFB1) are known, relatively few studies are available on the adverse effects of exposure in occupational settings. Taking this into consideration, our study aimed to elucidate the possible effects of occupational exposure to AFB1 in Portuguese swine production facilities, using a specific biomarker to assess exposure to AFB1. In total, 28 workers participated in this study, providing blood samples; a control group (n = 30) was composed of subjects without any type of agricultural activity. Fungal contamination was also studied by conventional methods in air, on surfaces, and in new and used floor coverings. Twenty-one workers (75%) showed detectable levels of AFB1, with values ranging from <1 ng/ml to 8.94 ng/ml and a mean value of 1.91 ± 1.68 ng/ml. In the control group, the AFB1 values were all below 1 ng/ml. Twelve different Aspergillus species were identified. Aspergillus versicolor presented the highest airborne spore counts (3210 CFU/m³) and was also detected at higher values on surfaces (>300 CFU/cm²). The data indicate that exposure to AFB1 occurs in swine barns and that this site serves as a contamination source in an occupational setting.
Abstract:
This investigation sought to describe exhaustively the process of planning, negotiation, implementation and evaluation of the Execution Contract (Contrato de Execução) concluded between the Câmara Municipal de Sintra (Sintra Municipal Council) and the Ministry of Education in 2009. This contract is an instrument provided for in the regulations governing the transfer of competences to municipalities in matters of education, under the regime established by Decreto-Lei n.º 144/2008, de 28 de julho (Decree-Law no. 144/2008 of 28 July). Once the research problem and objectives were defined, the investigation centred on a case study describing and interpreting the process and the actions carried out by the participants between 2008 and 2011. The data obtained from documentary sources and from interviews with the officials responsible for the municipal education portfolio (Pelouro da Educação) and with the directors of the school clusters (Agrupamentos de Escolas) were cross-checked in the light of the literature review and of the contributions of different researchers on this subject. The investigation concluded that the contractualisation process was rather complex given the reality of this municipality, and that the legislation has several gaps with respect to the contractualisation of this transfer of competences, notably because it attempts to generalise something that is not at all generalisable: the field of education, given the complexity of the educational territories concerned and of the actors involved in them.
Abstract:
Clinical and environmental samples from Portugal were screened for the presence of Aspergillus, and the distributions of the species complexes were determined in order to understand how they differ by source. Fifty-seven Aspergillus isolates from clinical samples were collected from 10 health institutions. Six species complexes were detected by internal transcribed spacer sequencing; Fumigati, Flavi, and Nigri were found most frequently (50.9%, 21.0%, and 15.8%, respectively). β-tubulin and calmodulin sequencing resulted in seven cryptic species (A. awamorii, A. brasiliensis, A. fructus, A. lentulus, A. sydowii, A. tubingensis, Emericella echinulata) being identified among the 57 isolates. Thirty-nine isolates of Aspergillus were recovered from beach sand and poultry farms, 31 from swine farms, and 80 from hospital environments, for a total of 189 isolates. Eleven species complexes were found among these 189 isolates, with the Versicolores species complex found most frequently (23.8%). There was a significant association between the different environmental sources and the distribution of the species complexes; the hospital environment had greater variability of species complexes than the other environmental locations. A high prevalence of cryptic species within the Circumdati complex was detected in several environments; from the isolates analyzed, at least four cryptic species were identified, most of them growing at 37 °C. Because Aspergillus species complexes have different susceptibilities to antifungals, knowing the species-complex epidemiology of each setting, as well as identifying cryptic species among the collected clinical isolates, is important. This may allow preventive and corrective measures to be taken, which may result in decreased exposure to these organisms and a better prognosis.
Abstract:
This paper evaluates the capacity to produce concrete with a pre-established performance (in terms of mechanical strength) incorporating recycled concrete aggregates (RCA) from different sources. For this purpose, rejected products from the precasting industry and concrete produced in the laboratory were used. The self-replication capacity was appraised for three strength ranges: 15-25 MPa, 35-45 MPa and 65-75 MPa. The mixes produced tried to replicate the strength of the source concrete (SC) of the RCA. Only total (100%) replacement of coarse natural aggregates (CNA) by coarse recycled concrete aggregates (CRCA) was tested. The results show that, in both mechanical and durability terms, there were no significant differences between aggregates from controlled sources and those from precast rejects for the highest levels of target strength. Furthermore, the performance losses resulting from the incorporation of RCA are substantially reduced when medium- or high-strength SCs are used.
Abstract:
The erosion depth profile of planar targets in balanced and unbalanced magnetron cathodes with cylindrical symmetry is measured along the target radius. The magnetic fields have rotational symmetry. The horizontal and vertical components of the magnetic field B are measured at points above the cathode target at z = 2 × 10⁻³ m. The experimental data reveal that the target erosion depth profile is a function of the angle θ made by B with the horizontal line defined by z = 2 × 10⁻³ m. To explain this dependence, a simplified model of the discharge is developed. Within the scope of the model, the pathway lengths of the secondary electrons in the pre-sheath region are calculated by analytical integration of the Lorentz differential equations. Weighting these lengths by the distribution law of the mean free path of the secondary electrons, we estimate the densities of the ionizing events over the cathode and the relative flux of the sputtered atoms. The expression thus deduced correlates, for the first time, the erosion depth profile of the target with the angle θ. The model shows reasonably good fits to the experimental target erosion depth profiles, confirming that ionization occurs mainly in the pre-sheath zone.
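The pathway-length calculation lends itself to a simple numerical illustration. The sketch below integrates the Lorentz equation for a single secondary electron in uniform fields and accumulates its path length; note that the paper performs this integration analytically, and the field values, step sizes, and names (`pathway_length`) here are placeholders of ours, not taken from the study:

```python
import numpy as np

# Sketch: integrate the Lorentz equation m dv/dt = q(E + v x B) for a
# secondary electron leaving the cathode and accumulate its pathway
# length. Forward-Euler stepping and uniform fields are simplifying
# assumptions for illustration only.

Q_E = -1.602e-19   # electron charge [C]
M_E = 9.109e-31    # electron mass [kg]

def pathway_length(E, B, v0, dt=1e-12, t_max=1e-8):
    """Return the accumulated path length of the electron up to t_max."""
    v = np.array(v0, dtype=float)
    length, t = 0.0, 0.0
    while t < t_max:
        a = (Q_E / M_E) * (E + np.cross(v, B))  # Lorentz acceleration
        v = v + a * dt
        length += np.linalg.norm(v) * dt
        t += dt
    return length

# Placeholder fields: pre-sheath E pointing toward the target plane,
# oblique B setting the angle theta discussed in the abstract.
E = np.array([0.0, 0.0, -1e4])    # V/m (illustrative)
B = np.array([5e-2, 0.0, 2e-2])   # T   (illustrative)
print(pathway_length(E, B, v0=[0.0, 0.0, 1e5]))
```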
Abstract:
The study of transient dynamical phenomena near bifurcation thresholds has attracted the interest of many researchers due to the relevance of bifurcations in different physical and biological systems. In the context of saddle-node bifurcations, where two or more fixed points collide and annihilate each other, it is known that the dynamics can suffer the so-called delayed transition. This phenomenon emerges when the system spends a long time before reaching the remaining stable equilibrium, found after the bifurcation, because of the presence of a saddle remnant in phase space. Some works have tackled this phenomenon analytically, especially in time-continuous dynamical systems, showing that the time delay τ scales according to an inverse square-root power law, τ ~ (μ − μ_c)^(−1/2), as the bifurcation parameter μ is driven further away from its critical value μ_c. In this work, we first characterize this scaling law analytically, using complex variable techniques, for a family of one-dimensional maps called the normal form of the saddle-node bifurcation. We then apply our general analytic results to a single-species ecological model with harvesting given by a unimodal map, characterizing the delayed transition and the scaling law arising due to the harvesting constant. For both systems analyzed, we show that the numerical results are in perfect agreement with the analytical solutions we provide. The procedure presented in this work can be used to characterize the scaling laws of one-dimensional discrete dynamical systems with saddle-node bifurcations.
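The inverse square-root law is easy to check numerically. The sketch below iterates a standard saddle-node normal form map (a textbook choice, not necessarily the exact family used in the paper) and fits the escape time against μ; the threshold and starting point are arbitrary:

```python
import numpy as np

# Sketch: measure the delayed transition for the saddle-node normal
# form map x -> x + x**2 + mu, whose fixed points collide at mu_c = 0.
# For mu slightly above 0 the orbit lingers near the remnant ("ghost")
# of the annihilated fixed points, and the escape time tau should
# scale as (mu - mu_c)**(-1/2).

def escape_time(mu, x0=-0.5, x_escape=1.0, n_max=10**8):
    x = x0
    for n in range(n_max):
        x = x + x * x + mu
        if x > x_escape:          # orbit has cleared the bottleneck
            return n
    return n_max

mus = np.array([1e-2, 1e-3, 1e-4, 1e-5])
taus = np.array([escape_time(mu) for mu in mus])

# Fit log(tau) = a*log(mu) + b; the slope a should be close to -1/2.
slope, _ = np.polyfit(np.log(mus), np.log(taus), 1)
print(taus, slope)
```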
Abstract:
In this article, we present the first study on probabilistic tsunami hazard assessment for the Northeast (NE) Atlantic region related to earthquake sources. The methodology combines probabilistic seismic hazard assessment, tsunami numerical modeling, and statistical approaches. We consider three main tsunamigenic areas, namely the Southwest Iberian Margin, the Gloria, and the Caribbean. For each tsunamigenic zone, we derive the annual recurrence rate for each magnitude range, from Mw 8.0 up to Mw 9.0, at a regular interval, using the Bayesian method, which incorporates seismic information from historical and instrumental catalogs. A numerical code, solving the shallow water equations, is employed to simulate the tsunami propagation and compute nearshore wave heights. The probability of exceeding a specific tsunami hazard level during a given time period is calculated using the Poisson distribution. The results are presented in terms of the probability of exceedance of a given tsunami amplitude for 100- and 500-year return periods. The hazard level varies along the NE Atlantic coast, being maximum along the northern segment of the Morocco Atlantic coast, the southern Portuguese coast, and the Spanish coast of the Gulf of Cadiz. We find that the probability that a maximum wave height exceeds 1 m somewhere in the NE Atlantic region reaches 60 % and 100 % for 100- and 500-year return periods, respectively. These probability values decrease, respectively, to about 15 % and 50 % when considering the exceedance threshold of 5 m for the same return periods of 100 and 500 years.
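The Poisson exceedance computation mentioned above reduces to a one-line formula: if a hazard level is exceeded at an annual rate λ, the probability of at least one exceedance in T years is P = 1 − exp(−λT). A minimal sketch (the rate below is a made-up illustration, not a value from the study):

```python
import math

# Poisson model for the probability of at least one exceedance of a
# tsunami hazard level during a time window of `years` years, given
# the annual exceedance rate `annual_rate`.
def prob_exceedance(annual_rate: float, years: float) -> float:
    return 1.0 - math.exp(-annual_rate * years)

lam = 1.0 / 250.0  # hypothetical: one exceedance per 250 years on average
for T in (100, 500):
    print(T, round(prob_exceedance(lam, T), 3))
```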
Abstract:
The concerns about metals in urban wastewater treatment plants (WWTPs) are mainly related to their content in discharges to the environment, namely in the final effluent and in the sludge produced. In the near future, more restrictive limits will be imposed on final effluents, due to the recent guidelines of the European Water Framework Directive (EUWFD). Concerning the sludge, at least seven metals (Cd, Cr, Cu, Hg, Ni, Pb and Zn) have been regulated in different countries, four of which were classified by the EUWFD as priority substances and two of which were also classified as hazardous substances. Although WWTPs are not designed to remove metals, the study of the behaviour of metals in these systems is a crucial issue for developing predictive models that can more effectively support the regulation of pre-treatment requirements and contribute to optimizing the systems for more acceptable metal concentrations in their discharges. Relevant data have been published in the literature in recent decades concerning the occurrence, fate and behaviour of metals in WWTPs. However, the information is dispersed and not standardized in terms of parameters for comparing results. This work provides a critical review of this issue through a careful systematization, in tables and graphs, of the results reported in the literature, which allows the results to be compared and analysed so as to establish the state of the art in this field. A summary of the main consensus, divergences and constraints found, as well as some recommendations, is presented as conclusions, aiming to contribute to a more concerted course of future research.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7].

Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27].

Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.

The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm it uses must follow a log(·) law [39] to ensure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data.

The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones.

The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
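A minimal sketch of the iterative projection step just described, under simplifying assumptions (no dimensionality reduction or noise estimation, and a plain orthogonal projection rather than the chapter's full procedure; `extract_endmembers` and all other names are ours, not the authors'):

```python
import numpy as np

# Sketch of the pure-pixel extraction loop: Y is an (L x N) matrix of
# N spectral vectors in L bands. At each step the data are projected
# onto a direction orthogonal to the subspace spanned by the
# endmembers found so far, and the pixel with the extreme projection
# is taken as the next endmember.

def extract_endmembers(Y: np.ndarray, p: int) -> np.ndarray:
    L, N = Y.shape
    rng = np.random.default_rng(0)
    E = np.zeros((L, p))                    # endmember signatures
    for k in range(p):
        if k == 0:
            P = np.eye(L)                   # nothing found yet
        else:
            A = E[:, :k]                    # projector onto complement of span(A)
            P = np.eye(L) - A @ np.linalg.pinv(A)
        f = P @ rng.standard_normal(L)      # direction orthogonal to found endmembers
        proj = f @ Y                        # project every pixel onto f
        E[:, k] = Y[:, np.argmax(np.abs(proj))]
    return E

# Toy usage: 3 endmembers mixed with random simplex abundances.
rng = np.random.default_rng(1)
M = rng.random((50, 3))                     # synthetic endmember signatures
S = rng.dirichlet(np.ones(3), size=1000).T  # abundances: >= 0, sum to 1
E = extract_endmembers(M @ S, p=3)
print(E.shape)                              # (50, 3)
```

On this toy data the recovered columns should approximate the columns of M, since the extremes of random simplex abundances are nearly pure pixels; exactly the assumption the chapter attributes to VCA, PPI, and N-FINDR.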