58 results for Normally Complemented Subgroups


Relevance:

10.00%

Publisher:

Abstract:

Environment monitoring has an important role in occupational exposure assessment. However, due to several factors it is performed with insufficient frequency and normally does not provide the information needed to choose the most adequate safety measures to avoid or control exposure. Identifying all the tasks performed in each workplace and conducting a task-based exposure assessment help to refine the exposure characterization and reduce assessment errors. A task-based assessment can also provide a better evaluation of exposure variability than assessing personal exposures with continuous 8-hour time-weighted average measurements. Health effects related to particle exposure have mainly been investigated with mass-measuring instruments or gravimetric analysis. More recently, however, some studies have suggested that size distribution and particle number concentration may have advantages over particle mass concentration for assessing the health effects of airborne particles. Several exposure assessments were performed in different occupational settings (bakery, grill house, cork industry and horse stable), applying these two resources: task-based exposure assessment and particle number concentration by size. The task-based approach made it possible to identify the tasks with the highest exposure to the smallest measured particles (0.3 μm) in the different occupational settings. The data obtained allow a more concrete and effective risk assessment and the identification of priorities for safety investments.
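
As a minimal illustration of the contrast drawn above, the sketch below compares a single 8-hour time-weighted average with a task-based breakdown; the task names, durations, and particle number concentrations are hypothetical, not the study's measurements.

```python
# Minimal sketch (hypothetical data): a single 8-hour time-weighted
# average versus a task-based breakdown of the same shift.

# Hypothetical task log for one worker: (task, duration in hours,
# mean particle number concentration in particles/cm^3).
tasks = [
    ("dough preparation", 2.0, 9500.0),
    ("oven work",         3.0, 22000.0),
    ("cleaning",          1.0, 41000.0),
    ("packaging",         2.0, 4800.0),
]

SHIFT_HOURS = 8.0

# 8-hour TWA: duration-weighted mean over the whole shift.
twa = sum(duration * conc for _, duration, conc in tasks) / SHIFT_HOURS
print(f"8-hour TWA: {twa:.0f} particles/cm^3")

# Task-based view: rank tasks by concentration to expose the
# priorities for safety investment that the single TWA figure hides.
for name, duration, conc in sorted(tasks, key=lambda t: -t[2]):
    share = duration * conc / (twa * SHIFT_HOURS)
    print(f"{name:18s} {conc:8.0f} particles/cm^3  ({share:.0%} of dose)")
```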

Relevance:

10.00%

Publisher:

Abstract:

To study a flavour model with a non-minimal Higgs sector one must first define the symmetries of the fields; then identify what types of vacua exist and how they may break the symmetries; and finally determine whether the remnant symmetries are compatible with the experimental data. Here we address all these issues in the context of flavour models with any number of Higgs doublets. We stress the importance of analysing the Higgs vacuum expectation values that are pseudo-invariant under the generators of all subgroups. It is shown that the only way of obtaining a physical CKM mixing matrix and, simultaneously, non-degenerate and non-zero quark masses is to require that the vacuum expectation values of the Higgs fields completely break the full flavour group, except possibly for some symmetry belonging to baryon number. The application of this technique to some illustrative examples, such as the flavour groups Δ(27), A4 and S3, is also presented.
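
As background (a standard definition in this literature, not notation taken from the paper itself): a vacuum configuration v of the Higgs doublets is pseudo-invariant under a flavour-group element represented by a unitary matrix S_g if it is preserved up to an overall phase,

```latex
S_g \, v = e^{i\alpha}\, v , \qquad \alpha \in \mathbb{R},
```

with true invariance recovered for α = 0.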

Relevance:

10.00%

Publisher:

Abstract:

A detailed analysis of the fabrics of the chilled margin of a thick dolerite dyke (Foum Zguid dyke, southern Morocco) was performed in order to better understand the development of sub-fabrics during dyke emplacement and cooling. Anisotropy of magnetic susceptibility (AMS) data were complemented with measurements of paramagnetic and ferrimagnetic fabrics (measured with a high-field torque magnetometer), neutron texture analysis and microstructural analyses. The ferrimagnetic and AMS fabrics are similar, indicating that the ferrimagnetic minerals dominate the AMS signal. The paramagnetic fabric differs from the previous ones. Based on the crystallization timing of the different mineralogical phases, the paramagnetic fabric appears related to the upward flow, while the ferrimagnetic fabric rather reflects the late stage of dyke emplacement and cooling stresses.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study was to assess exposure to ultrafine particles from automobile traffic in the urban environment of Lisbon, Portugal, by determining the alveolar deposited surface area of particles on an avenue leading to the town centre during late spring. The study revealed distinct patterns for weekdays and weekends, which could be related to the fluxes of automobile traffic. During a typical week, the ultrafine particle alveolar deposited surface area varied between 35.0 and 89.2 μm²/cm³, which is comparable with levels reported for cities in Germany and the United States. These measurements were complemented by measurements of the electrical mobility diameter (varying from 18.3 to 128.3 nm) and of the particle number concentration, which showed higher values than those previously reported for Madrid and Brisbane. Electron microscopy showed that the collected particles were carbonaceous agglomerates, typical of particles emitted in the exhaust of diesel vehicles. Implications: This study measures the alveolar deposited surface area of particles in the outdoor urban environment of Lisbon, Portugal. Such measurements had not been made before: only particulate matter with aerodynamic diameter below 2.5 μm (PM2.5) and below 10 μm (PM10) had been measured in outdoor environments, and the levels found cannot account for all the observed health effects. Exposure to nano- and ultrafine particles has therefore not been assessed systematically, and several authors consider this a real knowledge gap and call for data such as these, which will allow better and more comprehensive epidemiologic studies. Nanoparticle surface area monitor (NSAM) instruments are recent, and their use has so far been limited to indoor atmospheres; as this study shows, however, the NSAM is also a very powerful tool for outdoor environments. Since most lung diseases are in fact related to deposition in the alveolar region of the lung, the metric used in this study is the ideal one.
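
As a rough sketch of the metric used here, the alveolar deposited surface area can be estimated by weighting the surface area of each bin of the number size distribution with an alveolar deposition fraction. All numbers below are hypothetical placeholders; in practice the deposition fractions come from a lung deposition model (e.g., the ICRP model) and the NSAM reports the metric directly.

```python
import math

# Sketch: estimating alveolar deposited surface area (um^2/cm^3) from
# a number size distribution. The deposition fractions below are
# hypothetical placeholders, not values from a real deposition model.

# (mobility diameter in nm, number concentration in particles/cm^3)
size_distribution = [
    (20.0, 12000.0),
    (50.0, 18000.0),
    (100.0, 6000.0),
]

# Hypothetical alveolar deposition fraction per diameter bin.
alveolar_df = {20.0: 0.30, 50.0: 0.25, 100.0: 0.15}

total = 0.0
for d_nm, n in size_distribution:
    d_um = d_nm / 1000.0                      # nm -> um
    surface = math.pi * d_um ** 2             # surface area of one particle, um^2
    total += n * surface * alveolar_df[d_nm]  # weight by deposition fraction

print(f"Alveolar deposited surface area: {total:.1f} um^2/cm^3")
```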

Relevance:

10.00%

Publisher:

Abstract:

We numerically study a simple fluid composed of particles having a hard-core repulsion complemented by two patchy attractive sites on the particle poles. An appropriate choice of the patch angular width allows for the formation of ring structures which, at low temperatures and low densities, compete with the growth of linear aggregates. The simplicity of the model makes it possible to compare simulation results with theoretical predictions based on the Wertheim perturbation theory, specialized to the case in which ring formation is allowed. Such a comparison offers a unique framework for establishing the quality of the analytic predictions. We find that the Wertheim theory describes the simulation results remarkably well.
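
For context, the sketch below implements only the chain part of first-order Wertheim theory for particles with two bonding sites (one A and one B site, with only A-B bonds allowed); the ring contribution the study adds is beyond this sketch, and the values of the bonding strength ρΔ are arbitrary.

```python
import math

# Sketch of first-order Wertheim theory for two-site particles, chains
# only. rho_delta = rho * Delta lumps the density and the bonding
# volume/energy factor; the sampled values are arbitrary.

def unbonded_site_fraction(rho_delta: float) -> float:
    """Solve the mass-action law rho*Delta*X^2 + X - 1 = 0 for the
    fraction X of unbonded sites."""
    return (-1.0 + math.sqrt(1.0 + 4.0 * rho_delta)) / (2.0 * rho_delta)

for rho_delta in (0.1, 1.0, 10.0, 100.0):
    x = unbonded_site_fraction(rho_delta)
    # The mean chain length is 1/X for this two-site model.
    print(f"rho*Delta = {rho_delta:6.1f}  X = {x:.3f}  "
          f"mean chain length = {1.0 / x:.1f}")
```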

Relevance:

10.00%

Publisher:

Abstract:

Internship report presented to the Escola Superior de Educação de Lisboa to obtain the degree of Master in Teaching of the 1st and 2nd Cycles.

Relevance:

10.00%

Publisher:

Abstract:

Final Master's project submitted to obtain the degree of Master in Civil Engineering, in the area of Buildings.

Relevance:

10.00%

Publisher:

Abstract:

Final Master's project submitted to obtain the degree of Master in Mechanical Engineering.

Relevance:

10.00%

Publisher:

Abstract:

Master's programme in Cardiovascular Diagnostic and Intervention Technology - Specialization: Cardiovascular Ultrasonography.

Relevance:

10.00%

Publisher:

Abstract:

An abstract theory of generalized synchronization of a system of several oscillators coupled by a medium is given. By generalized synchronization we mean the existence of an invariant manifold that allows a reduction in dimension. The case of a concrete system modeling the dynamics of a chemical solution in two containers connected to a third container is studied, from the basic setting to arbitrary perturbations. Conditions under which synchronization occurs are given. Our theoretical results are complemented with a numerical study.
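
As a toy illustration (not the paper's concrete chemical model), the sketch below couples two identical oscillators only through a medium variable z; their difference e = x1 - x2 obeys e'' + K e' + e = 0 and therefore decays onto the invariant manifold x1 = x2, the kind of dimension reduction meant above. The coupling strength K is arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 0.5  # arbitrary coupling strength to the medium

def rhs(t, s):
    """Two identical oscillators coupled only via the medium z."""
    x1, v1, x2, v2, z = s
    dv1 = -x1 + K * (z - v1)          # oscillator 1, damped toward medium
    dv2 = -x2 + K * (z - v2)          # oscillator 2, damped toward medium
    dz = K * ((v1 - z) + (v2 - z))    # medium driven by both oscillators
    return [v1, dv1, v2, dv2, dz]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0, -0.5, 0.3, 0.0],
                dense_output=True, rtol=1e-8, atol=1e-10)

x1, _, x2, _, _ = sol.sol(200.0)
# The difference decays like exp(-K t / 2): by t = 200 it is zero up
# to solver tolerance, i.e. the state has collapsed onto x1 = x2.
print(f"|x1 - x2| at t = 200: {abs(x1 - x2):.2e}")
```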

Relevance:

10.00%

Publisher:

Abstract:

This paper focuses on a PV system linked to the electric grid by power electronic converters, on the identification of the five-parameter model for photovoltaic systems, and on the assessment of the shading effect. Normally, the technical information given for photovoltaic panels is too restricted to identify the five parameters. An undemanding heuristic method is used to find the five parameters, requiring only the open-circuit, maximum-power and short-circuit data. The I-V and P-V curves for monocrystalline, polycrystalline and amorphous photovoltaic systems are computed from the identified parameters and validated by comparison with experimental curves. The I-V and P-V curves under partial shading are also obtained from those parameters. The converter model emulates the association of a DC-DC boost converter with a two-level power inverter in order to follow the performance of a commercial inverter tested on an experimental system.
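
The five-parameter single-diode model referred to here is the implicit relation I = Iph - I0[exp((V + I Rs)/(n Vt)) - 1] - (V + I Rs)/Rsh. The sketch below solves it by Newton iteration for a few voltages; the parameter values are illustrative stand-ins, not the paper's identified values.

```python
import math

# Sketch of the five-parameter single-diode PV model. All parameter
# values are illustrative placeholders for a small 36-cell module.
Iph, I0 = 5.0, 1e-7        # photocurrent and diode saturation current (A)
Rs, Rsh = 0.2, 300.0       # series and shunt resistances (ohm)
n, Ns = 1.3, 36            # diode ideality factor, cells in series
Vt = Ns * 0.0257           # thermal voltage of the string at ~25 C (V)

def current(V: float) -> float:
    """Solve the implicit single-diode equation for I at voltage V."""
    I = Iph  # initial guess near short-circuit current
    for _ in range(100):   # Newton iteration on f(I) = 0
        f = (Iph - I0 * math.expm1((V + I * Rs) / (n * Vt))
             - (V + I * Rs) / Rsh - I)
        df = (-I0 * math.exp((V + I * Rs) / (n * Vt)) * (Rs / (n * Vt))
              - Rs / Rsh - 1.0)
        I_new = I - f / df
        if abs(I_new - I) < 1e-12:
            break
        I = I_new
    return I

# Sample a few points of the I-V and P-V curves.
for V in (0.0, 12.0, 17.0, 19.0, 21.0):
    I = current(V)
    print(f"V = {V:5.1f} V  I = {I:6.3f} A  P = {V * I:7.2f} W")
```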

Relevance:

10.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10].

Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures (a numerical sketch is given below). As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
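
As a concrete illustration of the orthogonal subspace projection estimator just described, the minimal sketch below uses synthetic signatures and abundances (not a real endmember library):

```python
import numpy as np

# Sketch of the linear mixing model and orthogonal subspace projection
# (OSP). Signatures and abundances are synthetic illustrations.

rng = np.random.default_rng(0)

L, p = 50, 3                       # number of bands, number of endmembers
M = rng.uniform(0.0, 1.0, (L, p))  # endmember signature matrix (columns)

a = np.array([0.6, 0.3, 0.1])      # abundances: nonnegative, sum to one
x = M @ a + 0.01 * rng.standard_normal(L)  # observed pixel: mixture + noise

# OSP: project onto the subspace orthogonal to the undesired
# signatures U (all endmembers except the target d = M[:, 0]).
d, U = M[:, 0], M[:, 1:]
P = np.eye(L) - U @ np.linalg.pinv(U)  # projector orthogonal to span(U)

# A matched filter on the projected data estimates the target abundance.
a0_hat = (d @ P @ x) / (d @ P @ d)
print(f"true a0 = {a[0]:.2f}, OSP estimate = {a0_hat:.3f}")
```
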
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, source densities and noise covariance are estimated from the observed data by maximum likelihood; second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, some algorithms, such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45], still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
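
To make the last step concrete, the sketch below fits a MOG to synthetic one-dimensional data, selecting the number of components with BIC as a readily available stand-in for the MDL-based algorithm of reference 55:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch: fit a mixture of Gaussians (MOG) to synthetic data and pick
# the number of components by an information criterion. BIC stands in
# here for the MDL criterion used in the chapter.

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2.0, 0.5, 500),
                       rng.normal(1.5, 1.0, 500)]).reshape(-1, 1)

best_k, best_bic, best_model = None, np.inf, None
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    bic = gm.bic(data)
    if bic < best_bic:
        best_k, best_bic, best_model = k, bic, gm

print(f"selected components: {best_k} (true: 2)")
print("estimated means:", best_model.means_.ravel().round(2))
```
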
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR.

We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources (a sketch is given at the end of this section). This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM) type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with mixtures of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
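
Finally, a minimal sketch of the Dirichlet abundance model mentioned above (a single Dirichlet rather than the chapter's mixture, with arbitrary parameters): the samples are positive and sum to one by construction, and the constant-sum constraint makes the fractions statistically dependent, which is exactly what undermines ICA and IFA.

```python
import numpy as np

# Sketch: Dirichlet-distributed abundance fractions satisfy positivity
# and full additivity by construction, and are necessarily dependent.
# A single Dirichlet is used here; the chapter's model is a mixture.

rng = np.random.default_rng(0)
A = rng.dirichlet(alpha=np.ones(3), size=100000)  # one row per pixel

print("min fraction:", A.min())                   # positivity: >= 0
print("row sums (first 3):", A[:3].sum(axis=1))   # full additivity: = 1
corr = np.corrcoef(A, rowvar=False)
print(f"corr(a1, a2) = {corr[0, 1]:.3f}")         # about -0.5: dependence
```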