979 results for joint probability generating function
Abstract:
The absolute numbers of total leukocytes, lymphocytes, T cells, helper/inducer, suppressor/cytotoxic and B cells were decreased in the peripheral blood of patients with chronic Chagas' disease. Since antilymphocyte antibodies were present in only a minority of patients, they probably cannot account for the abnormalities in lymphocyte subsets. Patient neutrophils stimulated with endotoxin-treated autologous plasma showed depressed chemotactic activity, and this seems to be an intrinsic cellular defect rather than plasma inhibition. Random migration of neutrophils was normal. Reduction of nitroblue tetrazolium by endotoxin-stimulated neutrophils was also decreased. These findings further document the presence of immunosuppression in human Chagas' disease. They may be relevant to autoimmunity and to defense against microorganisms and tumor cells, at least in a subset of patients with more severe abnormalities.
Abstract:
Riscos Industriais e Emergentes, 2009, pp. 827-844
Abstract:
A double pi'n/pin heterostructure based on amorphous SiC has a nonlinear spectral gain that is a function of the signal wavelength impinging on its front or back surface. An impulse of configurable length and amplitude is applied to a 390 nm LED that illuminates one of the sensor surfaces, followed by a period without any illumination, after which an input signal with a different wavelength is impinged upon the front surface. Results show that the intensity and duration of the impulse illumination of the surfaces influence the sensor's response, producing different outputs for the same input signal. This paper studies this effect and proposes an application as a short-term light memory.
Abstract:
Recent changes in the operation and planning of power systems have been motivated by the introduction of Distributed Generation (DG) and Demand Response (DR) in the competitive electricity markets' environment, with deep concerns at the efficiency level. In this context, grid operators, market operators, utilities and consumers must adopt strategies and methods to take full advantage of demand response and distributed generation. This requires that all the involved players consider all the market opportunities, such as the energy and reserve components of electricity markets. The present paper proposes a methodology that considers the joint dispatch of demand response and distributed generation in the context of a distribution network operated by a virtual power player. The resources' participation can be performed in both the energy and reserve contexts. This methodology accounts for the probability of actually using the reserve and for the distribution network constraints. Its application is illustrated in this paper using a 32-bus distribution network with 66 DG units and 218 consumers classified into 6 consumer types.
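The core idea of weighting reserve costs by the probability of deployment can be sketched numerically. The following is a minimal, hypothetical toy example (the prices, capacities, and the greedy merit-order rule are all illustrative assumptions, not the paper's actual optimization model): each resource offers energy and reserve, and contracted reserve incurs its deployment cost only with probability `p_use`.

```python
import numpy as np

# Hypothetical resource data (three DG/DR resources); all values are assumed.
energy_price = np.array([30.0, 45.0, 50.0])       # $/MWh for energy
reserve_price = np.array([5.0, 3.0, 8.0])         # $/MW for contracted reserve
reserve_use_price = np.array([35.0, 48.0, 55.0])  # $/MWh if reserve is deployed
capacity = np.array([10.0, 8.0, 6.0])             # MW per resource

energy_demand = 15.0   # MW of energy to supply
reserve_req = 4.0      # MW of reserve to contract
p_use = 0.1            # probability that the reserve is actually used

# Greedy merit-order energy dispatch: cheapest resources first.
energy = np.zeros_like(capacity)
remaining = energy_demand
for i in np.argsort(energy_price):
    energy[i] = min(capacity[i], remaining)
    remaining -= energy[i]

# Contract reserve from leftover capacity, ranked by *expected* reserve cost,
# i.e. capacity payment plus probability-weighted deployment cost.
headroom = capacity - energy
expected_res_cost = reserve_price + p_use * reserve_use_price
reserve = np.zeros_like(capacity)
rem = reserve_req
for i in np.argsort(expected_res_cost):
    reserve[i] = min(headroom[i], rem)
    rem -= reserve[i]

total_cost = (energy * energy_price).sum() + (reserve * reserve_price).sum() \
             + p_use * (reserve * reserve_use_price).sum()
```

A full formulation would replace the greedy loops with a constrained optimization that also enforces the distribution network limits mentioned above.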
Abstract:
An experimental and numerical investigation into the shear strength behaviour of adhesive single lap joints (SLJs) was carried out in order to understand the effect of temperature on the joint strength. The adherend material used for the experimental tests was an aluminium alloy in the form of thin sheets, and the adhesive used was a high-strength, high-temperature epoxy. Tensile tests as a function of temperature were performed, and numerical predictions based on the use of a bilinear cohesive damage model were obtained. It is shown that at temperatures below Tg the lap shear strength of SLJs increased, while at temperatures above Tg a drastic drop in the lap shear strength was observed. Comparison between the experimental and numerical maximum loads representing the strength of the joints shows reasonably good agreement.
Joint effects of salinity and the antidepressant sertraline on the estuarine decapod Carcinus maenas
Abstract:
Concurrent exposure of estuarine organisms to man-made and natural stressors has become a common occurrence. Numerous interactions of multiple stressors causing synergistic or antagonistic effects have been described. However, limited information is available on the combined effects of emerging pharmaceuticals and natural stressors. This study investigated the joint effects of the antidepressant sertraline and salinity on Carcinus maenas. To improve knowledge about interactive effects and potential vulnerability, experiments were performed with organisms from two estuaries with differing histories of exposure to environmental contamination. Biomarkers related to the mode of action of sertraline were employed to assess effects of environmentally realistic concentrations of sertraline at two salinity levels. Synergism and antagonism were identified for biomarkers of cholinergic neurotransmission, energy production, anti-oxidant defences and oxidative damage. Different interactions were found for the two study sites, highlighting the need to account for differences in tolerance of local ecological receptors in risk evaluations.
Abstract:
Depression, the most prevalent psychiatric disorder, has a lifetime risk of 20% and is related to high rates of death among patients. Thus, this study aims to conduct a systematic review of changes in executive functions of adult patients diagnosed with depression. We found 1381 articles; however, only 28 were selected and retrieved. The inclusion criteria were the assessment of executive functions with at least one neuropsychological test, and articles that evaluated primarily adult individuals with depression, without comparison to other psychiatric disorders. Although most of the studies (25 of the 28 analyzed) have shown deficits in some executive subcomponents, these findings are not conclusive because they used different assessment parameters. Moreover, many variables were not controlled, such as the different subtypes of the disorder, the level of severity, comorbidity and the use of drugs. Most studies showed different deficits in executive functions in depressed patients, but further longitudinal studies are needed in order to confirm these findings.
Abstract:
The Chaves basin is a pull-apart tectonic depression implanted on granites, schists, and graywackes, and filled with a sedimentary sequence of variable thickness. It is a rather complex structure, as it includes an intricate network of faults and hydrogeological systems. The topography of the basement of the Chaves basin remains unclear, as no drill hole has ever intersected the bottom of the sediments, and resistivity surveys suffer from severe equivalence issues resulting from the geological setting. In this work, a joint inversion approach for 1D resistivity and gravity data, designed for layered environments, is used to combine the consistent spatial distribution of the gravity data with the depth sensitivity of the resistivity data. A comparison between the results from the inversion of each data set individually and the results from the joint inversion shows that although the joint inversion has more difficulty adjusting to the observed data, it provides more realistic and geologically meaningful models than the ones calculated by the inversion of each data set individually. This work contributes to a better understanding of the Chaves basin, while using the opportunity to further study both the advantages and difficulties of applying joint inversion of gravity and resistivity data.
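The essence of joint inversion, fitting two data sets that share one model, can be illustrated with a minimal linear sketch. The kernels below are arbitrary random matrices standing in for the resistivity and gravity forward operators (real geophysical kernels are nonlinear and problem-specific); the point is only that stacking both systems constrains the shared model vector better than either alone.

```python
import numpy as np

rng = np.random.default_rng(0)
m_true = np.array([2.0, -1.0, 0.5])   # shared "layer" parameters (assumed)

# Stand-in linear forward operators for the two methods (illustrative only).
G_res = rng.normal(size=(8, 3))       # "resistivity" kernel
G_grav = rng.normal(size=(6, 3))      # "gravity" kernel
d_res = G_res @ m_true + 0.01 * rng.normal(size=8)
d_grav = G_grav @ m_true + 0.01 * rng.normal(size=6)

# Joint inversion: stack both systems (with a relative weight w on the
# gravity misfit) and solve one least-squares problem for the shared model.
w = 1.0
A = np.vstack([G_res, w * G_grav])
d = np.concatenate([d_res, w * d_grav])
m_joint, *_ = np.linalg.lstsq(A, d, rcond=None)
```

In practice the weight w balances the two misfit terms, playing the role of the trade-off between the spatially consistent gravity data and the depth-sensitive resistivity data described above.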
Abstract:
Dissertation submitted to obtain the Master's degree in Electrical Engineering, Energy branch
Abstract:
Paper presented at the 18th International Conference of Health Promotion Hospitals & Health Services, "Tackling causes and consequences of inequalities in health: contributions of health services and the HPH network", Manchester, 14-16 April 2010
Abstract:
Recently, simple limiting functions establishing upper and lower bounds on the Mittag-Leffler function were found. This paper builds on those expressions to design an efficient algorithm for the approximate calculation of expressions common in fractional-order control systems. The numerical experiments demonstrate the superior efficiency of the proposed method.
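For context, the function being approximated is the one-parameter Mittag-Leffler function E_alpha(z) = Σ z^k / Γ(αk + 1). The sketch below is a plain truncated-series baseline, adequate for moderate |z|; it is not the bound-based scheme of the paper, only an illustration of what such a scheme approximates more efficiently.

```python
import math

def mittag_leffler(alpha, z, n_terms=50):
    """Truncated power series for E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1).

    A baseline sketch only: for moderate |z| the series converges quickly,
    but for large |z| or real-time control applications a bound-based
    approximation (as in the paper) is far more efficient.
    """
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))

# Sanity checks against known special cases:
#   E_1(z) = exp(z)   and   E_2(z) = cosh(sqrt(z)) for z >= 0.
```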
Abstract:
Dissertation presented to obtain the PhD degree in Biochemistry at the Instituto de Tecnologia Química e Biológica, Universidade Nova de Lisboa
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. 
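The linear mixing model described above can be made concrete with a small synthetic sketch: a pixel spectrum is E @ a, where the columns of E hold endmember signatures and a holds the fractional abundances. With E known, unmixing reduces to least squares, as the constrained least-squares approach mentioned above exploits. The signatures below are synthetic, not real spectral libraries.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_end = 50, 3

# Synthetic endmember matrix (bands x endmembers) and true abundances,
# nonnegative and summing to one (full additivity).
E = np.abs(rng.normal(1.0, 0.3, size=(n_bands, n_end)))
a_true = np.array([0.6, 0.3, 0.1])
y = E @ a_true + 0.001 * rng.normal(size=n_bands)   # observed mixed pixel

# Unconstrained least-squares unmixing. A constrained solver would enforce
# nonnegativity and sum-to-one explicitly; with low noise the plain solution
# already lands close to the simplex.
a_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
```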
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance.
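The dependence argument above is easy to check numerically: abundance fractions live on the simplex and must sum to one, so any pair of them is negatively correlated and cannot be independent. The sketch below draws abundances from a symmetric Dirichlet (an illustrative choice; for Dirichlet(1,1,1) on three parts, the pairwise correlation is exactly -1/2).

```python
import numpy as np

rng = np.random.default_rng(2)

# 100,000 pixels, 3 endmembers: abundance vectors drawn on the simplex.
abund = rng.dirichlet(np.ones(3), size=100_000)

# Sum-to-one forces dependence: if one abundance rises, the others must fall.
corr = np.corrcoef(abund[:, 0], abund[:, 1])[0, 1]
# For symmetric Dirichlet(1,1,1), the theoretical value is -1/(3 - 1) = -0.5,
# which is exactly the kind of source dependence that violates the ICA model.
```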
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
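The pure-pixel assumption behind PPI, N-FINDR, and VCA can be shown in a deliberately degenerate toy case: with only two endmembers, every mixed pixel lies on the segment between the two endmember spectra, so the pair of pixels at maximum mutual distance recovers the endmembers exactly, provided pure pixels exist in the data. This exhaustive search is a hypothetical stand-in for the real simplex-volume machinery of those algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two synthetic endmember spectra (4 bands, illustrative values).
e1 = np.array([1.0, 0.2, 0.8, 0.1])
e2 = np.array([0.1, 0.9, 0.3, 1.0])

# 98 mixed pixels plus one pure pixel of each endmember (fractions 0 and 1).
fractions = np.concatenate([[0.0, 1.0], rng.uniform(0, 1, 98)])
pixels = np.outer(fractions, e1) + np.outer(1 - fractions, e2)

# Exhaustive max-distance pair search (fine for a toy data set; real
# algorithms maximize simplex volume instead of a single pairwise distance).
d = np.linalg.norm(pixels[:, None, :] - pixels[None, :, :], axis=2)
i, j = np.unravel_index(np.argmax(d), d.shape)
endmembers = pixels[[i, j]]   # the two pure pixels, i.e. e2 and e1
```

If no pure pixel were present, the extreme pixels found this way would only approximate the endmembers, which is exactly the weakness of the pure-pixel requisite noted above.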
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
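The dimensionality-reduction step can be sketched with synthetic data: pixels that are linear mixtures of p endmembers (plus small noise) concentrate, after centering, in a (p-1)-dimensional subspace, so a PCA via SVD retains essentially all the signal in far fewer dimensions than the number of bands. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_bands, p = 2000, 100, 4

# Synthetic endmembers and simplex-constrained abundances, per the
# linear mixing model; small additive system noise.
E = rng.uniform(0, 1, size=(n_bands, p))
A = rng.dirichlet(np.ones(p), size=n_pix)
X = A @ E.T + 0.001 * rng.normal(size=(n_pix, n_bands))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Because abundances sum to one, the centered mixtures span only a
# (p-1)-dimensional subspace: the first p-1 components carry almost
# all the variance, and the remaining ones are essentially noise.
explained = (s[:p - 1] ** 2).sum() / (s ** 2).sum()
```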
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.