982 results for Narrow-band frequency filters


Relevance:

20.00%

Publisher:

Abstract:

In this paper we propose the use of least-squares-based methods for obtaining digital rational approximations (IIR filters) to fractional-order integrators and differentiators of type s^α, α ∈ R. Adoption of the Padé, Prony and Shanks techniques is suggested. These techniques are usually applied in the signal modeling of deterministic signals. These methods yield suboptimal solutions to the problem and require only the solution of a set of linear equations. The results reveal that the least-squares approach gives similar or superior approximations in comparison with other widely used methods. Their effectiveness is illustrated, both in the time and frequency domains, as well as in the fractional differintegration of some standard time-domain functions.
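The Padé step in particular reduces to solving a small set of linear equations. The sketch below (not the authors' code; the Grünwald–Letnikov/Euler discretization, the half order α = 0.5 and the filter orders are assumptions chosen for illustration) fits an IIR filter whose impulse response matches the discretized s^α:

```python
import numpy as np
from scipy.signal import lfilter

def gl_coeffs(alpha, N):
    """Grünwald-Letnikov coefficients of (1 - z^-1)^alpha: the impulse
    response of the Euler discretization of s^alpha (sampling period 1)."""
    h = np.empty(N)
    h[0] = 1.0
    for k in range(1, N):
        h[k] = h[k - 1] * (k - 1 - alpha) / k
    return h

def pade_iir(h, n, m):
    """Find b (length m+1) and a (length n+1, a[0]=1) such that the IIR
    impulse response matches h[0..n+m] exactly (Pade via linear equations)."""
    # Denominator: the convolution h * a must vanish for k = m+1 .. m+n
    H = np.array([[h[m + 1 + r - c] if m + 1 + r - c >= 0 else 0.0
                   for c in range(1, n + 1)] for r in range(n)])
    a = np.concatenate(([1.0], np.linalg.solve(H, -h[m + 1:m + 1 + n])))
    # Numerator from the first m+1 convolution equations
    b = np.array([sum(a[i] * h[k - i] for i in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return b, a

h = gl_coeffs(0.5, 64)                       # half-order differentiator
b, a = pade_iir(h, 4, 4)                     # [4/4] rational approximation
h_hat = lfilter(b, a, np.r_[1.0, np.zeros(63)])
```

By construction the fitted filter reproduces the first n + m + 1 impulse-response samples; a Prony or Shanks variant would instead fit a longer data record in the least-squares sense.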

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Electrical and Computer Engineering

Relevance:

20.00%

Publisher:

Abstract:

Final Master's project submitted in fulfilment of the requirements for the degree of Master in Mechanical Engineering

Relevance:

20.00%

Publisher:

Abstract:

Proceedings of the Information Technology Applications in Biomedicine, Ioannina - Epirus, Greece, October 26-28, 2006

Relevance:

20.00%

Publisher:

Abstract:

This paper describes the implementation of a distributed model predictive approach for automatic generation control. Performance results are discussed by comparing classical techniques (based on integral control) with model predictive control solutions (centralized and distributed) for different operational scenarios with two interconnected networks. These scenarios include variable load levels (ranging from a small to a large imbalance between generated power and power consumption) and, simultaneously, variable distance between the interconnected networks. For the two networks the paper also examines the impact of load variation in an island context (each network isolated from the other).
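As a minimal illustration of the model predictive idea (this is not the paper's two-network model: the scalar dynamics, weights and horizon below are invented for the example), an unconstrained receding-horizon controller can be written as a batch least-squares problem:

```python
import numpy as np

# Toy discrete-time model (illustrative values, not from the paper):
# state = frequency deviation, input = generation adjustment.
A = np.array([[0.95]])
B = np.array([[0.1]])
Q, R, N = 1.0, 0.01, 20          # state weight, input weight, horizon

def mpc_step(x0):
    """One receding-horizon step: minimize sum_k Q*x_k^2 + R*u_k^2 over
    the horizon via batch least squares, then apply only the first input."""
    # Stacked predictions: x_k = A^k x0 + sum_j A^(k-1-j) B u_j
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = (np.linalg.matrix_power(A, k - j) @ B)[0, 0]
    H = G.T @ (Q * G) + R * np.eye(N)
    u = np.linalg.solve(H, -G.T @ (Q * F @ x0))
    return u[0]

x = np.array([1.0])              # initial frequency deviation
for _ in range(50):
    x = A @ x + B[:, 0] * mpc_step(x)
```

A distributed variant would split the stacked problem between the two areas and exchange predicted tie-line flows at each step.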

Relevance:

20.00%

Publisher:

Abstract:

The aim of this study is to evaluate lighting conditions and speleologists' visual performance using optical filters when exposed to the lighting conditions of cave environments. A cross-sectional study was conducted. Twenty-three speleologists underwent an evaluation of visual function in a clinical lab. An examination of visual acuity, contrast sensitivity, stereoacuity and flashlight illuminance levels was also performed on 16 of the 23 speleologists at two caves deprived of natural lighting. Two organic filters (450 nm and 550 nm) were used to compare visual function with and without filters. The mean age of the speleologists was 40.65 (± 10.93) years. We detected visual impairment in 26.1% of participants, of which refractive error (17.4%) was the major cause. In the cave environment the majority of the speleologists used a head flashlight with a mean illuminance of 451.0 ± 305.7 lux. Binocular visual acuity (BVA) was -0.05 ± 0.15 LogMAR (20/18). BVA for distance without a filter was not statistically different from BVA with the 550 nm or 450 nm filters (p = 0.093). Significantly improved contrast sensitivity was observed with the 450 nm filters at the 6 cpd (p = 0.034) and 18 cpd (p = 0.026) spatial frequencies. There were no signs or symptoms of visual pathologies related to cave exposure. Illuminance levels were adequate for the majority of the activities performed. The enhancement in contrast sensitivity with filters could potentially improve tasks related to the activities performed in the cave.

Relevance:

20.00%

Publisher:

Abstract:

Toluene hydrogenation was studied over catalysts based on Pt supported on large-pore zeolites (HUSY and HBEA) with different metal/acid ratios. The acidity of the zeolites was assessed by pyridine adsorption followed by FTIR, showing only small changes before and after Pt introduction. Metal dispersion was determined by H2–O2 titration and verified by a linear correlation with the intensity of the Pt0–CO band obtained by in situ FTIR. It was also observed that the electronic properties of the Pt0 clusters were similar for the different catalysts. Catalytic tests showed rapid catalyst deactivation, with an activity loss of 80–95% after 60 min of reaction. The turnover frequency of the fresh catalysts depended both on metal dispersion and on the support. For the same support, it varied 1.7-fold (HBEA) and 4.0-fold (HUSY), showing that toluene hydrogenation is structure-sensitive, i.e. hydrogenating activity is not a function of accessible metal alone. This was proposed to be due to the contribution to the overall activity of the hydrogenation of toluene adsorbed on acid sites via hydrogen spillover. Taking into account the role of zeolite acidity, the catalyst series were compared by the activity per total adsorbing sites, which was observed to increase steadily with nPt/(nPt + nA). An increase in the number of accessible Pt atoms leads to an increase in the amount of spilled-over hydrogen available at acid sites, thereby increasing the overall activity. Pt/HBEA catalysts were found to be more active per total adsorbing site than Pt/HUSY, which is proposed to be due to more efficient diffusion of spilled-over hydrogen related to the proximity between Pt clusters and acid sites. The intervention of Lewis acid sites to a greater extent than that measured by pyridine adsorption may also contribute to the higher activity of the Pt/HBEA catalysts.
These results reinforce the importance of model reactions as a closer probe of the relevant catalyst properties under reaction conditions.

Relevance:

20.00%

Publisher:

Abstract:

Somatic mutations in the promoter region of the telomerase reverse transcriptase (TERT) gene, mainly at positions c.-124 and c.-146 bp, are frequent in several human cancers; yet their presence in gastrointestinal stromal tumor (GIST) has not been reported to date. Herein, we searched for the presence and clinicopathological associations of TERT promoter mutations in genomic DNA from 130 bona fide GISTs. We found TERT promoter mutations in 3.8% (5/130) of GISTs. The c.-124C>T mutation was the most common event, present in 2.3% (3/130), and the c.-146C>T mutation in 1.5% (2/130) of GISTs. No significant association was observed between TERT promoter mutations and patients' clinicopathological features. The present study establishes the low frequency (4%) of TERT promoter mutations in GISTs. Further studies are required to confirm our findings and to elucidate the hypothetical biological and clinical impact of TERT promoter mutations in GIST pathogenesis.

Relevance:

20.00%

Publisher:

Abstract:

P and S receiver functions (PRF and SRF) from 19 seismograph stations in the Gibraltar Arc and the Iberian Massif reveal new details of the regional deep structure. Within the high-velocity mantle body below southern Spain the 660-km discontinuity is depressed by at least 20 km. The Ps phase from the 410-km discontinuity is missing at most stations in the Gibraltar Arc. A thin (~50 km) low-S-velocity layer atop the 410-km discontinuity is found under the Atlantic margin. At most stations the S410p phase in the SRFs arrives 1.0–2.5 s earlier than predicted by the IASP91 model, but, for the propagation paths through the upper mantle below southern Spain, the arrivals of S410p are delayed by up to +1.5 s. The early arrivals can be explained by an elevated Vp/Vs ratio in the upper mantle or by a depressed 410-km discontinuity. The positive residuals are indicative of a low Vp/Vs ratio (~1.7 versus ~1.8 in IASP91). Previously, such a low ratio was found in the depleted lithosphere of Precambrian cratons. From simultaneous inversion of the PRFs and SRFs we recognize two types of mantle: 'continental' and 'oceanic'. In the 'continental' upper mantle the S-wave velocity in the high-velocity lid is 4.4–4.5 km s⁻¹, the S-velocity contrast between the lid and the underlying mantle is often near the limit of resolution (0.1 km s⁻¹), and the bottom of the lid is at a depth reaching 90–100 km. In the 'oceanic' domain, the S-wave velocities in the lid and the underlying mantle are typically 4.2–4.3 and ~4.0 km s⁻¹, respectively. The bottom of the lid is at a shallow depth (around 50 km), and at some locations the lid is replaced by a low-S-velocity layer.
The narrow S–N-oriented band of earthquakes at depths from 70 to 120 km in the Alboran Sea is in the 'continental' domain, near the boundary between the 'continental' and 'oceanic' domains, and the intermediate seismicity may be an effect of ongoing destruction of the continental lithosphere.

Relevance:

20.00%

Publisher:

Abstract:

The frequency of viral markers for hepatitis B (HBV) and C (HCV), human immunodeficiency virus-1 (HIV-1) and human T-lymphotropic virus-1 (HTLV-1) was evaluated in 32 Brazilian β-thalassemia multitransfused patients. Additionally, the serum concentrations of ferritin and alanine aminotransferase (ALAT) were determined. The results show a high prevalence of markers of infection by HBV (25.0%) and HCV (46.8%) and a low prevalence of markers for HIV-1 and HTLV-1. No correlations were demonstrated between the presence of the hepatitis markers and the number of units transfused or the serum concentrations of ferritin and ALAT.

Relevance:

20.00%

Publisher:

Abstract:

Final Master's project submitted in fulfilment of the requirements for the degree of Master in Electronics and Telecommunications Engineering

Relevance:

20.00%

Publisher:

Abstract:

Wireless communications have developed greatly in recent years and are now present everywhere, in public and private spaces, being increasingly used for different applications. Their application in the business of sports events, as a means to improve the experience of the fans at the games, is becoming essential, for example for sharing messages and multimedia material on social networks. In stadiums, given the high density of people, wireless networks require very large data capacity, so radio coverage employing many small-sized sectors is unavoidable. In this paper, an antenna is designed to operate in the Wi-Fi 5 GHz frequency band, with a directive radiation pattern suitable for this kind of application. Furthermore, despite its large bandwidth and low losses, this antenna has been developed using low-cost, off-the-shelf materials without sacrificing quality or performance, which is essential for mass production. © 2015 EurAAP.
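The paper does not give the antenna geometry; purely as an illustration of sizing a radiator for the 5 GHz band, the standard rectangular-microstrip-patch design equations can be evaluated for a hypothetical patch (the 5.5 GHz design frequency and the FR-4-like substrate values are assumptions, not taken from the paper):

```python
import math

c = 299_792_458.0        # speed of light, m/s

def patch_dimensions(f, eps_r, h):
    """Textbook rectangular microstrip patch design equations;
    the substrate parameters passed in below are assumptions."""
    W = c / (2 * f) * math.sqrt(2 / (eps_r + 1))             # patch width
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
         ((eps_eff - 0.258) * (W / h + 0.8))                 # fringing extension
    L = c / (2 * f * math.sqrt(eps_eff)) - 2 * dL            # resonant length
    return W, L

W, L = patch_dimensions(5.5e9, 4.4, 1.6e-3)  # 5.5 GHz, FR-4-like substrate
print(f"W = {W * 1000:.1f} mm, L = {L * 1000:.1f} mm")
```

The centimetre-scale result shows why dense multi-sector coverage at 5 GHz is practical with small, cheap printed elements.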

Relevance:

20.00%

Publisher:

Abstract:

Risk Based Inspection (RBI) is a risk methodology used as the basis for prioritizing and managing the efforts of an inspection program, allowing the allocation of resources to provide a higher level of coverage on physical assets with higher risk. The main goal of RBI is to increase equipment availability while improving or maintaining the accepted level of risk. This paper presents the concepts of risk, risk analysis and the RBI methodology, and shows an approach to determine the optimal inspection frequency for physical assets based on the potential risk and mainly on the quantification of the probability of failure. It makes use of some assumptions in a structured decision-making process. The proposed methodology allows an optimization of inspection intervals, determining when the first inspection must be performed as well as the subsequent inspection intervals. A demonstrative example is also presented to illustrate the application of the proposed methodology.
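A minimal sketch of the interval calculation, assuming an exponential failure model (the paper's actual probability-of-failure quantification may differ; the failure rate and accepted probability below are illustrative numbers only):

```python
import math

def next_inspection_interval(failure_rate, p_max):
    """Time at which the cumulative probability of failure reaches the
    accepted level, under an exponential failure model:
    P(t) = 1 - exp(-lambda * t)  =>  t = -ln(1 - p_max) / lambda."""
    return -math.log(1.0 - p_max) / failure_rate

# Illustrative numbers: failure rate 0.02 / year, accepted POF of 5 %
t = next_inspection_interval(0.02, 0.05)
print(f"first inspection after {t:.2f} years")
```

Subsequent intervals follow the same rule, with the failure rate updated from each inspection's findings.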

Relevance:

20.00%

Publisher:

Abstract:

As is widely known, in structural dynamic applications, ranging from structural coupling to model updating, the incompatibility between measured and simulated data is inevitable, due to the problem of coordinate incompleteness. Usually, the experimental data from conventional vibration testing are collected at a few translational degrees of freedom (DOFs), due to forces applied using hammer or shaker exciters, over a limited frequency range. Hence, one can only measure a portion of the receptance matrix: a few columns, related to the forced DOFs, and a few rows, related to the measured DOFs. In contrast, by finite element modeling, one can obtain a full data set, both in terms of DOFs and identified modes. Over the years, several model reduction techniques have been proposed, as well as data expansion ones. However, the latter are significantly fewer and the demand for efficient techniques is still an issue. In this work, one proposes a technique for expanding measured frequency response functions (FRFs) over the entire set of DOFs. This technique is based upon a modified Kidder's method and the principle of reciprocity, and it avoids the need for modal identification, as it uses the measured FRFs directly. In order to illustrate the performance of the proposed technique, a set of simulated experimental translational FRFs is taken as reference to estimate rotational FRFs, including those due to applied moments.
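The paper's expansion works on FRFs via a modified Kidder's method; as a simpler, related illustration of coordinate expansion, a SEREP-style modal expansion can reconstruct unmeasured DOFs from measured ones (the random mode-shape matrix below stands in for a real finite element model, and the response is assumed to lie in the span of the kept modes):

```python
import numpy as np

rng = np.random.default_rng(0)

n_full, n_modes = 12, 3
Phi = rng.standard_normal((n_full, n_modes))   # full FE mode-shape matrix
measured_dofs = [0, 2, 5, 8, 11]               # the few instrumented DOFs

# "Experimental" response, assumed to live in the span of the kept modes
q = rng.standard_normal(n_modes)
x_full_true = Phi @ q
x_meas = x_full_true[measured_dofs]

# SEREP-style expansion: least-squares modal coordinates from the
# measured DOFs, then reconstruction over all DOFs
T = Phi @ np.linalg.pinv(Phi[measured_dofs, :])
x_full_est = T @ x_meas
```

When the partitioned mode-shape matrix has full column rank, the reconstruction is exact for responses in the modal span; Kidder-type dynamic expansion instead uses the stiffness and mass partitions at each frequency.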

Relevance:

20.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
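The orthogonal subspace projection just described can be sketched on synthetic data (the signatures, abundances and noise level below are random stand-ins, not real endmembers):

```python
import numpy as np

rng = np.random.default_rng(1)
bands, n_pix = 50, 200

d = rng.random(bands)                  # target endmember signature
U = rng.random((bands, 3))             # undesired endmember signatures

# Projector onto the orthogonal complement of the undesired signatures
P = np.eye(bands) - U @ np.linalg.pinv(U)

# Pixels: random abundances over [d, U] plus a little sensor noise
a = rng.random((4, n_pix))
X = np.column_stack([d, *U.T]) @ a + 0.001 * rng.standard_normal((bands, n_pix))

scores = d @ P @ X                     # OSP detector output per pixel
```

Because P annihilates the columns of U, the score of each pixel tracks the abundance of the target signature alone.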
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
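The dimensionality-reduction step can be sketched with PCA computed via an SVD of the mean-centred data (the random signatures, Dirichlet abundances and noise level below are illustrative stand-ins for real hyperspectral data):

```python
import numpy as np

rng = np.random.default_rng(2)
bands, n_pix, n_end = 100, 500, 4

# Synthetic linear mixtures: X = M @ A + noise (all values illustrative)
M = rng.random((bands, n_end))                 # endmember signatures
A = rng.dirichlet(np.ones(n_end), n_pix).T     # abundances, sum to one
X = M @ A + 0.001 * rng.standard_normal((bands, n_pix))

# PCA via SVD of the mean-centred data: keep the top k components
Xc = X - X.mean(axis=1, keepdims=True)
U_svd, s, _ = np.linalg.svd(Xc, full_matrices=False)
k = n_end - 1                                  # centred mixtures span n_end - 1 dims
Y = U_svd[:, :k].T @ Xc                        # reduced representation

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
```

Because the abundances sum to one, the centred mixtures lie in an (n_end − 1)-dimensional subspace, so a handful of components captures essentially all the signal energy.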
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.