992 results for Signal variability
Abstract:
Myocardial perfusion-gated-SPECT (MP-gated-SPECT) imaging often shows radiotracer uptake in abdominal organs. This accumulation frequently interferes with qualitative and quantitative assessment of the infero-septal region of the myocardium. The objective of this study was to evaluate the effect of ingesting food with different fat contents on reducing extra-myocardial uptake and improving MP-gated-SPECT image quality. In this study, 150 patients (65 ± 18 years) referred for MP-gated-SPECT underwent a 1-day protocol including imaging after stress (physical or pharmacological) and at rest. All patients gave written informed consent. Patients were subdivided into five groups: GI, GII, GIII, GIV and GV. In the first four groups, patients ate two chocolate bars with different fat contents. Patients in GV – the control group (CG) – had just water. Uptake indices (UI) of myocardium (M)/liver (L) and M/stomach–proximal bowel (S) revealed a lower UI of M/S at rest in all groups. Both stress and rest studies with different food intakes indicate that patients who ate chocolate of any fat content showed a better UI of M/L than the CG. The UI of M/L and M/S of groups obtained under physical stress were clearly superior to those of groups obtained under pharmacological stress. These differences were only significant in patients who ate high-fat chocolate or drank water. The analysis of all stress studies together (GI, GII, GIII and GIV) in comparison with the CG shows higher mean ranks of the UI of M/L for those who ate high-fat chocolate. After pharmacological stress, the mean ranks of the UI of M/L were higher for patients who ate high- and low-fat chocolate. In conclusion, eating food with fat content after radiotracer injection increases the UI of M/L after both stress and rest in MP-gated-SPECT studies. It is, therefore, recommended that patients eat a chocolate bar after radiotracer injection and before image acquisition.
Abstract:
Master's dissertation in Estudos Integrados dos Oceanos (Integrated Studies of the Oceans), 20 March 2014, Universidade dos Açores.
Abstract:
World Congress of Malacology, Universidade dos Açores, Ponta Delgada, 21-28 July.
Abstract:
The present paper reports the amounts and estimated daily mineral intakes of nine elements (Ca, Mg, K, Na, P, Fe, Mn, Cr and Ni) in commercial instant coffees and coffee substitutes (n = 49). Elements were quantified by high-resolution continuum source flame (HR-CS-FAAS) and graphite furnace (HR-CS-GFAAS) atomic absorption spectrometry, while phosphorus was evaluated by a standard vanadomolybdophosphoric acid colorimetric method. Instant coffees and coffee substitutes are rich in K, Mg and P (>100 mg/100 g dw), contain Na, Ca and Fe in moderate amounts (>1 mg/100 g), and hold trace levels of Cr and Ni. Among the samples analysed, plain instant coffees are richer in minerals (p < 0.001), except for Na and Cr. Blends of coffee substitutes (barley, malt, chicory and rye) with coffee (20–66%) present intermediate amounts, while lower quantities are found in substitutes without coffee, particularly in barley. From a nutritional point of view, the results indicate that the mean ingestion of two instant beverages per day (a total of 4 g of instant powder), either with or without coffee, cannot be regarded as an important source of minerals in the human diet, although it provides supplementation of some minerals, particularly Mg and Mn in instant coffees. Additionally, for authentication purposes, the correlations observed between some elements and the coffee percentage in the blends, particularly significant for Mg amounts, provide a potential tool for the estimation of the coffee content of substitute blends.
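The daily-intake estimate above is simple arithmetic: each element's per-100 g dry-weight content is scaled to the 4 g of powder consumed in two beverages per day. A minimal sketch; the per-100 g values below are hypothetical placeholders, not the paper's measurements:

```python
# Illustrative estimate of daily mineral intake from instant coffee powder.
DAILY_POWDER_G = 4.0  # two beverages of 2 g instant powder each

# Hypothetical contents (mg per 100 g dry weight), NOT the paper's data:
mineral_mg_per_100g = {"K": 3500.0, "Mg": 300.0, "P": 350.0}

def daily_intake_mg(content_mg_per_100g: float,
                    powder_g: float = DAILY_POWDER_G) -> float:
    """Scale a per-100 g dry-weight content to the powder actually consumed."""
    return content_mg_per_100g * powder_g / 100.0

intake = {m: daily_intake_mg(c) for m, c in mineral_mg_per_100g.items()}
```

With 4 g of powder, even a mineral-rich content of 300 mg/100 g yields only 12 mg per day, which is why the abstract concludes the beverages are not an important mineral source.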
Abstract:
In this work, a microwave-assisted extraction (MAE) methodology was compared with several conventional extraction methods (Soxhlet, Bligh & Dyer, modified Bligh & Dyer, Folch, modified Folch, Hara & Radin, Roese-Gottlieb) for quantification of the total lipid content of three fish species: horse mackerel (Trachurus trachurus), chub mackerel (Scomber japonicus), and sardine (Sardina pilchardus). The influence of species, extraction method and frozen storage time (from fresh to 9 months of freezing) on total lipid content was analysed in detail. The efficiencies of the MAE, Bligh & Dyer, Folch, modified Folch and Hara & Radin methods were the highest; although they were not statistically different, differences existed in terms of variability, with MAE showing the highest repeatability (CV = 0.034). The Roese-Gottlieb, Soxhlet, and modified Bligh & Dyer methods were very poor in terms of both efficiency and repeatability (CV between 0.13 and 0.18).
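Repeatability here is expressed as the coefficient of variation (CV): the standard deviation of replicate determinations divided by their mean, so a lower CV means a more repeatable method. A minimal sketch with hypothetical replicate values, not the study's data:

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation divided by the mean (dimensionless)."""
    mean = statistics.mean(values)
    return statistics.stdev(values) / mean

# Hypothetical replicate lipid yields (g/100 g) for one sample:
mae_replicates = [10.1, 10.2, 10.15, 10.05]       # tight spread -> low CV
soxhlet_replicates = [8.0, 9.5, 7.2, 10.1]        # wide spread  -> high CV

cv_mae = coefficient_of_variation(mae_replicates)
cv_soxhlet = coefficient_of_variation(soxhlet_replicates)
```

Comparing CVs rather than raw standard deviations lets methods with different mean yields be ranked on a common, dimensionless scale.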
Abstract:
An Electrocardiogram (ECG) monitoring system deals with several challenges related to noise sources. The main goal of this work was the study of adaptive signal processing algorithms for ECG noise reduction applied to real signals. This document presents an adaptive filtering technique based on the Least Mean Squares (LMS) algorithm to remove the artefacts introduced into the ECG signal by electromyographic (EMG) activity and power-line noise. Real noise signals were used in the experiments, mainly to assess the difference in the algorithm's performance between real and simulated noise sources. Very good results were obtained, owing to the noise-removal capability of this technique.
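As a rough illustration of the technique described above, the following sketch implements LMS adaptive noise cancellation on synthetic signals; the filter length, step size and signal model are illustrative assumptions, not the parameters used in the cited work:

```python
import numpy as np

def lms_noise_cancel(primary, reference, n_taps=8, mu=0.01):
    """Adaptive noise cancellation with the LMS algorithm.

    primary   -- corrupted signal (ECG plus noise)
    reference -- signal correlated with the noise but not with the ECG
    Returns the error signal, which converges to the cleaned ECG.
    """
    w = np.zeros(n_taps)
    cleaned = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        y = w @ x                           # current noise estimate
        cleaned[n] = primary[n] - y         # error signal = cleaned output
        w += 2 * mu * cleaned[n] * x        # LMS weight update
    return cleaned

# Synthetic demo (illustrative parameters): 1 Hz "ECG" plus 50 Hz power-line noise.
fs = 500.0
t = np.arange(0.0, 4.0, 1.0 / fs)
ecg = np.sin(2 * np.pi * 1.0 * t)
noise = 0.5 * np.sin(2 * np.pi * 50.0 * t)
cleaned = lms_noise_cancel(ecg + noise, noise)
```

After the weights converge, subtracting the filter's noise estimate leaves mostly the ECG component, since the reference correlates with the interference but not with the heart signal.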
Abstract:
Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) the sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, the sources cannot be statistically independent, which compromises the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performance. We conclude that the accuracy of these methods tends to improve with increasing signature variability, number of endmembers, and signal-to-noise ratio. In any case, there are always endmembers that are incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates in terms of the likelihood of being correctly unmixed is proposed.
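The violated independence assumption can be illustrated numerically: abundance fractions constrained to sum to one are necessarily negatively correlated, so they cannot be independent. A small sketch; the Dirichlet model and the numbers are illustrative assumptions, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical abundance fractions for 3 endmembers over 10000 pixels,
# drawn from a symmetric Dirichlet so every pixel's fractions sum to one.
abundances = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=10000)

# The sum-to-one constraint forces the sources to be mutually dependent:
# for a symmetric Dirichlet with k components the pairwise correlation
# is -1/(k - 1), i.e. -0.5 here, so ICA's independence assumption fails.
corr = np.corrcoef(abundances.T)
```

Any unmixing method that assumes independent sources must therefore cope with this built-in negative correlation, which is the dependence effect the paper quantifies.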
Abstract:
Chapter in Book Proceedings with Peer Review: Second Iberian Conference, IbPRIA 2005, Estoril, Portugal, June 7-9, 2005, Proceedings, Part II
Abstract:
Three commonly consumed and commercially valuable fish species (sardine, chub and horse mackerel) were collected from the Northeast and Eastern Central Atlantic Ocean in Portuguese waters during one year. Mercury, cadmium, lead and arsenic amounts were determined in muscle using graphite furnace and cold vapour atomic absorption spectrometry. Maximum mean levels of mercury (0.1715 ± 0.0857 mg/kg, ww) and arsenic (1.139 ± 0.350 mg/kg, ww) were detected in horse mackerel. The highest mean amounts of cadmium (0.0084 ± 0.0036 mg/kg, ww) and lead (0.0379 ± 0.0303 mg/kg, ww) were determined in chub mackerel and in sardine, respectively. Intra- and inter-specific variability of metal bioaccumulation was statistically assessed, and species and length proved to be the major influencing biometric factors, in particular for mercury and arsenic. Muscle presents metal concentrations below the tolerable limits set by European Commission Regulation and by the Food and Agriculture Organization of the United Nations/World Health Organization (FAO/WHO). However, estimation of non-carcinogenic and carcinogenic health risks by the target hazard quotient and target carcinogenic risk, established by the US Environmental Protection Agency, suggests that these species must be eaten in moderation due to possible hazard and carcinogenic risks derived from arsenic (in all analyzed species) and mercury ingestion (in horse and chub mackerel).
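The target hazard quotient mentioned above follows the standard US EPA formulation. A minimal sketch; the concentration, ingestion rate and reference dose below are illustrative assumptions rather than the paper's values:

```python
def target_hazard_quotient(c_metal_mg_per_kg, fir_g_day, rfd_mg_kg_day,
                           ef_days_yr=365, ed_years=70, bw_kg=70,
                           at_days=365 * 70):
    """US EPA target hazard quotient for non-carcinogenic risk.

    THQ = (EF * ED * FIR * C) / (RfD * BW * AT) * 1e-3
    EF exposure frequency, ED exposure duration, FIR fish ingestion rate,
    C metal concentration, RfD oral reference dose, BW body weight,
    AT averaging time. The 1e-3 factor converts FIR from g/day to kg/day.
    A THQ above 1 suggests a potential non-carcinogenic health risk.
    """
    return (ef_days_yr * ed_years * fir_g_day * c_metal_mg_per_kg * 1e-3) / (
        rfd_mg_kg_day * bw_kg * at_days)

# Hypothetical example: mercury at 0.17 mg/kg ww, 50 g of fish per day,
# and an assumed oral reference dose of 1e-4 mg/kg/day.
thq_hg = target_hazard_quotient(0.17, 50, 1e-4)
```

With EF × ED equal to AT the formula reduces to FIR × C × 10⁻³ / (RfD × BW), so for these assumed inputs the quotient exceeds 1, the pattern behind the paper's moderation advice.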
Abstract:
Electrocardiography (ECG) biometrics is emerging as a viable biometric trait. Recent developments at the sensor level have shown the feasibility of performing signal acquisition at the fingers and hand palms, using one-lead sensor technology and dry electrodes. These new locations lead to ECG signals with lower signal-to-noise ratios that are more prone to noise artifacts; heart rate variability is another major challenge of this biometric trait. In this paper we propose a novel approach to ECG biometrics, with the purpose of reducing the computational complexity and increasing the robustness of the recognition process, enabling the fusion of information across sessions. Our approach is based on clustering, grouping individual heartbeats based on their morphology. We study several methods to perform automatic template selection and account for variations observed in a person's biometric data. This approach allows the identification of different template groupings, taking heart rate variability into account, and the removal of outliers due to noise artifacts. Experimental evaluation on real-world data demonstrates the advantages of our approach.
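The clustering idea can be sketched as follows: fixed-length heartbeat segments are grouped by waveform morphology and each cluster's mean becomes a candidate template, with small clusters discardable as noise outliers. This is an illustrative k-means sketch on synthetic beats, not the authors' exact method:

```python
import numpy as np

def kmeans_templates(beats, k=2, n_iter=20):
    """Cluster fixed-length heartbeats by morphology (Euclidean distance)
    and return one template per cluster: the cluster's mean waveform."""
    # Farthest-point initialisation: deterministic and spreads the seeds out.
    centers = [beats[0]]
    for _ in range(1, k):
        d = np.min([((beats - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(beats[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        # Assign every beat to its nearest template, then refit the means.
        labels = np.argmin(((beats[:, None, :] - centers[None]) ** 2).sum(-1),
                           axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = beats[labels == j].mean(axis=0)
    return centers, labels

# Synthetic demo: two morphology groups of 60-sample beats with mild noise.
rng = np.random.default_rng(1)
base_a = np.sin(np.linspace(0.0, np.pi, 60))
base_b = np.sin(np.linspace(0.0, 2.0 * np.pi, 60))
beats = np.vstack([base_a + 0.05 * rng.standard_normal((30, 60)),
                   base_b + 0.05 * rng.standard_normal((30, 60))])
templates, labels = kmeans_templates(beats, k=2)
```

Matching an incoming heartbeat against a handful of templates instead of every enrolled beat is what reduces the computational cost of recognition.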
Abstract:
The potential of the electrocardiographic (ECG) signal as a biometric trait has been ascertained in the literature over the past decade. The inherent characteristics of the ECG make it an interesting biometric modality, given its universality, intrinsic aliveness detection, continuous availability, and inbuilt hidden nature. These properties enable the development of novel applications, where non-intrusive and continuous authentication are critical factors. Examples include, among others, electronic trading platforms, the gaming industry, and the auto industry, in particular for car sharing programs and fleet management solutions. However, there are still some challenges to overcome in order to make the ECG a widely accepted biometric. In particular, the questions of uniqueness (inter-subject variability) and permanence over time (intra-subject variability) are still largely unanswered. In this paper we focus on the uniqueness question, presenting a preliminary study of our biometric recognition system, testing it on a database encompassing 618 subjects. We also performed tests with subsets of this population. The results reinforce that the ECG is a viable trait for biometrics, having obtained an Equal Error Rate of 9.01% and an Error of Identification of 15.64% for the entire test population.
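The Equal Error Rate reported above is the operating point at which the false acceptance rate and the false rejection rate coincide. A minimal sketch of how it can be computed from genuine and impostor match scores; the score distributions below are synthetic, not the paper's data:

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Equal Error Rate: the threshold where the false acceptance rate
    (impostors scoring at or above it) equals the false rejection rate
    (genuine users scoring below it). Higher score = more similar."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))          # closest crossing point
    return (far[i] + frr[i]) / 2.0

# Synthetic, well-separated score distributions:
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.05, 1000)
impostor = rng.normal(0.3, 0.05, 1000)
eer = equal_error_rate(genuine, impostor)
```

A single EER number summarises the whole FAR/FRR trade-off curve, which is why it is the headline figure in biometric evaluations such as this one.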
Abstract:
Applications involving biosignals, such as electrocardiography (ECG), are becoming more pervasive with the extension towards non-intrusive scenarios, targeting ambulatory healthcare monitoring and emotion assessment, among many others. In this study we introduce a new type of silver/silver chloride (Ag/AgCl) electrode based on a paper substrate and produced using an inkjet printing technique. This type of electrode can broaden the applications of biosignal acquisition technologies in everyday life, given the several advantages, such as cost reduction and easier recycling, that result from the approach explored in our work. We performed a comparison study to assess the quality of this new electrode type, in which ECG data was collected with three types of Ag/AgCl electrodes: i) gelled; ii) dry; iii) paper-based inkjet-printed. We also compared the performance of each electrode when signals were acquired with a professional-grade gold-standard device and with a low-cost platform. Experimental results showed that data acquired using our proposed inkjet-printed electrode is highly correlated with data obtained through conventional electrodes. Moreover, the electrodes are robust to both high-end and low-end data acquisition devices. Copyright © 2014 SCITEPRESS - Science and Technology Publications. All rights reserved.
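The correlation between electrode types can be quantified with the Pearson coefficient on simultaneously acquired traces. A minimal sketch; the traces below are synthetic stand-ins for real ECG recordings:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equally sampled traces."""
    x = (np.asarray(x, dtype=float) - np.mean(x)) / np.std(x)
    y = (np.asarray(y, dtype=float) - np.mean(y)) / np.std(y)
    return float(np.mean(x * y))

# Hypothetical traces: a reference recording and a scaled, offset copy of it,
# mimicking the same ECG seen through two different electrode types.
reference_trace = np.sin(np.linspace(0.0, 6.0 * np.pi, 300))
test_trace = 0.8 * reference_trace + 0.1
```

Because the coefficient is invariant to gain and offset, it isolates waveform similarity from differences in electrode contact impedance or amplifier scaling.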
Abstract:
Debugging electronic circuits is traditionally done with bench equipment directly connected to the circuit under debug. In the digital domain, the difficulties associated with direct physical access to circuit nodes led to the inclusion of resources supporting that activity, first at the printed circuit level, and then at the integrated circuit level. The experience acquired with those solutions led to the emergence of dedicated infrastructures for debugging cores at the system-on-chip level. However, all these developments had a small impact in the analog and mixed-signal domain, where debugging still depends, to a large extent, on direct physical access to circuit nodes. As a consequence, when analog and mixed-signal circuits are integrated as cores inside a system-on-chip, the difficulties associated with debugging increase, which in turn increases the time-to-market and the prototype verification costs. The present work considers the IEEE 1149.4 infrastructure as a means to support the debugging of mixed-signal circuits, namely to access circuit nodes, together with an embedded debug mechanism named the mixed-signal condition detector, necessary for watchpoint/breakpoint and real-time analysis operations. One of the main advantages of the proposed solution is the seamless migration to the system-on-chip level, as access is done through electronic means, thus easing debugging operations at different hierarchical levels.
Abstract:
A new data set of daily gridded observations of precipitation, computed from over 400 stations in Portugal, is used to assess the performance of 12 regional climate models at 25 km resolution, from the ENSEMBLES set, all forced by ERA-40 boundary conditions, for the 1961-2000 period. Standard point error statistics, calculated from grid point and basin aggregated data, and precipitation-related climate indices are used to analyze the performance of the different models in representing the main spatial and temporal features of the regional climate, and its extreme events. As a whole, the ENSEMBLES models are found to achieve a good representation of those features, with good spatial correlations with observations. There is a small but relevant negative bias in precipitation, especially in the driest months, leading to systematic errors in related climate indices. The underprediction of precipitation occurs in most percentiles, although this deficiency is partially corrected at the basin level. Interestingly, some of the conclusions concerning the performance of the models differ from what has been found for the contiguous territory of Spain; in particular, ENSEMBLES models appear too dry over Portugal and too wet over Spain. Finally, models behave quite differently in the simulation of some important aspects of local climate, from the mean climatology to high precipitation regimes in localized mountain ranges and in the subsequent drier regions.
Abstract:
LHC has reported tantalizing hints for a Higgs boson of mass 125 GeV decaying into two photons. We focus on two-Higgs-doublet models, and study the interesting possibility that the heavier scalar H has been seen, with the lightest scalar h having thus far escaped detection. Nonobservation of h at LEP severely constrains the parameter space of two-Higgs-doublet models. We analyze cases where the decay H -> hh is kinematically allowed, and cases where it is not, in the context of type I, type II, lepton-specific, and flipped models.