986 results for reference value


Relevance: 20.00%

Abstract:

The effects of peel and seed removal, two procedures commonly practiced either at home or by the processing industry, on the physicochemical properties, bioactive compound contents and antioxidant capacity of tomato fruits of four typical Portuguese cultivars (cereja, chucha, rama and redondo) were appraised. Both procedures caused significant nutritional and antioxidant activity losses in fruits of every cultivar. In general, peeling was more detrimental, since it caused a greater decrease in lycopene, β-carotene, ascorbic acid and phenolics contents (averages of 71%, 50%, 14% and 32%, respectively) and significantly lowered the antioxidant capacity of the fruits (8% and 10%, using the DPPH· and β-carotene linoleate model assays, respectively). Although seed removal increased both color and sweetness, some bioactive compounds (11% of carotenoids and 24% of phenolics) as well as antioxidant capacity (5%) were lost. The studied cultivars were differently influenced by these procedures. The fruits most affected by peeling were those of the redondo cultivar (-66% lycopene, -44% β-carotene, -26% ascorbic acid and -38% phenolics). Seed removal, in turn, was more injurious for cereja tomatoes (-10% lycopene, -38% β-carotene, -25% ascorbic acid and -63% phenolics). Compared with the other cultivars, rama fruits were the least affected by the trimming procedures.

Relevance: 20.00%

Abstract:

The mineral content (phosphorus (P), potassium (K), sodium (Na), calcium (Ca), magnesium (Mg), iron (Fe), manganese (Mn), zinc (Zn) and copper (Cu)) of eight ready-to-eat baby leaf vegetables was determined. The samples were subjected to microwave-assisted digestion and the minerals were quantified by High-Resolution Continuum Source Atomic Absorption Spectrometry (HR-CS-AAS) with flame and electrothermal atomisation. The methods were optimised and validated, producing low LOQs, good repeatability and linearity, and recoveries ranging from 91% to 110% for the minerals analysed. Phosphorus was determined by a standard colorimetric method. The accuracy of the method was checked by analysing a certified reference material; the results were in agreement with the certified value. The samples had a high content of potassium and calcium, but the principal mineral was iron. The mineral content was stable during storage, and baby leaf vegetables could represent a good source of minerals in a balanced diet. A linear discriminant analysis was performed to compare the mineral profiles obtained and showed, as expected, that the mineral content was similar between samples from the same family. The linear discriminant analysis was able to discriminate different samples based on their mineral profile.
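As a rough illustration of the discriminant step described above, the sketch below fits a linear discriminant model to a small table of mineral concentrations labelled by botanical family; the families, column order and values are invented placeholders, not the paper's data, and scikit-learn is assumed to be available.

# Minimal sketch of discriminating samples by mineral profile, assuming a table
# of mineral concentrations (e.g., mg/100 g) per sample with a family label.
# All values below are illustrative placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# rows: samples; columns: P, K, Na, Ca, Mg, Fe, Mn, Zn, Cu
X = np.array([
    [35.0, 320.0, 12.0, 80.0, 22.0, 1.1, 0.4, 0.5, 0.06],   # family A sample
    [33.0, 300.0, 11.0, 75.0, 20.0, 1.0, 0.4, 0.4, 0.05],   # family A sample
    [28.0, 410.0, 30.0, 45.0, 18.0, 2.3, 0.6, 0.6, 0.08],   # family B sample
    [27.0, 400.0, 28.0, 44.0, 17.0, 2.1, 0.5, 0.6, 0.07],   # family B sample
])
y = ["FamilyA", "FamilyA", "FamilyB", "FamilyB"]

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(lda.predict(X))    # family predicted from the mineral profile
print(lda.transform(X))  # samples projected onto the discriminant axis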

Relevance: 20.00%

Abstract:

The Pliensbachian/Toarcian boundary (Lower Jurassic) is well represented in the Lusitanian Basin (Portugal), mainly in the Peniche area, where it is recorded by a marl/limestone series. Calcareous nannofossil assemblages are described herein, with the aim of contributing to the Toarcian GSSP definition. Marly samples were collected 3 m below and 7 m above this boundary and analysed for calcareous nannofossils. The main nannofossils observed were Biscutum finchii, B. grande, Calcivascularis jansae, Crepidolithus crassus, C. granulatus, C. impontus, Lotharingius hauffii, L. sigillatus, L. aff. L. velatus, Schizosphaerella spp. and Tubirhabdus patulus. This assemblage indicates that the Pliensbachian/Toarcian boundary at Peniche lies in the upper part of the NJ5b Subzone. Schizosphaerella and Lotharingius dominate the assemblage. The abundant occurrence of C. jansae and the common occurrence of B. grande indicate a strong Tethyan influence.

Relevance: 20.00%

Abstract:

Master's final project submitted to obtain the degree of Master in Civil Engineering, in the area of specialization in Structures.

Relevance: 20.00%

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
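As a concrete, minimal illustration of the linear mixing model and the constrained least-squares unmixing cited above (not the chapter's own implementation), the following sketch estimates the abundance fractions of a single pixel given known endmember signatures, enforcing nonnegativity and the sum-to-one constraint; the signature matrix and pixel are synthetic placeholders and NumPy/SciPy are assumed.

# Minimal sketch of fully constrained least-squares unmixing for one pixel,
# assuming the endmember signatures are known. Data are synthetic placeholders.
import numpy as np
from scipy.optimize import minimize

def unmix_fcls(pixel, M):
    """Least-squares abundances with nonnegativity and sum-to-one constraints."""
    p = M.shape[1]
    objective = lambda a: np.sum((pixel - M @ a) ** 2)
    constraints = {"type": "eq", "fun": lambda a: np.sum(a) - 1.0}
    bounds = [(0.0, 1.0)] * p
    a0 = np.full(p, 1.0 / p)                     # start at the simplex centre
    return minimize(objective, a0, bounds=bounds, constraints=constraints).x

rng = np.random.default_rng(0)
M = rng.uniform(0.1, 0.9, size=(50, 3))               # 50 bands, 3 endmembers
true_a = np.array([0.6, 0.3, 0.1])                    # abundances on the simplex
pixel = M @ true_a + 0.001 * rng.standard_normal(50)  # linear mixture plus noise
print(unmix_fcls(pixel, M))                           # close to [0.6, 0.3, 0.1]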
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses the source dependence of hyperspectral data and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55].
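The dimensionality-reduction step mentioned above can be illustrated with a short sketch: under the linear mixing model, mean-centred observations generated by p endmembers lie in a (p-1)-dimensional subspace, so projecting onto the first p-1 principal components preserves the mixture structure while discarding most of the noise. The data below are simulated placeholders and scikit-learn is assumed.

# Sketch of PCA-based dimensionality reduction prior to unmixing.
# Cube dimensions, signatures and abundances are simulated placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_pixels, n_bands, n_endmembers = 10_000, 224, 3
M = rng.uniform(0.0, 1.0, size=(n_bands, n_endmembers))        # signatures
A = rng.dirichlet(np.ones(n_endmembers), size=n_pixels)        # abundances
Y = A @ M.T + 0.01 * rng.standard_normal((n_pixels, n_bands))  # mixtures + noise

pca = PCA(n_components=n_endmembers - 1)  # a p-endmember simplex spans p-1 dims
Y_reduced = pca.fit_transform(Y)
print(Y_reduced.shape)                       # (10000, 2): pixels in the subspace
print(pca.explained_variance_ratio_.sum())   # almost all variance retained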
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometry-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
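As a toy illustration of why a mixture of Dirichlet densities is a natural model for the abundance fractions sketched above (it is not the chapter's inference algorithm), the code below draws abundances from a two-component Dirichlet mixture with arbitrary parameters and checks that every draw is nonnegative and sums to one, i.e., satisfies the full additivity constraint by construction.

# Toy check that Dirichlet-mixture abundances meet the positivity and
# full-additivity constraints by design. Weights and parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
weights = [0.7, 0.3]                          # mixture weights
alphas = [np.array([8.0, 2.0, 1.0]),          # one Dirichlet mode
          np.array([1.0, 1.0, 6.0])]          # another Dirichlet mode

def sample_abundances(n):
    comp = rng.choice(len(weights), size=n, p=weights)
    return np.stack([rng.dirichlet(alphas[c]) for c in comp])

A = sample_abundances(5)
print(A)
print(A.sum(axis=1))    # every row sums to 1 (full additivity)
print((A >= 0).all())   # and every fraction is nonnegative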

Relevance: 20.00%

Abstract:

Master's degree in Chemical Engineering - branch of Energy Optimization in the Chemical Industry.

Relevance: 20.00%

Abstract:

In order to evaluate the role of the determination of adenosine deaminase (ADA) activity in ascitic fluid for the diagnosis of tuberculosis, 44 patients were studied. Based on biochemical, cytological, histopathological and microbiological tests, the patients were divided into 5 groups: G1 - tuberculous ascites (n = 8); G2 - malignant ascites (n = 13); G3 - spontaneous bacterial peritonitis (n = 6); G4 - pancreatic ascites (n = 2); G5 - miscellaneous ascites (n = 15). ADA concentrations were significantly higher in G1 (133.50 ± 24.74 U/l) compared to the other groups (G2 = 41.85 ± 52.07 U/l; G3 = 10.63 ± 5.87 U/l; G4 = 18.00 ± 7.07 U/l; G5 = 11.23 ± 7.66 U/l). At a cut-off value of >31 U/l, the sensitivity, specificity and positive and negative predictive values were 100%, 92%, 72% and 100%, respectively. ADA concentrations as high as those in tuberculous ascites were only found in two malignant ascites caused by lymphoma. We conclude that ADA determination in ascitic fluid is a useful and reliable screening test for diagnosing tuberculous ascites. ADA values higher than 31 U/l indicate the need for more invasive methods to confirm the diagnosis of tuberculosis.
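For readers who want to see where the quoted indices come from, the short calculation below reconstructs a plausible 2x2 table from the reported figures (8 tuberculous and 36 non-tuberculous patients, with three of the latter above the cut-off); the exact counts are an assumption made for illustration only.

# Worked check of the diagnostic indices at the >31 U/l cut-off.
# Counts are reconstructed from the reported percentages, not taken verbatim.
tp, fn = 8, 0      # tuberculous ascites above / below the cut-off
fp, tn = 3, 33     # non-tuberculous ascites above / below the cut-off

sensitivity = tp / (tp + fn)   # 1.00  -> 100%
specificity = tn / (tn + fp)   # 0.917 -> ~92%
ppv = tp / (tp + fp)           # 0.727 -> ~72%
npv = tn / (tn + fn)           # 1.00  -> 100%
print(sensitivity, specificity, ppv, npv)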

Relevance: 20.00%

Abstract:

With reference to Directive 2006/95/EC, the work developed in the context of the Dissertation/Project/Internship course of the Master's in Instrumentation and Metrology Engineering was carried out at the facilities of IEP (Instituto Electrotécnico Português) and had as its main objective the development of a procedure for evaluating the photobiological effects on the eye and skin caused by continuous-emission sources (LED), hereinafter referred to as the alternative method to the reference method. The two methods, alternative and reference, use a multichannel photo-radiometer and a spectroradiometer, respectively. The procedure developed (the alternative method, in accordance with the EN/IEC 62471 standard) consists of acquiring irradiance values with a photo-radiometer and subsequently determining the radiance values, with which the photobiological effects are evaluated, for LED (Light Emitting Diode) or GLS (General Lighting Service) light sources. A detailed reading of the EN/IEC 62471 standard and research into the concepts, definitions, equipment and methodologies related to the topic constituted the first step of this project. Using the two instruments, an LED light source (a module of 12 LED lamps) was evaluated with respect to the actinic UV and UV-A hazards, the blue-light hazard, and the retinal thermal and skin thermal hazards, allowing a comparative analysis of the results. The alternative method proved to be quite flexible and effective, providing good results in terms of the irradiance and radiance of the photobiological effects in question. Comparison of these results with the exposure limit values given in the EN/IEC 62471 standard showed that the evaluated LED light source does not represent a photobiological hazard to human health and is classified in the "exempt" risk group. Once these objectives had been met, it was considered worthwhile to study another practical case. Accordingly, the radiation of just one of the LEDs making up the source used in the previous tests was evaluated with the spectroradiometer (the reference method), at a distance of 200 mm between the source and the meter. In this case, significant differences were found in the quantities obtained when compared with the normative values. It was concluded that the blue-light photobiological effect falls within the "exempt" group, posing no health hazard. However, the retinal thermal effect showed a considerable increase in radiance, although still within the "exempt" risk group.
Given the results obtained, it can be confirmed that LED lamps are photobiologically safe, considering the low irradiance and radiance values of the photobiological effects studied. It can also be stated that the use of the photo-radiometer as an alternative to the spectroradiometer proves more effective from a practical methodological point of view. This work demonstrates the robustness of these two instruments for evaluating photobiological effects and seeks to establish a guideline for preventing adverse effects on the skin and eyes of people exposed to artificial optical radiation. As for measurement uncertainties, their estimation for the photo-radiometer measurement process was not carried out, owing to the lack of traceability between the measurements stated by the manufacturer in the calibration certificate and the measurements performed by other entities. It is, however, proposed that this estimation be carried out in future work. The uncertainties of the spectroradiometer measurement results were partially estimated. Given the capabilities of the measurement system, the application of the IEC 62478 standard, which forms part of the application of EN/IEC 62471 to the evaluation of the blue-light effect based on the determination of the correlated colour temperature (CCT) of lamps or lamp systems, including luminaires, is proposed as future work. The irradiance and radiance values acquired in the evaluation processes, with both the photo-radiometer and the spectroradiometer, were saved in an Excel file on a CD and appended to this work.
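A hedged sketch of the blue-light-hazard step of such an evaluation is given below: the measured spectral radiance is weighted by the blue-light hazard function B(λ) and integrated over wavelength, and the result is compared with the exempt-group limit. The spectrum, the stand-in for B(λ) and the limit value are placeholders quoted from memory, not values taken from this work or verified against EN/IEC 62471.

# Sketch of a blue-light-hazard check: integrate the B(lambda)-weighted
# spectral radiance and compare with an exempt-group limit. All numbers,
# including the limit, are illustrative placeholders to be verified against
# EN/IEC 62471 and the actual measurement.
import numpy as np

wavelengths = np.arange(400, 501, 10)                  # nm, illustrative grid
L_lambda = np.full(wavelengths.size, 0.05)             # W m^-2 sr^-1 nm^-1 (toy)
B_lambda = np.exp(-((wavelengths - 445) / 30.0) ** 2)  # toy stand-in for B(lambda)

delta = 10.0                                           # nm per spectral bin
L_B = np.sum(L_lambda * B_lambda * delta)              # weighted radiance, W m^-2 sr^-1

EXEMPT_LIMIT = 100.0   # W m^-2 sr^-1, exempt-group blue-light limit (long exposure);
                       # quoted from memory of EN/IEC 62471 -- verify before use.
print(L_B, "exempt" if L_B <= EXEMPT_LIMIT else "further classification needed")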

Relevance: 20.00%

Abstract:

M. tuberculosis-positive cultures were obtained from 228 patients seen in our service, and drug sensitivity assays were carried out from January 1992 to December 1994. A survey of the medical records of these patients showed resistance to one or more drugs in 47 (20.6%); 25 of these (10.9%), who reported previous treatment, were considered to have acquired resistance. Among the antecedents investigated, only previous treatment and alcoholism were independently associated with the occurrence of resistance. The survival of patients with resistant strains was lower than that of patients infected with non-resistant M. tuberculosis. We conclude that, in the present series, M. tuberculosis resistance to tuberculostatic agents was predominantly of the acquired type.

Relevance: 20.00%

Abstract:

This study aims to analyze which determinants predict frailty in general and each frailty domain (physical, psychological, and social), considering the integral conceptual model of frailty, and particularly to examine the contribution of medication to this prediction. A cross-sectional study was designed using a non-probabilistic sample of 252 community-dwelling elderly people from three Portuguese cities. Frailty and determinants of frailty were assessed with the Tilburg Frailty Indicator. The number and type of different daily-consumed medications were also examined. Hierarchical regression analyses were conducted. The mean age of the participants was 79.2 years (±7.3), and most of them were women (75.8%), widowed (55.6%) and with a low educational level (0–4 years: 63.9%). In this study, the determinants explained 46% of the variance of total frailty, and 39.8, 25.3, and 27.7% of physical, psychological, and social frailty, respectively. Age, gender, income, death of a loved one in the past year, lifestyle, satisfaction with the living environment and self-reported comorbidity predicted total frailty, while each frailty domain was associated with a different set of determinants. The number of daily-consumed drugs was independently associated with physical frailty, and the consumption of medication for the cardiovascular system and for the blood and blood-forming organs explained part of the variance of total and physical frailty. The adverse effects of polymedication and its direct link with the level of comorbidity could explain the independent contribution of the number of prescribed drugs to frailty prediction. On the other hand, the findings regarding medication type provide further evidence of the association of frailty with cardiovascular risk. In the present study, a significant part of frailty was predicted, and the different contributions of each determinant to the frailty domains highlight the relevance of the integral model of frailty. The added value of a simple assessment of medication was considerable, and it should be taken into account for the effective identification of frailty.
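A minimal sketch of the hierarchical-regression idea used above is shown below: the determinants are entered first, the medication count is added in a second block, and the gain in explained variance is inspected. The variable names and synthetic data are placeholders (only the sample size and mean age echo the abstract), and pandas/statsmodels are assumed.

# Sketch of a two-block hierarchical regression on synthetic placeholder data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 252
df = pd.DataFrame({
    "age": rng.normal(79.2, 7.3, n),
    "female": rng.integers(0, 2, n),
    "comorbidity": rng.integers(0, 5, n),
    "n_drugs": rng.integers(0, 12, n),
})
df["physical_frailty"] = (0.05 * df["age"] + 0.4 * df["comorbidity"]
                          + 0.2 * df["n_drugs"] + rng.normal(0, 1, n))

block1 = smf.ols("physical_frailty ~ age + female + comorbidity", data=df).fit()
block2 = smf.ols("physical_frailty ~ age + female + comorbidity + n_drugs",
                 data=df).fit()
print(block1.rsquared, block2.rsquared)   # R^2 gain attributable to medication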

Relevance: 20.00%

Abstract:

This paper presents a modified Particle Swarm Optimization (PSO) methodology to solve the problem of energy resources management with high penetration of distributed generation and Electric Vehicles (EVs) with gridable capability (V2G). The objective of the day-ahead scheduling problem in this work is to minimize operation costs, namely energy costs, regarding the management of these resources in the smart grid context. The modifications applied to the PSO aimed to improve its suitability for solving this problem. The proposed Application Specific Modified Particle Swarm Optimization (ASMPSO) includes an intelligent mechanism to adjust velocity limits during the search process, as well as self-parameterization of the PSO parameters, making it more user-independent. It presents better robustness, convergence characteristics and constraint handling than the tested PSO variants. This enables its use for addressing real-world large-scale problems in much shorter times than deterministic methods, providing system operators with adequate decision support and achieving efficient resource scheduling, even when a significant number of alternative scenarios must be considered. The paper includes two realistic case studies with different penetrations of gridable vehicles (1000 and 2000). The proposed methodology is about 2600 times faster than the Mixed-Integer Non-Linear Programming (MINLP) reference technique, reducing the time required from 25 h to 36 s for the scenario with 2000 vehicles, with a difference of about one percent in the objective function cost value.
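To make the velocity-limit idea concrete, the sketch below implements a generic particle swarm optimizer whose velocity bound shrinks as the search progresses; it is not the paper's ASMPSO, and the test function, coefficients and schedules are arbitrary illustrations.

# Generic PSO with a shrinking velocity limit, on a toy cost function.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-10.0, 10.0)):
    rng = np.random.default_rng(4)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros((n_particles, dim))                   # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()

    for it in range(iters):
        w = 0.9 - 0.5 * it / iters                     # inertia weight decays
        vmax = (hi - lo) * (0.5 - 0.4 * it / iters)    # velocity limit shrinks
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
        v = np.clip(v, -vmax, vmax)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy cost: a sphere function standing in for an operation-cost objective.
best_x, best_cost = pso(lambda z: float(np.sum(z ** 2)), dim=5)
print(best_cost)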

Relevance: 20.00%

Abstract:

Dissertation presented to obtain the degree of Doctor in Informatics.

Relevance: 20.00%

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of Universidade Nova de Lisboa to obtain the Master's degree in Electrical Engineering and Computer Science.

Relevance: 20.00%

Abstract:

Toxocariasis is caused by infection of man with larvae of Toxocara canis and Toxocara cati, the common roundworms of dogs and cats. Because the larvae are difficult to detect in tissues, diagnosis is mostly based on serology. Nonspecific reactions are observed, mainly due to cross-reactivity with Ascaris sp. antigens. This investigation aimed at developing and evaluating an indirect antibody competition ELISA (IACE) employing a specific rabbit IgG against Toxocara canis excretory-secretory antigens as the competition antibody, in order to improve the specificity of the indirect ELISA performed for toxocariasis diagnosis. To that end, the rabbit IgG was previously absorbed with Ascaris suum adult antigens. The sensitivity and specificity of the IACE were first evaluated in 28 serum samples from mice experimentally infected with T. canis embryonated eggs. Adopting the cut-off value established in this population before infection, sensitivity and specificity were 100% after 20 days post-inoculation. For the human population, the IACE was evaluated using sera from 440 patients with clinical signs of toxocariasis, and the cut-off value was established with 60 serum samples from apparently healthy individuals. Using the indirect ELISA performed by the Adolfo Lutz Institute as the reference test, sensitivity was 60.2%, specificity was 98% and concordance was 77.3%. The repeatability of the IACE was evaluated by the inter-reaction coefficient of variation (2.4%).