862 results for image-based dietary records
Abstract:
An efficient representation method for arbitrarily shaped image segments is proposed. The method includes an effective scheme for selecting a wavelet basis to approximate a given image segment, with improved image quality and reduced computational load.
Abstract:
Four rumen-fistulated, multiparous Holstein-Friesian cows in early lactation were offered mixed diets based on rhodes grass hay (Chloris gayana) cv. Callide containing 13, 14, 15 or 16% crude protein (CP) on a dry matter basis, in a 4 x 4 Latin square design. The estimated undegradable protein concentration was similar across diets, while the rumen degradable protein concentration varied. Cows fed the diet containing 13% CP had lower (P = 0.07) milk yields than cows on the other treatments (20.4 vs 21.9, 22.0 and 22.2 L/d for 13, 14, 15 and 16% CP, respectively). A positive linear relationship was found (P = 0.06) between organic matter intake and dietary CP%. There were negative linear relationships between dietary CP% and the digestibilities of dry matter (P = 0.09), organic matter (P = 0.06) and neutral detergent fibre (P = 0.02). Feeding the diet containing 13% CP resulted in significantly lower (P = 0.001) molar proportions (%) of rumen valerate in comparison with the other treatments. The molar proportions of isovalerate differed (P = 0.001) between treatments (0.66, 0.78, 0.89 and 1.04% for 13, 14, 15 and 16% CP, respectively). Dietary protein level had no effect on rates of passage, in situ digestion of rhodes grass hay or the allantoin:creatinine ratio in urine. These data showed that increasing the dietary CP concentration of lactating cows fed diets based on rhodes grass hay increased intake, but responses were not significantly improved at dietary CP concentrations above 14% DM.
Abstract:
An optically addressed read-write sensor based on two stacked p-i-n heterojunctions is analyzed. The device is a two-terminal image sensing structure. Charge packets are injected optically into the p-i-n writer and confined at the illuminated regions, locally changing the electric field profile across the p-i-n reader. An optical scanner is used for charge readout. The design allows continuous readout without the need for pixel-level patterning. The influence of the light-pattern and scanner wavelengths on the readout parameters is analyzed. The optical-to-electrical transfer characteristics show high quantum efficiency, broad spectral response, and reciprocity between light and image signal. A numerical simulation supports the imaging process. A black-and-white image is acquired with a resolution of around 20 μm, showing the potential of these devices for imaging applications.
Abstract:
In recent works, large-area hydrogenated amorphous silicon p-i-n structures with low-conductivity doped layers were proposed as single-element image sensors. The working principle of this type of sensor is based on the modulation, by the local illumination conditions, of the photocurrent generated by a light beam scanning the active area of the device. In order to evaluate the sensor's capabilities, it is necessary to characterize its response time. This work focuses on the transient response of such sensors and on the influence of the carbon content of the doped layers. To evaluate the response time, a set of devices with different percentages of carbon incorporated in the doped layers is analyzed by measuring the scanner-induced photocurrent under different bias conditions.
Abstract:
OBJECTIVE: To assess the impact of consuming ultra-processed foods on the nutritional dietary profile in Brazil. METHODS: Cross-sectional study conducted with data from the module on individual food consumption from the 2008-2009 Pesquisa de Orçamentos Familiares (POF – Brazilian Family Budgets Survey). The sample, representative of the Brazilian population aged 10 years or over, comprised 32,898 individuals. Food consumption was evaluated using two 24-hour food records. The consumed food items were classified into three groups: natural or minimally processed, including culinary preparations with these foods used as a base; processed; and ultra-processed. RESULTS: The average daily energy consumption per capita was 1,866 kcal, with 69.5% provided by natural or minimally processed foods, 9.0% by processed foods and 21.5% by ultra-processed foods. Compared to the fraction of consumption related to natural or minimally processed foods, the nutritional profile of the ultra-processed food fraction showed higher energy density, higher content of fats in general and of saturated and trans fats, higher levels of free sugars, and less fiber, protein, sodium and potassium. Ultra-processed foods presented generally unfavorable characteristics also when compared to processed foods. Greater inclusion of ultra-processed foods in the diet resulted in a general deterioration of the dietary nutritional profile. With the exception of sodium, the nutritional dietary profile indicators of the stratum of the Brazilian population consuming less ultra-processed food were closer to international recommendations for a healthy diet. CONCLUSIONS: The results from this study highlight the damage to health that may arise from the trend observed in Brazil of replacing traditional meals, based on natural or minimally processed foods, with ultra-processed foods. These results also support the recommendation to avoid consuming these kinds of foods.
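As a minimal illustration of the analysis step this abstract describes, the sketch below (Python, with a hypothetical record structure and made-up energy values chosen to roughly mirror the reported shares) classifies the items of a 24-hour food record into the three study groups and computes each group's share of total energy intake.

```python
from collections import defaultdict

# Hypothetical 24-hour record: (food group, energy in kcal) per consumed item
records = [
    ("minimally_processed", 1300.0),
    ("processed", 170.0),
    ("ultra_processed", 400.0),
]

def energy_shares(records):
    """Return each food group's percentage of total energy intake."""
    totals = defaultdict(float)
    for group, kcal in records:
        totals[group] += kcal
    grand_total = sum(totals.values())
    return {group: 100.0 * kcal / grand_total for group, kcal in totals.items()}

print(energy_shares(records))
# ≈ {'minimally_processed': 69.5, 'processed': 9.1, 'ultra_processed': 21.4}
```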
Abstract:
Conventional film-based X-ray imaging systems are being replaced by their digital equivalents. Different approaches are being followed, considering direct or indirect conversion, with the latter technique dominating. The typical indirect-conversion X-ray panel detector uses a phosphor for X-ray conversion coupled to a large-area array of amorphous silicon based optical sensors and switching thin-film transistors (TFTs). The pixel information can then be read out by switching the corresponding line and column transistors, routing the signal to an external amplifier. In this work we follow an alternative approach, where the electrical switching performed by the TFTs is replaced by optical scanning using a low-power laser beam and a sensing/switching PINPIN structure, resulting in a simpler device. The optically active device is a PINPIN array, sharing both front and back electrical contacts, deposited over a glass substrate. During X-ray exposure, each sensing-side photodiode collects photons generated by the scintillator screen (560 nm), charging its internal capacitance. Subsequently, a laser beam (445 nm) scans the switching diodes (back side), retrieving the stored charge sequentially and reconstructing the image. In this paper we present recent work on the optoelectronic characterization of the PINPIN structure to be incorporated in the X-ray image sensor. The results of the optoelectronic characterization of the device and their dependence on the scanning beam parameters are presented and discussed. Preliminary results of line scans are also presented.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ denotes the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm it uses must follow a log(·) law [39] to ensure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
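Under the linear mixing model discussed above, an observed spectral vector r can be written as r = Ea + n, where the columns of E hold the endmember signatures, a is the vector of abundance fractions (non-negative and summing to one), and n is noise. The Python sketch below illustrates the pure-pixel extraction idea the chapter describes: repeatedly project the data onto a direction orthogonal to the subspace spanned by the endmembers already found, and take the extreme of that projection as the next endmember. This is a simplified stand-in, not the reference VCA implementation; in particular, the use of a random orthogonal direction is an assumption made for illustration.

```python
import numpy as np

def extract_endmembers(R, p, seed=0):
    """R: (bands, pixels) data matrix; p: number of endmembers to extract."""
    rng = np.random.default_rng(seed)
    bands, _ = R.shape
    E = np.zeros((bands, p))               # endmember signatures, one per column
    for i in range(p):
        # Random direction made orthogonal to the endmembers found so far
        w = rng.standard_normal(bands)
        if i > 0:
            Q, _ = np.linalg.qr(E[:, :i])  # orthonormal basis of their span
            w -= Q @ (Q.T @ w)
        w /= np.linalg.norm(w)
        # The pixel with the extreme projection becomes the next endmember
        E[:, i] = R[:, np.argmax(np.abs(w @ R))]
    return E

# Toy usage: three random endmembers mixed with abundances on the simplex;
# with near-pure pixels present, the columns of M are approximately recovered.
M = np.random.rand(50, 3)                          # 50 bands, 3 endmembers
A = np.random.dirichlet(np.ones(3), size=2000).T   # abundances sum to one
E = extract_endmembers(M @ A, 3)
```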
Abstract:
Astringency is an organoleptic property of beverages and food products resulting mainly from the interaction of salivary proteins with dietary polyphenols. It is of great importance to consumers, but the only effective way of measuring it involves trained sensory panellists, providing subjective and expensive responses. Concurrent chemical evaluations try to screen food astringency by means of polyphenol and protein precipitation procedures, but these are far from the real human astringency sensation, where not all polyphenol–protein interactions lead to the occurrence of precipitate. Here, a novel chemical approach that mimics protein–polyphenol interactions in the mouth is presented to evaluate astringency. A protein, acting as a salivary protein, is attached to a solid support to which the polyphenol binds (just as happens when drinking wine), with a subsequent colour alteration that is fully independent of the occurrence of precipitate. Employing this simple concept, Bovine Serum Albumin (BSA) was selected as the model salivary protein and used to cover the surface of silica beads. Tannic Acid (TA), employed as the model polyphenol, was allowed to interact with the BSA on the silica support, and its adsorption to the protein was detected by reaction with Fe(III) and subsequent colour development. Quantitative data on TA in the samples were extracted by colorimetric or reflectance studies over the solid materials. The colorimetric analysis was done by taking a regular picture with a digital camera, opening the image file in common software and extracting the colour coordinates from the HSL (Hue, Saturation, Lightness) and RGB (Red, Green, Blue) colour model systems; linear ranges were observed from 10.6 to 106.0 μmol L−1. The reflectance approach was based on the Kubelka–Munk response, showing a linear gain with concentrations from 0.3 to 10.5 μmol L−1. With either approach, semi-quantitative estimation of TA was possible by direct eye comparison. The correlation between the levels of adsorbed TA and the astringency of beverages was tested by using the assay to check the astringency of wines and comparing the results to the responses of sensory panellists. The results of the two methods correlated well. The proposed sensor has significant potential as a robust tool for the quantitative/semi-quantitative evaluation of astringency in wine.
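A minimal sketch of the image-based readout described above, assuming a hypothetical photograph of the sensor material: average RGB values are extracted with Pillow, converted to HSL coordinates, and the Kubelka–Munk function, K/S = (1 − R)² / (2R) for a reflectance R in (0, 1], is applied. Treating lightness as a reflectance proxy is a simplification for illustration, not the paper's calibration procedure.

```python
import colorsys
from PIL import Image

# Hypothetical photograph of the reacted sensor spot
img = Image.open("sensor_spot.jpg").convert("RGB")
pixels = list(img.getdata())
n = len(pixels)
r, g, b = (sum(p[i] for p in pixels) / n for i in range(3))

# HSL coordinates (note: colorsys uses HLS ordering) from normalized RGB
h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)

def kubelka_munk(reflectance):
    """K/S = (1 - R)^2 / (2R); roughly linear in absorber concentration."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

print(h, s, l, kubelka_munk(l))  # lightness used as a crude reflectance proxy
```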
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
University of Magdeburg, Faculty of Electrical Engineering and Information Technology, Dissertation, 2013
Abstract:
RATIONALE AND OBJECTIVES: Dose reduction may compromise patients because of a decrease in image quality. Therefore, the amount of dose saving achievable with new dose-reduction techniques needs to be thoroughly assessed. To avoid repeated studies in one patient, chest computed tomography (CT) scans at different dose levels were performed on corpses, comparing model-based iterative reconstruction (MBIR), as a tool to enhance image quality, with current standard full-dose imaging. MATERIALS AND METHODS: Twenty-five human cadavers were scanned (CT HD750) after contrast medium injection at decreasing dose levels D0-D5, and the data were reconstructed with MBIR. The data at the full-dose level D0 were additionally reconstructed with standard adaptive statistical iterative reconstruction (ASIR), which represented the full-dose baseline reference (FDBR). Two radiologists independently compared image quality (IQ) in 3-mm multiplanar reformations for soft-tissue evaluation of D0-D5 against FDBR (-2, diagnostically inferior; -1, inferior; 0, equal; +1, superior; +2, diagnostically superior). For statistical analysis, the intraclass correlation coefficient (ICC) and the Wilcoxon test were used. RESULTS: Mean CT dose index values (mGy) were as follows: D0/FDBR = 10.1 ± 1.7, D1 = 6.2 ± 2.8, D2 = 5.7 ± 2.7, D3 = 3.5 ± 1.9, D4 = 1.8 ± 1.0, and D5 = 0.9 ± 0.5. Mean IQ ratings were as follows: D0 = +1.8 ± 0.2, D1 = +1.5 ± 0.3, D2 = +1.1 ± 0.3, D3 = +0.7 ± 0.5, D4 = +0.1 ± 0.5, and D5 = -1.2 ± 0.5. All values demonstrated a significant difference from baseline (P < .05), except the mean IQ for D4 (P = .61). The ICC was 0.91. CONCLUSIONS: Compared to ASIR, MBIR allowed a significant dose reduction of 82% without impairment of IQ, resulting in a calculated mean effective dose below 1 mSv.
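As a sketch of the nonparametric comparison the abstract reports (hypothetical ratings, not the study data), the Wilcoxon signed-rank test below checks whether paired image-quality ratings at one reduced dose level differ from the full-dose baseline score of zero.

```python
from scipy.stats import wilcoxon

# Hypothetical per-cadaver IQ ratings relative to baseline (-2 .. +2) at D4
d4_ratings = [0, 1, 0, -1, 0, 1, 0, 0, -1, 1, 0, 0]

# Test whether the median rating differs from 0 (i.e., from baseline IQ);
# zero differences are discarded by the default zero_method.
stat, p = wilcoxon(d4_ratings)
print(f"Wilcoxon statistic={stat}, p={p:.2f}")
```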
Abstract:
The trabecular bone score (TBS) is a gray-level textural metric that can be extracted from the two-dimensional lumbar spine dual-energy X-ray absorptiometry (DXA) image. TBS is related to bone microarchitecture and provides skeletal information that is not captured by the standard bone mineral density (BMD) measurement. Based on experimental variograms of the projected DXA image, TBS has the potential to discern differences between DXA scans that show similar BMD measurements. An elevated TBS value correlates with better skeletal microstructure; a low TBS value correlates with weaker skeletal microstructure. Lumbar spine TBS has been evaluated in cross-sectional and longitudinal studies. The following conclusions are based upon the publications reviewed in this article: 1) TBS gives lower values in postmenopausal women and in men with previous fragility fractures than in their nonfractured counterparts; 2) TBS is complementary to data available from lumbar spine DXA measurements; 3) TBS results are lower in women who have sustained a fragility fracture but in whom DXA does not indicate osteoporosis or even osteopenia; 4) TBS predicts fracture risk as well as lumbar spine BMD measurements do in postmenopausal women; 5) efficacious therapies for osteoporosis differ in the extent to which they influence the TBS; 6) TBS is associated with fracture risk in individuals with conditions related to reduced bone mass or bone quality. Based on these data, lumbar spine TBS holds promise as an emerging technology that could become a valuable clinical tool in the diagnosis of osteoporosis and in fracture risk assessment.
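TBS is described above as a textural metric based on experimental variograms of the DXA image. The generic Python sketch below (purely illustrative; not the proprietary TBS algorithm) computes an experimental variogram of a 2D gray-level image, the mean squared gray-level difference at increasing pixel lags, and summarizes it by the slope of its log-log plot.

```python
import numpy as np

def variogram(image, max_lag):
    """Mean squared gray-level difference at horizontal lags 1..max_lag."""
    image = np.asarray(image, dtype=float)
    v = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        diff = image[:, lag:] - image[:, :-lag]
        v[lag - 1] = np.mean(diff ** 2)
    return v

img = np.random.rand(64, 64)        # stand-in for a projected DXA gray-level image
lags = np.arange(1, 11)
v = variogram(img, 10)
slope = np.polyfit(np.log(lags), np.log(v), 1)[0]  # log-log slope at small lags
print(slope)
```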
Abstract:
1. The importance of dietary lipids for carotenoid-based ornaments has rarely been investigated, although theory predicts that dietary lipids may control the development of these widespread animal signals. Dietary lipids have been suggested to enhance the expression of male carotenoid-based ornaments because they provide carotenoids with a hydrophobic domain that facilitates their absorption and transport. Dietary lipids may also enhance the uptake of tocopherols (vitamin E), which share common absorption and transport routes with carotenoids. Here, we test whether dietary lipids enhance carotenoid availability and male carotenoid-based colorations. We also explore the effects of dietary lipids on plasma tocopherol concentration, which allows disentangling the different pathways by which dietary lipids may affect ornamental expression. 2. Following a two-factorial design, we manipulated dietary access to naturally occurring fatty acids (oleic acid) and carotenoids (lutein and zeaxanthin) and measured the effects on the circulating concentrations of carotenoids (lutein and zeaxanthin) and vitamin E (α- and γ-(β-) tocopherols) and on the ventral, carotenoid-based coloration of male common lizards (Lacerta vivipara). 3. Lutein, but not zeaxanthin, plasma concentrations increased with carotenoid supplementation, which, however, did not affect coloration. Lipid intake negatively affected circulating concentrations of lutein and γ-(β-) tocopherol and led to significantly less orange colorations. A path analysis suggests that a relationship may exist between the observed colour change and the change in plasma concentrations of γ-(β-) tocopherol. 4. Our study shows for the first time that dietary lipids do not enhance but rather reduce the intensity of male carotenoid-based ornaments. Although dietary lipids affected plasma carotenoid concentration, their negative effect on coloration appeared to be linked to lower vitamin E plasma concentrations. These findings suggest that a conflict between dietary lipids and carotenoid and tocopherol uptake may arise if these nutrients are independently obtained from natural diets, and that such a conflict may reinforce signal honesty in carotenoid-based ornaments. They also suggest that, at least in the common lizard, sexual selection with respect to carotenoid-based coloration may select for males with low antioxidant capacity and thus for males of superior health.
Abstract:
Image registration is an important component of image analysis used to align two or more images. In this paper, we present a new framework for image registration based on compression. The basic idea underlying our approach is the conjecture that two images are correctly registered when we can maximally compress one image given the information in the other. The contribution of this paper is twofold. First, we show that the image registration process can be dealt with from the perspective of a compression problem. Second, we demonstrate that the similarity metric introduced by Li et al. performs well in image registration. Two different versions of the similarity metric have been used: the Kolmogorov version, computed using standard real-world compressors, and the Shannon version, calculated from an estimate of the entropy rate of the images.
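The similarity metric of Li et al. that the paper builds on is commonly approximated as the normalized compression distance, NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) is the compressed size under a real-world compressor. A minimal sketch using zlib, with byte buffers standing in for image data:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance approximated with zlib."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical pixel buffers: similar images compress well together
a = bytes(range(256)) * 64
b = bytes(reversed(range(256))) * 64
print(ncd(a, a) < ncd(a, b))   # True: registration seeks to minimize this
```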
Abstract:
One of the key aspects of 3D image registration is the computation of the joint intensity histogram. We propose a new approach to computing this histogram using uniformly distributed random lines to stochastically sample the overlapping volume between two 3D images. The intensity values are captured from the lines at evenly spaced positions, with a different initial random offset for each line. This method provides accurate, robust and fast mutual information-based registration. The interpolation effects are drastically reduced, due to the stochastic nature of the line generation, and the alignment process is also accelerated. The results obtained show that the introduced method performs better than the classic computation of the joint histogram.
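A 2D stand-in (Python) for the sampling scheme described above, with assumed array shapes and bin count: intensity pairs are captured at evenly spaced positions along random lines, each line starting at its own random offset, and the accumulated joint histogram yields a mutual information score.

```python
import numpy as np

def mutual_information(img_a, img_b, n_lines=500, step=1.0, bins=32, seed=1):
    """MI from a joint histogram built by sampling along random lines."""
    rng = np.random.default_rng(seed)
    h, w = img_a.shape
    hist = np.zeros((bins, bins))
    for _ in range(n_lines):
        p = rng.uniform([0, 0], [h - 1, w - 1])     # random point on the line
        theta = rng.uniform(0, np.pi)               # random direction
        d = np.array([np.sin(theta), np.cos(theta)])
        offset = rng.uniform(0, step)               # per-line random offset
        for t in np.arange(offset, max(h, w), step):
            for sgn in (1.0, -1.0):                 # walk both ways from p
                y, x = p + sgn * t * d
                if 0 <= y < h and 0 <= x < w:
                    ia = int(img_a[int(y), int(x)] * (bins - 1))
                    ib = int(img_b[int(y), int(x)] * (bins - 1))
                    hist[ia, ib] += 1
    pxy = hist / hist.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

img = np.random.rand(64, 64)             # intensities assumed in [0, 1)
print(mutual_information(img, img))      # MI is maximal for self-alignment
```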