996 results for Maximum Cumulative Ratio
Abstract:
Microencapsulation of lemon oil was undertaken with beta-cyclodextrin using a precipitation method at the five lemon oil to beta-cyclodextrin ratios of 3:97, 6:94, 9:91, 12:88, and 15:85 (w/w) in order to determine the effect of the ratio of lemon oil to beta-cyclodextrin on the inclusion efficiency of beta-cyclodextrin for encapsulating oil volatiles. The retention of lemon oil volatiles reached a maximum at the lemon oil to beta-cyclodextrin ratio of 6:94; however, the maximum inclusion capacity of beta-cyclodextrin and a maximum powder recovery were achieved at the ratio of 12:88, in which the beta-cyclodextrin complex contained 9.68% (w/w) lemon oil. The profile and proportion of selected flavor compounds in the beta-cyclodextrin complex and the starting lemon oil were not significantly different.
Abstract:
This study evaluated the effects of a microcycle of overload training (1st-8th day) on metabolic and hormonal responses in male runners with or without carbohydrate supplementation and investigated the cumulative effects of this period on a session of intermittent high-intensity running and a maximum performance test (9th day). The participants were 24 male runners divided into two groups, receiving 61% of their energy intake as carbohydrate (CHO group) or 54% (control group, CON). Testosterone was higher in the CHO than in the CON group after the overload training (694.0 ± 54.6 vs. CON 610.8 ± 47.9 pmol/l). On the ninth day participants performed 10 x 800 m at mean 3 km velocity. An all-out 1000 m run was performed before and after the 10 x 800 m. Before, during, and after this protocol, the runners received a solution containing CHO or the CON equivalent. Performance did not differ in either group between the first and last 800 m series, but in the all-out 1000 m test the performance decrement was lower for the CHO group (5.3 ± 1.0 vs. 10.6 ± 1.3%). Cortisol concentrations were lower in the CHO group than in the CON group (22.4 ± 0.9 vs. 27.6 ± 1.4 pmol/l), and the IGF1/IGFBP3 ratio increased 12.7% in the CHO group. During recovery, blood glucose concentrations remained higher in the CHO group than in the CON group. It was concluded that CHO supplementation possibly attenuated the suppression of the hypothalamic-pituitary-gonadal axis and resulted in less catabolic stress, thus improving running performance.
Abstract:
Objective. To investigate the contributions of the BisGMA:TEGDMA ratio and filler content to polymerization stress, along with the influence of variables associated with stress development, namely degree of conversion, reaction rate, shrinkage, elastic modulus, and loss tangent, for a series of experimental dental composites. Methods. Twenty formulations with BisGMA:TEGDMA ratios of 3:7, 4:6, 5:5, 6:4, and 7:3 and barium glass filler levels of 40, 50, 60, or 70 wt% were studied. Polymerization stress was determined in a tensilometer by inserting the composite between acrylic rods fixed to clamps of a universal test machine and dividing the maximum load recorded by the rods' cross-sectional area. Conversion and reaction rate were determined by infrared spectroscopy. Shrinkage was measured with a mercury dilatometer. Modulus was obtained by three-point bending. Loss tangent was determined by dynamic nanoindentation. Regression analyses were performed to estimate the effect of organic and inorganic contents on each studied variable, while a stepwise forward regression identified significant variables for polymerization stress. Results. All variables showed dependence on inorganic concentration and monomeric content. The resin matrix showed a stronger influence on polymerization stress, conversion, and reaction rate, whereas filler fraction showed a stronger influence on shrinkage, modulus, and loss tangent. Shrinkage and conversion were significantly related to polymerization stress. Significance. Both the inorganic filler concentration and the monomeric content affect polymerization stress, but the stronger influence of the resin matrix suggests that it may be possible to reduce stress by modifying resin composition without sacrificing filler content. The main challenge is to develop formulations with low shrinkage without sacrificing degree of conversion. (C) 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
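The stress figure described above is simply the peak load divided by the rod cross-section. A minimal Python sketch of that arithmetic (the load and rod diameter below are hypothetical, not values from the study):

import math

def polymerization_stress(max_load_n, rod_diameter_mm):
    # Nominal stress: peak load divided by the rod's cross-sectional area (N/mm^2 = MPa).
    area_mm2 = math.pi * (rod_diameter_mm / 2.0) ** 2
    return max_load_n / area_mm2

print(round(polymerization_stress(max_load_n=25.0, rod_diameter_mm=5.0), 2))  # ~1.27 MPa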
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data.
The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparable to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
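A minimal Python sketch of the pure-pixel extraction idea described above: iteratively project the data onto a direction orthogonal to the span of the endmembers already found and keep the pixel giving the extreme projection. This is a simplified illustration of the principle, not the authors' exact VCA implementation, and it assumes the data have already been reduced to the signal subspace:

import numpy as np

def extract_endmembers(Y, p, seed=0):
    # Y: (d x N) spectral vectors in a d-dimensional signal subspace; p: number of endmembers.
    rng = np.random.default_rng(seed)
    d = Y.shape[0]
    E = np.zeros((d, p))
    for i in range(p):
        if i == 0:
            P = np.eye(d)
        else:
            A = E[:, :i]
            P = np.eye(d) - A @ np.linalg.pinv(A)   # projector onto the complement of span(A)
        w = P @ rng.standard_normal(d)              # random direction orthogonal to found endmembers
        idx = int(np.argmax(np.abs(w @ Y)))         # extreme of the projected data
        E[:, i] = Y[:, idx]                         # that pixel is taken as the next endmember
    return E

# Synthetic check: 3 endmembers, abundances on the simplex, at least one pure pixel per endmember.
rng = np.random.default_rng(1)
M = rng.random((5, 3))                              # true endmember signatures (5 bands)
a = rng.dirichlet(np.ones(3), size=1000).T          # abundance fractions summing to one
a[:, :3] = np.eye(3)                                # enforce the pure-pixel assumption
E = extract_endmembers(M @ a, p=3)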
Abstract:
An energy harvesting system requires an energy storage device to store the energy retrieved from the surrounding environment. This can be either a rechargeable battery or a supercapacitor. Due to their limited lifetime, rechargeable batteries need to be periodically replaced. A supercapacitor, which ideally has a limitless number of charge/discharge cycles, can therefore be used to store the energy; however, a voltage regulator is required to obtain a constant output voltage as the supercapacitor discharges. This can be implemented by a switched-capacitor (SC) DC-DC converter, which allows complete integration in CMOS technology, although it requires several topologies in order to achieve high efficiency. This thesis presents a complete analysis of four different topologies in order to derive expressions that allow the converter to be designed and the optimum input voltage range of each topology to be determined. To better understand the parasitic effects, the implementation of the capacitors and the non-ideal behaviour of the switches in 130 nm technology were carefully studied. With these two analyses, a multi-ratio SC DC-DC converter was designed with an output power of 2 mW, a maximum efficiency of 77%, and a maximum steady-state output ripple of 23 mV, for an input voltage swing of 2.3 V down to 0.85 V. The proposed converter has four operation states that perform the conversion ratios of 1/2, 2/3, 1/1, and 3/2, and its clock frequency is automatically adjusted to produce a stable output voltage of 1 V. These features are implemented through two distinct controller circuits that use asynchronous state machines (ASM) to dynamically adjust the clock frequency and to select the active state of the converter. All the theoretical expressions, as well as the behaviour of the whole system, were verified using electrical simulations.
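The multi-ratio idea described above can be illustrated with a simple rule that, for a falling supercapacitor voltage, picks the smallest conversion ratio whose ideal output still covers the 1 V target; smaller ratios transfer less charge per cycle and are more efficient when the input is high. This is an illustrative Python sketch, not the controller described in the thesis, and the regulation margin is a made-up number:

# Conversion ratios available to the converter, from the smallest to the largest.
RATIOS = (1/2, 2/3, 1/1, 3/2)

def select_ratio(v_in, v_out=1.0, margin=0.02):
    # Smallest ratio whose ideal output (ratio * v_in) still exceeds the target plus a margin.
    for k in RATIOS:
        if k * v_in >= v_out + margin:
            return k
    return RATIOS[-1]   # deeply discharged supercapacitor: use the largest ratio available

for v in (2.3, 1.9, 1.4, 0.9):                      # supercapacitor voltage as it discharges
    print(f"Vin = {v:.2f} V -> ratio {select_ratio(v):.3f}")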
Abstract:
Thermoplastic elastomers based on the triblock copolymer styrene-butadiene-styrene (SBS) with different butadiene/styrene ratios, block structures, and carbon nanotube (CNT) contents were submitted to accelerated weathering in a Xenontest setup in order to evaluate their stability to UV ageing. It was concluded that ageing mainly depends on the butadiene/styrene ratio and block structure, with radial block structures exhibiting faster ageing than linear block structures. Moreover, the presence of carbon nanotubes in the SBS copolymer slows down its ageing. The evaluation of the influence of ageing on the mechanical and electrical properties demonstrates that mechanical degradation is highest for the C401 sample, the SBS sample with the largest butadiene content and a radial block structure. On the other hand, a copolymer derived from SBS, the styrene-ethylene/butadiene-styrene (SEBS) sample, retains a maximum deformation of ~1000% after 80 h of accelerated ageing. The hydrophobicity of the samples decreases with increasing ageing time, the effect being larger for the samples with higher butadiene content. It is also verified that cytotoxicity increases with increasing UV ageing, with the exception of SEBS, which remains non-cytotoxic up to 80 h of accelerated ageing time, demonstrating its potential for applications involving exposure to environmental conditions.
Abstract:
A low digit ratio (2D:4D) and low 2D:4D in the right compared with the left hand (right-left 2D:4D) are thought to be determined by high in utero concentrations of testosterone, and are related to "masculine" traits such as aggression and performance in sports like running and rugby. Low right-left 2D:4D is also related to sensitivity to testosterone as measured by the number of cytosine-adenine-guanine triplet repeats in exon 1 of the androgen receptor gene. Here we show that low right-left 2D:4D is associated with high maximal oxygen uptake (VO2(max)), high velocity at VO2(max), and high maximum lactate concentration in a sample of teenage boys. We suggest that low right-left 2D:4D is linked to performance in some sports because it is a proxy for high sensitivity to prenatal, and perhaps also circulating, testosterone and for high VO2(max).
Abstract:
The most frequently used method to demonstrate testosterone abuse is the determination of the testosterone and epitestosterone concentration ratio (T/E ratio) in urine. Nevertheless, it is known that factors other than testosterone administration may increase the T/E ratio. In recent years, the determination of the carbon isotope ratio has proven to be the most promising method to help discriminate between naturally elevated T/E ratios and those reflecting T use. In this paper, an excretion study following oral administration of 40 mg testosterone undecanoate initially and 13 h later is presented. Four testosterone metabolites (androsterone, etiocholanolone, 5 alpha-androstanediol, and 5 beta-androstanediol) together with an endogenous reference (5 beta-pregnanediol) were extracted from the urines, and the 13C/12C ratio of each compound was analyzed by gas chromatography-combustion-isotope ratio mass spectrometry. The results show similar maximum delta(13)C-value variations (the parts per thousand difference of the 13C/12C ratio from that of the isotope ratio standard) for the T metabolites and concomitant changes of the T/E ratios after administration of the first and the second dose of T. Whereas the T/E ratios as well as the androsterone, etiocholanolone, and 5 alpha-androstanediol delta(13)C-values returned to baseline 15 h after the second T administration, a decrease of the 5 beta-androstanediol delta-values could be detected for over 40 h. This suggests that measurements of 5 beta-androstanediol delta-values allow the detection of testosterone ingestion over a longer post-administration period than the delta(13)C-values of other T metabolites or the usual T/E ratio approach.
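For reference, the delta notation used above is the standard way of expressing a carbon isotope ratio as a parts-per-thousand deviation from an international reference material (conventionally the VPDB standard):

\delta^{13}\mathrm{C} = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{‰}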
Abstract:
OBJECTIVE: To compare the cumulative live birth rates obtained after cryopreservation of either pronucleate (PN) zygotes or early-cleavage (EC) embryos. DESIGN: Prospective randomized study. SETTING: University hospital. PATIENT(S): Three hundred eighty-two patients, involved in an IVF/ICSI program from January 1993 to December 1995, who had their supernumerary embryos cryopreserved either at the PN (group I) or EC (group II) stage. For 89 patients, cryopreservation of EC embryos was canceled because of poor embryo development (group III). Frozen-thawed embryo transfers performed up to December 1998 were considered. MAIN OUTCOME MEASURE(S): Age, oocytes, zygotes, cryopreserved and transferred embryos, damage after thawing, cumulative embryo scores, implantation, and cumulative live birth rates. RESULT(S): The clinical pregnancy and live birth rates were similar in all groups after fresh embryo transfers. Significantly higher implantation (10.5% vs. 5.9%) and pregnancy rates (19.5% vs. 10.9%; P ≤ .02) per transfer after cryopreserved embryo transfers were obtained in group I versus group II, leading to higher cumulative pregnancy (55.5% vs. 38.6%; P ≤ .002) and live birth rates (46.9% vs. 27.7%; P ≤ .0001). CONCLUSION(S): The transfer of a maximum of three unselected embryos and freezing of all supernumerary PN zygotes can be safely done with significantly higher cumulative pregnancy chances than cryopreserving at a later EC stage.
Abstract:
Recent genome-wide association (GWA) studies described 95 loci controlling serum lipid levels. These common variants explain ∼25% of the heritability of the phenotypes. To date, no unbiased screen for gene-environment interactions for circulating lipids has been reported. We screened for variants that modify the relationship between known epidemiological risk factors and circulating lipid levels in a meta-analysis of GWA data from 18 population-based cohorts of European ancestry (maximum N = 32,225). We collected 8 further cohorts (N = 17,102) for replication, and rs6448771 on 4p15 demonstrated genome-wide significant interaction with waist-to-hip ratio (WHR) on total cholesterol (TC) with a combined P-value of 4.79×10^-9. There were two potential candidate genes in the region, PCDH7 and CCKAR, with differential expression levels for rs6448771 genotypes in adipose tissue. The effect of WHR on TC was strongest for individuals carrying two copies of the G allele, for whom a one standard deviation (sd) difference in WHR corresponds to a 0.19 sd difference in TC concentration, while for A allele homozygotes the difference was 0.12 sd. Our findings may open up possibilities for targeted intervention strategies for people characterized by specific genomic profiles. However, more refined measures of both body-fat distribution and metabolic measures are needed to understand how their joint dynamics are modified by the newly found locus.
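A single-cohort version of the interaction test that is meta-analysed above can be written as an ordinary linear model with a SNP × WHR product term. The Python/statsmodels sketch below simulates data whose slopes roughly mirror the reported 0.12 sd and 0.19 sd WHR effects; all variable names and effect sizes are illustrative, not taken from the study:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "G": rng.binomial(2, 0.3, n),      # allele dosage at the SNP (0, 1 or 2 copies)
    "WHR": rng.normal(0.0, 1.0, n),    # standardised waist-to-hip ratio
})
# Genotype-dependent WHR effect on total cholesterol (TC), in sd units.
df["TC"] = 0.12 * df["WHR"] + 0.035 * df["G"] * df["WHR"] + rng.normal(0.0, 1.0, n)

fit = smf.ols("TC ~ WHR * G", data=df).fit()
print(fit.params["WHR:G"], fit.pvalues["WHR:G"])   # the gene-environment interaction term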
Abstract:
The restricted maximum likelihood is preferred by many to the full maximum likelihood for estimation with variance component and other random coefficient models, because the variance estimator is unbiased. It is shown that this unbiasedness is accompanied in some balanced designs by an inflation of the mean squared error. An estimator of the cluster-level variance that is uniformly more efficient than the full maximum likelihood is derived. Estimators of the variance ratio are also studied.
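The trade-off the abstract refers to already appears in the simplest balanced setting: estimating a variance from n independent normal observations, where the unbiased (REML-type) estimator divides the centred sum of squares by n-1 and the maximum likelihood estimator divides by n. A short Python check of the closed-form mean squared errors (this is only an illustration of the phenomenon, not the paper's cluster-level estimator):

# MSE of S/c where S is the centred sum of squares of n iid N(mu, sigma^2) observations;
# S/(n-1) is unbiased (REML-like), S/n is the maximum likelihood estimator.
def mse(c, n, sigma2=1.0):
    bias = sigma2 * ((n - 1) / c - 1.0)
    var = 2.0 * (n - 1) * sigma2 ** 2 / c ** 2
    return var + bias ** 2

n = 10
print(mse(n - 1, n))   # unbiased: 2/(n-1) = 0.2222...
print(mse(n, n))       # ML: (2n-1)/n^2 = 0.19, biased but with a smaller MSE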
Abstract:
Background: Though commercial production of polychlorinated biphenyls was banned in the United States in 1977, exposure continues due to their environmental persistence. Several studies have examined the association between environmental polychlorinated biphenyl exposure and modulations of the secondary sex ratio, with conflicting results. Objective: Our objective was to evaluate the association between maternal preconceptional occupational polychlorinated biphenyl exposure and the secondary sex ratio. Methods: We examined primipara singleton births of 2595 women who worked in three capacitor plants for at least one year during the period polychlorinated biphenyls were used. Cumulative estimated maternal occupational polychlorinated biphenyl exposure at the time of the infant's conception was calculated from plant-specific job exposure matrices. A logistic regression analysis was used to evaluate the association between maternal polychlorinated biphenyl exposure and male sex at birth (yes/no). Results: Maternal body mass index at age 20, smoking status, and race did not vary between those occupationally exposed and those unexposed before the child's conception. Polychlorinated biphenyl-exposed mothers were, however, more likely to have used oral contraceptives and to have been older at the birth of their first child than non-occupationally exposed women. Among 1506 infants liveborn to polychlorinated biphenyl-exposed primiparous women, 49.8% were male, compared to 49.9% among those not exposed (n = 1089). Multivariate analyses controlling for mother's age and year of birth found no significant association between the odds of a male birth and mother's cumulative estimated polychlorinated biphenyl exposure to the time of conception. Conclusions: Based on these data, we find no evidence of an altered sex ratio among children born to primiparous polychlorinated biphenyl-exposed female workers.
Abstract:
The objective of this research was to evaluate the effect of the citric acid concentration, the pulp/sugar ratio, and the concentration of albedo from passion fruit peel on the physical, physicochemical, and sensory characteristics of 'Silver' banana preserves. A 2³ factorial design with 3 repetitions at the central point was used. The albedo concentration, varied between 0 and 3%, had a significant influence on the reduction of reducing sugars and on the decrease in titratable acidity. The increase in the pulp/sugar ratio exerted a negative effect on pH and a positive effect on titratable acidity; the acid addition reduced the non-reducing sugar level. The sensory evaluation and purchase intention indicated that the incorporation of a maximum of 1.5% albedo in formulations containing 50% pulp and 0.5% citric acid resulted in products with good acceptability compared with the formulation using 60% pulp and no acid or albedo.
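The design mentioned above (a 2³ factorial plus three centre-point repetitions) amounts to eleven runs. A minimal Python sketch that lists them in coded units; the actual factor levels are those chosen in the study, not the -1/0/+1 codes shown here:

from itertools import product

factors = ("citric_acid", "pulp_sugar_ratio", "albedo")
runs = [dict(zip(factors, levels)) for levels in product((-1, +1), repeat=3)]  # 2^3 corner points
runs += [dict(zip(factors, (0, 0, 0))) for _ in range(3)]                      # 3 centre points

for i, run in enumerate(runs, 1):
    print(i, run)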
Abstract:
A study was conducted to estimate variation among laboratories and between manual and automated techniques of measuring pressure on the resulting gas production profiles (GPP). Eight feeds (molassed sugarbeet feed, grass silage, maize silage, soyabean hulls, maize gluten feed, whole crop wheat silage, wheat, glucose) were milled to pass a 1 mm screen and sent to three laboratories (ADAS Nutritional Sciences Research Unit, UK; Institute of Grassland and Environmental Research (IGER), UK; Wageningen University, The Netherlands). Each laboratory measured GPP over 144 h using standardised procedures with manual pressure transducers (MPT) and automated pressure systems (APS). The APS at ADAS used a pressure transducer and bottles in a shaking water bath, while the APS at Wageningen and IGER used a pressure sensor and bottles held in a stationary rack. Apparent dry matter degradability (ADDM) was estimated at the end of the incubation. GPP were fitted to a modified Michaelis-Menten model assuming a single phase of gas production, and GPP were described in terms of the asymptotic volume of gas produced (A), the time to half A (B), the time of maximum gas production rate (t(RM gas)) and the maximum gas production rate (R(M gas)). There were effects (P<0.001) of substrate on all parameters. However, MPT produced more (P<0.001) gas, but with longer B (P<0.001) and t(RM gas) (P<0.05) and lower R(M gas) (P<0.001) compared to APS. There was no difference between apparatus in ADDM estimates. Interactions occurred between substrate and apparatus, substrate and laboratory, and laboratory and apparatus. However, when mean values for MPT were regressed against values from the individual laboratories, relationships were good (i.e., adjusted R² = 0.827 or higher). Good relationships were also observed with APS, although they were weaker than for MPT (i.e., adjusted R² = 0.723 or higher). The relationships between mean MPT and mean APS data were also good (i.e., adjusted R² = 0.844 or higher). Data suggest that, although laboratory and method of measuring pressure are sources of variation in GPP estimation, it should be possible using appropriate mathematical models to standardise data among laboratories so that data from one laboratory could be extrapolated to others. This would allow development of a database of GPP data from many diverse feeds. (c) 2005 Published by Elsevier B.V.
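A Python sketch of the kind of curve fitting described above, using one common sigmoidal gas-production form, G(t) = A / (1 + (B/t)^c), in which A is the asymptotic gas volume and B the time at which half of A is produced. The exact parameterisation of the modified Michaelis-Menten model used by the laboratories is not given here, so this functional form, and all of the numbers, are assumptions for illustration only:

import numpy as np
from scipy.optimize import curve_fit

def gas(t, A, B, c):
    # Sigmoidal gas production profile: asymptote A, half-time B, shape parameter c.
    return A / (1.0 + (B / t) ** c)

t = np.array([2, 4, 8, 12, 24, 36, 48, 72, 96, 144], dtype=float)               # incubation times (h)
G = gas(t, 220.0, 18.0, 1.8) + np.random.default_rng(0).normal(0, 3, t.size)    # synthetic readings

(A, B, c), _ = curve_fit(gas, t, G, p0=(200.0, 20.0, 1.5))
t_rm = B * ((c - 1.0) / (c + 1.0)) ** (1.0 / c)   # time of maximum gas production rate (needs c > 1)
print(A, B, c, t_rm)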
Abstract:
We introduce a procedure for association-based analysis of nuclear families that allows for dichotomous and more general measurements of phenotype and the inclusion of covariate information. Standard generalized linear models are used to relate the phenotype and its predictors. Our test procedure, based on the likelihood ratio, unifies the estimation of all parameters through the likelihood itself and yields maximum likelihood estimates of the genetic relative risk and interaction parameters. Our method has advantages in modelling the covariate and gene-covariate interaction terms over recently proposed conditional score tests that include covariate information via a two-stage modelling approach. We apply our method in a study of human systemic lupus erythematosus and C-reactive protein that includes sex as a covariate.
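The likelihood ratio machinery described above can be sketched with two nested generalized linear models: a covariate-only model and a model adding the genetic and gene-covariate interaction terms. The Python sketch below ignores the within-family structure of the authors' procedure and simply shows the GLM likelihood ratio test on simulated unrelated individuals; every variable name and effect size is hypothetical:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "g": rng.binomial(2, 0.25, n),     # genotype (risk allele count)
    "sex": rng.integers(0, 2, n),      # covariate, as in the lupus / C-reactive protein example
})
logit_p = -1.0 + 0.3 * df["g"] + 0.2 * df["sex"] + 0.15 * df["g"] * df["sex"]
df["y"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))   # dichotomous phenotype

full = smf.logit("y ~ g * sex", data=df).fit(disp=0)   # genetic, covariate and interaction terms
null = smf.logit("y ~ sex", data=df).fit(disp=0)       # covariate-only model
lrt = 2.0 * (full.llf - null.llf)
print(lrt, chi2.sf(lrt, df=full.df_model - null.df_model))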