82 results for Tests for Continuous Lifetime Data


Relevance: 30.00%

Publisher:

Abstract:

Growth hormone (GH) influences bone mass maintenance. However, the consequences of lifetime isolated GH deficiency (IGHD) on bone are not well established. We assessed the bone status and the effect of 6 months of GH replacement in GH-naive adults with IGHD due to a homozygous mutation of the GH-releasing hormone (GHRH)-receptor gene (GHRHR). We studied 20 individuals (10 men) with IGHD at baseline, after 6 months of depot GH treatment, and 6 and 12 months after discontinuation of GH. Quantitative ultrasound (QUS) of the heel was performed and serum osteocalcin (OC) and C-terminal cross-linking telopeptide of type I collagen (ICTP) were measured. QUS was also performed at baseline and 12 months later in a group of 20 normal control individuals (CO), who did not receive GH treatment. At baseline, the IGHD group had a lower T-score on QUS than CO (-1.15 +/- 0.9 vs. -0.07 +/- 0.9, P < 0.001). GH treatment improved this parameter, with improvement persisting for 12 months post-treatment (T-score for IGHD = -0.59 +/- 0.9, P < 0.05). GH also caused an increase in serum OC (baseline vs. pGH, P < 0.001) and ICTP (baseline vs. pGH, P < 0.01). The increase in OC was more marked during treatment and its reduction was slower after GH discontinuation than in ICTP. These data suggest that lifetime severe IGHD is associated with significant reduction in QUS parameters, which are partially reversed by short-term depot GH treatment. The treatment induces a biochemical pattern of bone anabolism that persists for at least 6 months after treatment discontinuation.

Relevance: 30.00%

Publisher:

Abstract:

In the present study, the participation of the Na(v)1.8 sodium channel in the development of the peripheral pro-nociceptive state induced by daily intraplantar injections of PGE(2) in rats was investigated, as well as its regulation in vivo by protein kinase A (PKA) and protein kinase C epsilon (PKC epsilon). In the prostaglandin E(2) (PGE(2))-induced persistent hypernociception, Na(v)1.8 mRNA in the dorsal root ganglia (DRG) was up-regulated. Local treatment with dipyrone abolished this persistent hypernociception but did not alter the Na(v)1.8 mRNA level in the DRG. Daily intrathecal administrations of Na(v)1.8 antisense decreased the Na(v)1.8 mRNA in the DRG and reduced the ongoing persistent hypernociception. Once the persistent hypernociception had been abolished by dipyrone, but not by Na(v)1.8 antisense treatment, a small dose of PGE(2) restored the hypernociceptive plateau. These data show that, after a period of recurring inflammatory stimuli, an intense and prolonged nociceptive response is elicited by a minimal inflammatory stimulus and that this pro-nociceptive state depends on Na(v)1.8 mRNA up-regulation in the DRG. In addition, during the persistent hypernociceptive state, PKA and PKC epsilon expression and activity in the DRG are up-regulated, and administration of PKA and PKC epsilon inhibitors reduces the hypernociception as well as the Na(v)1.8 mRNA level. We thus demonstrated that the functional regulation of Na(v)1.8 mRNA by PKA and PKC epsilon in the primary sensory neuron is important for the development of the peripheral pro-nociceptive state induced by repetitive inflammatory stimuli and for the maintenance of persistent behavioral hypernociception. (C) 2008 Elsevier Inc. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

Background: The purpose of this study was to evaluate the effect of long-term use of oral contraceptives (OC) containing 0.20 mg of ethinylestradiol (EE) combined with 0.15 mg of gestodene (GEST) on aerobic capacity at peak exercise and at the anaerobic threshold (AT) level in active and sedentary young women. Study Design: Eighty-eight women (23 +/- 2.1 years old) were divided into four groups: active-OC (G1), active-NOC (G2), sedentary-OC (G3) and sedentary-NOC (G4), and were submitted to a continuous incremental ergospirometric test on a cycle ergometer with 20 to 25 W min(-1) increments. Data were analyzed by two-way ANOVA with Tukey post hoc test. The level of significance was set at 5%. Results: Neither the OC use effect on relative and absolute oxygen uptake (VO(2), mL kg(-1) min(-1) and L min(-1), respectively), carbon dioxide output (VCO(2), L min(-1)), ventilation (VE, L min(-1)), heart rate (HR, bpm), respiratory exchange ratio (RER) and power output (W), nor the interaction between OC use and exercise effect, at peak exercise or at the AT level, differed significantly between the active groups (G1 and G2) or between the sedentary groups (G3 and G4). As to the exercise effect, the active groups presented higher VO(2), VCO(2), VE and power output values (p<.05) than the sedentary groups. RER and HR were similar (p>.05) at peak exercise and at the AT level between G1 vs. G3 and G2 vs. G4. Conclusions: Long-term use of OC containing EE 0.20 mg plus GEST 0.15 mg does not affect aerobic capacity at peak exercise or at the AT level of exercise tests. (C) 2010 Elsevier Inc. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

Objective. To evaluate the biaxial and short-beam uniaxial strength tests applied to resin composites based upon their Weibull parameters, fractographic features and stress distribution. Methods. Disk- (15 mm x 1 mm) and beam-shaped specimens (10 mm x 2 mm x 1 mm) of three commercial composites (Concept/Vigodent, CA; Heliomolar/Ivoclar-Vivadent, HE; Z250/3M ESPE, FZ) were prepared. After 48 h of dry storage at 37 degrees C, disks and beams were submitted to piston-on-three-balls (BI) and three-point bending (UNI) tests, respectively. Data were analyzed by Weibull statistics. Fractured surfaces were observed under a stereomicroscope and a scanning electron microscope. Maximum principal stress (sigma(1)) distribution was determined by finite element analysis (FEA). Maximum sigma(1-BI) and sigma(1-UNI) were compared to FZ strengths calculated by applying the average failure loads to the analytical equations (sigma(a-BI) and sigma(a-UNI)). Results. For BI, characteristic strengths were: 169.9a (FZ), 122.4b (CA) and 104.8c (HE), and for UNI were: 160.3a (FZ), 98.2b (CA) and 91.6b (HE). Weibull moduli (m) were similar within the same test. CA and HE presented statistically higher m for BI. Surface pores (BI) and edge flaws (UNI) were the most frequent fracture origins. sigma(1-BI) was 14% lower than sigma(a-BI). sigma(1-UNI) was 43% higher than sigma(a-UNI). Significance. Compared to the short-beam uniaxial test, the biaxial test detected more differences among composites and displayed less data scattering for two of the tested materials. Also, the biaxial strength was closer to the material's strength estimated by FEA. (C) 2009 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
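For reference, the two-parameter Weibull form underlying the characteristic strength and Weibull modulus reported above is the standard relation (stated here generically, not quoted from the abstract):

P_f(\sigma) = 1 - \exp\!\left[-\left(\frac{\sigma}{\sigma_0}\right)^{m}\right],

where P_f is the failure probability at stress \sigma, \sigma_0 is the characteristic strength (the stress at which P_f \approx 0.632), and the Weibull modulus m quantifies scatter: a higher m means a narrower strength distribution, which is why comparing m values indicates which test produces less data scattering.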

Relevance: 30.00%

Publisher:

Abstract:

Objectives. To evaluate the effect of the microstructure on the Weibull and slow crack growth (SCG) parameters and on the lifetime of three ceramics used as framework materials for fixed partial dentures (FPDs) (YZ - Vita In-Ceram YZ; IZ - Vita In-Ceram Zirconia; AL - Vita In-Ceram AL) and of two veneering porcelains (VM7 and VM9). Methods. Bar-shaped specimens were fabricated according to the manufacturer's instructions. Specimens were tested in three-point flexure in 37 degrees C artificial saliva. Weibull analysis (n = 30) and a constant stress-rate test (n = 10) were used to determine the Weibull modulus (m) and the SCG coefficient (n), respectively. Microstructural and fractographic analyses were performed using SEM. ANOVA and Tukey's test (alpha = 0.05) were used to statistically analyze the data obtained from both the microstructural and fractographic analyses. Results. YZ and AL presented high crystalline content and low porosity (0.1-0.2%). YZ had the highest characteristic strength (sigma(0)) value (911 MPa), followed by AL (488 MPa) and IZ (423 MPa). Lower sigma(0) values were observed for the porcelains (68-75 MPa). Except for IZ and VM7, m values were similar among the ceramic materials. Higher n values were found for YZ (76) and AL (72), followed by IZ (54) and the veneering materials (36-44). Lifetime predictions showed that YZ was the material with the best mechanical performance. The size of the critical flaw was similar among the framework materials (34-48 mu m) and among the porcelains (75-86 mu m). Significance. The microstructure influenced the mechanical and SCG behavior of the studied materials and, consequently, the lifetime predictions. (C) 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
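As background, the SCG coefficient n reported above is conventionally extracted from constant stress-rate (dynamic fatigue) data through the standard relation (a textbook result, not specific to this study):

\log \sigma_f = \frac{1}{n+1}\,\log \dot{\sigma} + \text{const},

i.e. the slope of log failure stress versus log stress rate gives 1/(n+1). A larger n therefore means the material is less sensitive to subcritical crack growth, which is consistent with the longer predicted lifetimes for the high-n framework ceramics (YZ and AL).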

Relevance: 30.00%

Publisher:

Abstract:

Statement of the Problem: Adhesive systems can spread differently onto a substrate and, consequently, influence bonding. Purpose: The purpose of this study was to evaluate the effect of differently oriented dentin surfaces and of the regional variation of specimens on adhesive layer thickness and microtensile bond strength (MTBS). Materials and Methods: Twenty-four molars were sectioned mesiodistally to expose flat buccal and lingual halves. Standardized drop volumes of the adhesive systems (Single Bond [SB] and Prime & Bond 2.1 [PB2.1]) were applied to dentin according to the manufacturer's instructions. Tooth halves were randomly divided into groups: 1A-SB/parallel to gravity; 1B-SB/perpendicular to gravity; 2A-PB2.1/parallel to gravity; and 2B-PB2.1/perpendicular to gravity. The bonded assemblies were stored in 37 degrees C distilled water for 24 hours and then sectioned to obtain dentin sticks (0.8 mm(2)). The adhesive layer thickness was determined in a light microscope (x200), and after 48 hours the specimens were subjected to the MTBS test. Data were analyzed by one-way and two-way analysis of variance and Student-Newman-Keuls tests. Results: Mean MTBS values (MPa +/- SD) were: 39.1 +/- 12.9 (1A); 32.9 +/- 12.4 (1B); 52.9 +/- 15.2 (2A); and 52.3 +/- 16.5 (2B). The adhesive layer thicknesses (mu m +/- SD) were: 11.2 +/- 2.9 (1A); 18.1 +/- 7.3 (1B); 4.2 +/- 1.8 (2A); and 3.9 +/- 1.3 (2B). No correlation was observed between bond strength and adhesive layer thickness for either SB or PB2.1 (r = -0.224, p = 0.112 and r = 0.099, p = 0.491, respectively). Conclusions: The effects of differently oriented dentin surfaces and of the regional variation of specimens on the adhesive layer thickness are material-dependent. These variables do not influence the adhesive systems' bond strength to dentin. Clinical Significance: Adhesive systems have different viscosities and spread differently onto a substrate, influencing the bond strength and also the adhesive layer thickness. Adhesive thickness does not influence dentin bond strength, but it may impair adequate solvent evaporation and polymer conversion, and may also determine water sorption and adhesive degradation over time. Many studies in the literature have shown that the adhesive layer is a permeable membrane and can fail over time because of its continuous plasticizing and degradation when in contact with water. Therefore, avoiding thick adhesive layers may minimize these problems and provide long-term success for adhesive restorations.

Relevance: 30.00%

Publisher:

Abstract:

This study determined the sensory shelf life of a commercial brand of chocolate and carrot cupcakes, aiming to extend the current 120-day shelf life to 180 days. The appearance, texture, flavor and overall quality of cakes stored for six different storage times were evaluated by 102 consumers. The data were analyzed by analysis of variance and linear regression. For both flavors, texture showed the greatest loss of acceptance during the storage period, with an acceptance mean close to indifference on the hedonic scale at 120 days. Nevertheless, appearance, flavor and overall quality remained acceptable up to 150 days. The end of shelf life was estimated at about 161 days for the chocolate cakes and 150 days for the carrot cakes. This study showed that the current 120 days of shelf life can be extended to 150 days for the carrot cake and to 160 days for the chocolate cake. However, the 180 days of shelf life desired by the company were not achieved. Practical Applications: This research shows the adequacy of using sensory acceptance tests to determine the shelf life of two food products (chocolate and carrot cupcakes). This application is useful because the precise determination of the shelf life of a food product is of vital importance for its commercial success. The maximum storage time should always be evaluated when developing or reformulating products, or when changing packaging or storage conditions. Once the physical-chemical and microbiological stability of a product is guaranteed, the sensory changes that could affect consumer acceptance determine the end of its shelf life. Thus, the use of sensitive and reliable methods to estimate the sensory shelf life of a product is very important. The findings also show the importance of determining the shelf life of each product separately, rather than applying the shelf life estimated for one product to other, similar products.
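A minimal sketch of the kind of estimate described above: fit a linear regression of mean acceptance against storage time and solve for the day at which the fitted score reaches an acceptability cutoff. All numbers below (scores, cutoff) are hypothetical placeholders, not data from the study.

import numpy as np

# hypothetical mean hedonic scores (9-point scale) at each storage time in days
days   = np.array([0, 30, 60, 90, 120, 150])
scores = np.array([7.8, 7.5, 7.1, 6.8, 6.3, 6.0])   # illustrative values only

# least-squares line: score = a * days + b
a, b = np.polyfit(days, scores, 1)

cutoff = 6.0                       # assumed acceptability limit on the hedonic scale
shelf_life = (cutoff - b) / a      # day at which the fitted score crosses the cutoff
print(f"estimated sensory shelf life: {shelf_life:.0f} days")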

Relevance: 30.00%

Publisher:

Abstract:

Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depend on correctly identifying the components in order to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may hinder the follow-up of the components in time and consequently contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, exhaustively varying the quantities associated with the method. Our results show that, even in the most challenging tests, the cross-entropy method was able to recover the correct parameters to within 1 per cent. Even for a non-precessing jet, our optimization method successfully indicated the absence of precession.
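A minimal sketch of the cross-entropy idea for continuous optimization, assuming a generic scalar cost function rather than the precession model used in the work above: repeatedly sample candidate parameters from a Gaussian, keep the elite fraction with the lowest cost, and refit the Gaussian to that elite until it collapses onto the best-fitting parameters.

import numpy as np

rng = np.random.default_rng(42)

def cost(theta):
    # placeholder cost: squared distance to some "true" parameters; in the
    # precession application this would be the misfit between the model and
    # the observed right ascension/declination offsets of the jet components
    true = np.array([2.0, -1.0, 0.5])
    return np.sum((theta - true) ** 2, axis=1)

n_samples, n_elite, n_dim = 200, 20, 3
mu, sigma = np.zeros(n_dim), np.full(n_dim, 5.0)   # broad initial search region

for iteration in range(50):
    samples = rng.normal(mu, sigma, size=(n_samples, n_dim))
    elite = samples[np.argsort(cost(samples))[:n_elite]]   # lowest-cost candidates
    mu, sigma = elite.mean(axis=0), elite.std(axis=0)      # refit the sampling distribution
    if sigma.max() < 1e-6:                                 # distribution has collapsed
        break

print("recovered parameters:", mu)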

Relevance: 30.00%

Publisher:

Abstract:

We report a detailed rock magnetic and Thellier paleointensity study of the ~130.5 Ma Ponta Grossa dike swarms in southern Brazil. Twenty-nine samples from seven cooling units were pre-selected for paleointensity experiments based on their low viscosity index, stable remanent magnetization and close to reversible continuous thermomagnetic curves. Nineteen samples characterized by negative pTRM tests, concave-up Arai curves or positive pTRM tests with NRM loss uncorrelated with TRM acquisition were rejected. High-quality, reliable paleointensity determinations were obtained using detailed evaluation criteria, with 10 samples belonging to three dikes passing the tests. The site-mean paleointensity values obtained in this study range from 25.6 +/- 4.3 to 11.3 +/- 2.1 mu T, and the corresponding VDMs range from 5.7 +/- 0.9 to 2.5 +/- 0.5 (10(22) Am(2)). These data yield a mean VDM of 4.1 +/- 1.6 x 10(22) Am(2). Significant variability of the Earth's magnetic field strength is observed for the Ponta Grossa dikes, with the mean value being significantly lower than the mean VDM obtained from the nearby Parana Magmatic Province. The paleointensities for the Ponta Grossa dikes are in agreement with absolute paleointensities retrieved from submarine basaltic glasses from 130 to 120 Ma. It seems that a relatively low field prevailed just before the Cretaceous Normal Superchron.

Relevance: 30.00%

Publisher:

Abstract:

Evolutionary novelties in the skeleton are usually expressed as changes in the timing of growth of features intrinsically integrated at different hierarchical levels of development(1). As a consequence, most of the shape traits observed across species vary quantitatively rather than qualitatively(2), in a multivariate space(3) and in a modularized way(4,5). Because most phylogenetic analyses normally use discrete, hypothetically independent characters(6), previous attempts have disregarded the phylogenetic signals potentially enclosed in the shape of morphological structures. When analysing low taxonomic levels, where most variation is quantitative in nature, addressing basic requirements such as the choice of characters and the capacity to use continuous, integrated traits is of crucial importance in recovering wider phylogenetic information. This is particularly relevant when analysing extinct lineages, where the available data are limited to fossilized structures. Here we show that when continuous, multivariate and modularized characters are treated as such, cladistic analysis successfully resolves relationships among the main Homo taxa. Our approach is based on a combination of cladistics, evolutionary-development-derived selection of characters, and geometric morphometric methods. In contrast with previous cladistic analyses of hominid phylogeny, our method accounts for the quantitative nature of the traits and respects their patterns of morphological integration. Because complex phenotypes are observable across different taxonomic groups and are potentially informative about phylogenetic relationships, future analyses should strongly consider incorporating these types of traits.

Relevance: 30.00%

Publisher:

Abstract:

Searching in a dataset for elements that are similar to a given query element is a core problem in applications that manage complex data, and it has been aided by metric access methods (MAMs). A growing number of applications require indices that must be built faster and repeatedly, while also providing faster responses to similarity queries. The increase in main memory capacity and its decreasing cost also motivate the use of memory-based MAMs. In this paper, we propose the Onion-tree, a new and robust dynamic memory-based MAM that slices the metric space into disjoint subspaces to provide quick indexing of complex data. It introduces three major characteristics: (i) a partitioning method that controls the number of disjoint subspaces generated at each node; (ii) a replacement technique that can change the leaf node pivots in insertion operations; and (iii) range and k-NN extended query algorithms to support the new partitioning method, including a new visit order of the subspaces in k-NN queries. Performance tests with both real-world and synthetic datasets showed that the Onion-tree is very compact. Comparisons of the Onion-tree with the MM-tree and a memory-based version of the Slim-tree showed that the Onion-tree was always faster to build the index. The experiments also showed that the Onion-tree significantly improved range and k-NN query processing performance and was the most efficient MAM, followed by the MM-tree, which in turn outperformed the Slim-tree in almost all the tests. (C) 2010 Elsevier B.V. All rights reserved.
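To illustrate the kind of pruning a memory-based MAM relies on, here is a generic pivot-filtering sketch using the triangle inequality; it is an illustration of metric indexing in general, not the Onion-tree's actual partitioning or replacement scheme.

import numpy as np

def dist(a, b):
    # Euclidean distance stands in for an arbitrary metric
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

class PivotIndex:
    # Toy pivot-based index: precomputes the distance from every object to a few pivots.
    def __init__(self, data, n_pivots=2):
        self.data = [np.asarray(x) for x in data]
        self.pivots = self.data[:n_pivots]
        self.table = [[dist(x, p) for p in self.pivots] for x in self.data]

    def range_query(self, q, r):
        dq = [dist(q, p) for p in self.pivots]
        hits = []
        for x, row in zip(self.data, self.table):
            # triangle inequality: |d(q,p) - d(x,p)| <= d(q,x), so the object can be
            # discarded without a distance computation if the bound exceeds r
            if any(abs(dqp - dxp) > r for dqp, dxp in zip(dq, row)):
                continue
            if dist(q, x) <= r:          # verify candidates that survive pruning
                hits.append(x)
        return hits

points = np.random.default_rng(1).normal(size=(100, 4))
index = PivotIndex(points)
print(len(index.range_query(points[0], r=1.5)))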

Relevance: 30.00%

Publisher:

Abstract:

In this paper, we formulate a flexible density function from the selection mechanism viewpoint (see, for example, Bayarri and DeGroot (1992) and Arellano-Valle et al. (2006)) which possesses nice biological and physical interpretations. The new density function contains as special cases many models that have been proposed recently in the literature. In constructing this model, we assume that the number of competing causes of the event of interest has a general discrete distribution characterized by its probability generating function. This function has an important role in the selection procedure as well as in computing the conditional personal cure rate. Finally, we illustrate how various models can be deduced as special cases of the proposed model. (C) 2011 Elsevier B.V. All rights reserved.
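As a concrete illustration of the role the probability generating function plays in this family of models, the standard competing-causes construction (stated here in generic form, not as the authors' specific density) is: if N is the number of competing causes with probability generating function G_N, and each latent cause time has survival function S(t), then, assuming independent latent times,

S_{\mathrm{pop}}(t) = E\!\left[S(t)^{N}\right] = G_N\big(S(t)\big), \qquad G_N(s) = \sum_{n \ge 0} P(N = n)\, s^{n},

and the long-term (cure) fraction is S_{\mathrm{pop}}(\infty) = G_N(0) = P(N = 0); the choice of the discrete distribution for N, through its generating function, therefore governs both the shape of the population survival curve and the cure rate.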

Relevance: 30.00%

Publisher:

Abstract:

In this paper, we develop a flexible cure rate survival model by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell Poisson distribution. This model includes as special cases some of the well-known cure rate models discussed in the literature. Next, we discuss the maximum likelihood estimation of the parameters of this cure rate survival model. Finally, we illustrate the usefulness of this model by applying it to real cutaneous melanoma data. (C) 2009 Elsevier B.V. All rights reserved.
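For reference, the Conway-Maxwell Poisson distribution assumed here for the number of competing causes has the standard probability mass function

P(N = n) = \frac{\lambda^{n}}{(n!)^{\nu}\, Z(\lambda,\nu)}, \qquad Z(\lambda,\nu) = \sum_{j=0}^{\infty} \frac{\lambda^{j}}{(j!)^{\nu}},

so, in the competing-causes framework, the implied cure fraction is P(N = 0) = 1/Z(\lambda,\nu). The extra parameter \nu controls dispersion: \nu = 1 recovers the Poisson cure rate model, \nu > 1 gives under-dispersion and \nu < 1 gives over-dispersion, which is how the model nests several well-known special cases.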

Relevance: 30.00%

Publisher:

Abstract:

Model trees are a particular case of decision trees employed to solve regression problems. They have the advantage of producing an interpretable output, helping the end user gain confidence in the prediction and providing a basis for new insight into the data, confirming or rejecting previously formed hypotheses. Moreover, model trees present an acceptable level of predictive performance in comparison with most techniques used for solving regression problems. Since generating the optimal model tree is an NP-complete problem, traditional model tree induction algorithms make use of a greedy top-down divide-and-conquer strategy, which may not converge to the globally optimal solution. In this paper, we propose a novel algorithm based on the evolutionary algorithms paradigm as an alternative heuristic for generating model trees, in order to improve convergence towards globally near-optimal solutions. We call our new approach evolutionary model tree induction (E-Motion). We test its predictive performance using public UCI data sets and compare the results to those of traditional greedy regression/model tree induction algorithms, as well as to other evolutionary approaches. The results show that our method presents a good trade-off between predictive performance and model comprehensibility, which may be crucial in many machine learning applications. (C) 2010 Elsevier Inc. All rights reserved.
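A minimal sketch of the evolutionary idea on a deliberately tiny search space: single-split "model stumps" with one linear model per leaf, evolved by elitism plus mutation on synthetic data. This illustrates the paradigm only; it is not the E-Motion algorithm, which evolves full trees with crossover and richer operators.

import numpy as np

rng = np.random.default_rng(0)

# synthetic regression data with a single true split on feature 0
X = rng.uniform(-3, 3, size=(200, 2))
y = np.where(X[:, 0] > 0.5, 2.0 * X[:, 1] + 1.0, -1.0 * X[:, 1] + 3.0) + rng.normal(0, 0.1, 200)

def leaf_predictions(Xs, ys):
    # least-squares linear model fitted within one leaf
    A = np.column_stack([Xs, np.ones(len(Xs))])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return A @ coef

def fitness(individual):
    # lower is better: RMSE of a stump splitting on (feature, threshold)
    feature, threshold = individual
    mask = X[:, int(feature)] <= threshold
    if mask.sum() < 5 or (~mask).sum() < 5:
        return np.inf                              # penalize degenerate splits
    pred = np.empty_like(y)
    for m in (mask, ~mask):
        pred[m] = leaf_predictions(X[m], y[m])
    return float(np.sqrt(np.mean((y - pred) ** 2)))

def mutate(individual):
    feature, threshold = individual
    if rng.random() < 0.3:
        feature = int(rng.integers(X.shape[1]))    # occasionally change the split feature
    return feature, threshold + rng.normal(0.0, 0.3)

population = [(int(rng.integers(X.shape[1])), float(rng.uniform(-3, 3))) for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness)                   # rank candidate stumps
    elite = population[:10]                        # elitism: keep the best stumps
    population = elite + [mutate(elite[int(rng.integers(len(elite)))]) for _ in range(20)]

print("best split:", population[0], "rmse:", round(fitness(population[0]), 3))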

Relevance: 30.00%

Publisher:

Abstract:

The concentrations of the water-soluble inorganic aerosol species, ammonium (NH4+), nitrate (NO3-), chloride (Cl-), and sulfate (SO42-), were measured from September to November 2002 at a pasture site in the Amazon Basin (Rondônia, Brazil) (LBA-SMOCC). Measurements were conducted using a semi-continuous technique (Wet-annular denuder/Steam-Jet Aerosol Collector: WAD/SJAC) and three integrating filter-based methods, namely (1) a denuder-filter pack (DFP: Teflon and impregnated Whatman filters), (2) a stacked-filter unit (SFU: polycarbonate filters), and (3) a High-Volume dichotomous sampler (HiVol: quartz fiber filters). Measurements covered the late dry season (biomass burning), a transition period, and the onset of the wet season (clean conditions). Analyses of the particles collected on the filters were performed using ion chromatography (IC) and Particle-Induced X-ray Emission spectrometry (PIXE). Season-dependent discrepancies were observed between the WAD/SJAC system and the filter-based samplers. During the dry season, when PM2.5 (D(p) <= 2.5 mu m) concentrations were ~100 mu g m(-3), aerosol NH4+ and SO42- measured by the filter-based samplers were on average two times higher than those determined by the WAD/SJAC. Concentrations of aerosol NO3- and Cl- measured with the HiVol during daytime, and with the DFP during day- and nighttime, also exceeded those of the WAD/SJAC by a factor of two. In contrast, aerosol NO3- and Cl- measured with the SFU during the dry season were nearly two times lower than those measured by the WAD/SJAC. These differences declined markedly during the transition period and towards the cleaner conditions at the onset of the wet season (PM2.5 ~5 mu g m(-3)), when the filter-based samplers measured on average 40-90% less than the WAD/SJAC. The differences were not due to consistent systematic biases of the analytical techniques, but were apparently a result of prevailing environmental conditions and different sampling procedures. For the transition period and the wet season, the significance of our results is reduced by a low number of data points. We argue that the observed differences are mainly attributable to (a) positive and negative filter sampling artifacts, (b) the presence of organic compounds and organosulfates on the filter substrates, and (c) a SJAC sampling efficiency of less than 100%.