24 results for Which-way experiments
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Wild bearded capuchin monkeys, Cebus libidinosus, use stone tools to crack palm nuts to obtain the kernel. In five experiments, we gave 10 monkeys from one wild group of bearded capuchins a choice of two nuts differing in resistance and size and/or two manufactured stones of the same shape, volume and composition but different mass. Monkeys consistently selected the nut that was easier to crack and the heavier stone. When choosing between two stones differing in mass by a ratio of 1.3:1, monkeys frequently touched the stones or tapped them with their fingers or with a nut. They showed these behaviours more frequently before making their first selection of a stone than afterward. These results suggest that capuchins discriminate between nuts and between stones, selecting materials that allow them to crack nuts with fewer strikes, and generate exploratory behaviours to discriminate stones of varying mass. In the final experiment, humans effectively discriminated the mass of stones using the same tapping and handling behaviours as capuchins. Capuchins explore objects in ways that allow them to perceive invariant properties (e.g. mass) of objects, enabling selection of objects for specific uses. We predict that species that use tools will generate behaviours that reveal invariant properties of objects such as mass; species that do not use tools are less likely to explore objects in this way. The precision with which individuals can judge invariant properties may differ considerably, and this also should predict prevalence of tool use across species. (C) 2010 The Association for the Study of Animal Behaviour. Published by Elsevier Ltd. All rights reserved.
Abstract:
In the Amazon Basin, within a landscape of infertile soils, fertile Anthrosols of pre-Columbian origin occur (Amazonian Dark Earths, or terra preta de Indio). These soils are characterized by high amounts of charred organic matter (black carbon, biochar) and high nutrient stocks. They have frequently been considered a sign of intensive landscape domestication by way of sedentary agriculture and of large settlements in pre-Columbian Amazonia. Beyond the archaeological interest in Amazonian Dark Earths, they increasingly receive attention because it is assumed that they could serve as a model for sustainable agriculture in the humid tropics (terra preta nova). Both questions lack information about the pre-Columbian practices that were responsible for the genesis of Amazonian Dark Earths. It has often been hypothesized that the deposition of faeces could have contributed to the high nutrient stocks in these soils, but no study has focussed on this question yet. We analyzed the faecal biomarkers 5β-stanols, as well as their precursors and their 5α-isomers, in Amazonian Dark Earths and reference soils to investigate the input of faeces into Amazonian Dark Earths. Using Amazonian Dark Earths as an example, we discuss the application of threshold values for specific stanols to evaluate faeces deposition in archaeological soils, and we demonstrate an alternative approach based on comparing the concentration patterns of 5β-stanols with those of their precursors and their 5α-isomers, as well as with local backgrounds. The concentration patterns of sterols show that faeces were deposited on Amazonian Dark Earths. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
According to recent research carried out in the foundry sector, one of the industries' most important concerns is improving their production planning. A foundry production plan involves two dependent stages: (1) determining the alloys to be melted and (2) determining the lots that will be produced. The purpose of this study is to draw up minimum-cost production plans for the lot-sizing problem of small foundries. As suggested in the literature, the proposed heuristic addresses the problem stages hierarchically: first the alloys are determined and, subsequently, the items to be produced from them. In this study, we propose a knapsack problem formulation to determine the items to be produced from each furnace load, together with a genetic algorithm to explore candidate sets of alloys and to determine the production plan for a small foundry. Our method attempts to overcome the difficulties in finding good production plans exhibited by the method previously proposed in the literature. The computational experiments show that the proposed methods obtained better results than those in the literature. Furthermore, the proposed methods do not require commercial software, which is favorable for small foundries. (C) 2010 Elsevier Ltd. All rights reserved.
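The second, lot-sizing stage can be illustrated with a standard 0/1 knapsack dynamic program. The weights and values below are hypothetical, and this is a generic sketch of the technique, not the authors' exact formulation:

```python
def knapsack(capacity, weights, values):
    """0/1 knapsack: pick the lots to produce from one furnace load.

    capacity: alloy mass available in the furnace (integer units)
    weights:  mass of alloy each candidate lot consumes
    values:   contribution of each lot to the production plan
    """
    # dp[c] = best total value achievable with capacity c
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # iterate capacities downward so each lot is chosen at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# hypothetical furnace load of 10 units of alloy
print(knapsack(10, [3, 4, 5], [4, 5, 7]))  # prints 12 (lots 2 and 3)
```

In the paper's hierarchical scheme, a genetic algorithm would supply the candidate alloy (and hence the capacity and item list) for each such subproblem.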
Abstract:
Purpose: To assess the influence of ozone gas and ozonated water application to prepared cavity and bonded interfaces on the resin/dentin bond strength of two-step etch-and-rinse adhesive systems (Adper Single Bond 2 [SB2] and XP-Bond [XP]). Materials and Methods: Sixty extracted human third molars were sectioned perpendicularly to their long axes to expose flat occlusal dentin surfaces. In experiment 1, dentin was treated with ozone before the bonding procedure, while in experiment 2, ozone was applied to resin/dentin bonded interfaces. In experiment 1, dentin surfaces were treated either with ozone gas (2100 ppm), ozonated water (3.5 ppm), or distilled water for 120 s, and then bonded with SB2 or XP according to manufacturers' instructions. Hybrid composite buildups were incrementally constructed and the teeth were sectioned into resin-dentin sticks (0.8 mm²). In experiment 2, dentin surfaces were first bonded with SB2 or XP, composite buildups were constructed, and bonded sticks obtained. The sticks were treated with ozone as previously described. Bonded sticks were tested under tensile stress at 1 mm/min. Silver nitrate impregnation along the resin/dentin interfaces was also evaluated under SEM. Results: Two-way ANOVA (adhesive and ozone treatment) detected no significant effect for the cross-product interaction and the main factors in the two experiments (p > 0.05), which was confirmed by the photomicrographs. Conclusion: Ozone gas and ozonated water used before the bonding procedure or on resin/dentin bonded interfaces have no deleterious effects on the bond strengths and interfaces.
Abstract:
Correlations between GABA-A receptor (GABA-A-R) activity and the molecular organization of synaptosomal membranes (SM) were studied along the protocol for cholesterol (Cho) extraction with β-cyclodextrin (β-CD). The mere pre-incubation (PI) at 37 °C accompanying the β-CD treatment was an underlying source of perturbations, increasing [³H]-FNZ maximal binding (70%) and Kd (38%), plus a stiffening of the SMs' hydrocarbon core region. The latter was inferred from an increased compressibility modulus (K) of SM-derived Langmuir films, a blue-shifted DPH fluorescence emission spectrum, and hysteresis in DPH fluorescence anisotropy (A_DPH) in SMs submitted to a heating-cooling cycle (4-37-4 °C), with A_DPH,heating < A_DPH,cooling. Compared with PI samples, the β-CD treatment reduced Bmax by 5%, which correlated with a 45% decrement in the relative Cho content of the SM, a decrease in K and in the order parameter of the EPR spectrum of a lipid spin probe labeled at C5 (5-SASL), and a significant increase in A_TMA-DPH. PI, but not the β-CD treatment, could affect the binding affinity. EPR spectra of 5-SASL complexed with β-CD, SM-partitioned, and free in solution showed that, contrary to what is usually assumed, β-CD is not completely eliminated from the system through centrifugation washings. It was concluded that the β-CD treatment involves at least three different types of events affecting membrane organization: (a) the effect of PI on membrane annealing, (b) the effect of residual β-CD on SM organization, and (c) Cho depletion. Consequently, molecular stiffness increases within the membrane core and decreases near the polar head groups, leading to a net increase in GABA-A-R density relative to untreated samples.
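The reported shifts in maximal binding and Kd can be visualized with the standard one-site saturation-binding equation. The numbers below are arbitrary illustrative units, and the one-site model is an assumption for illustration, not a claim about the authors' fitting procedure:

```python
def specific_binding(ligand, bmax, kd):
    """One-site saturation binding: B = Bmax * L / (Kd + L)."""
    return bmax * ligand / (kd + ligand)

# hypothetical control parameters (arbitrary units)
bmax0, kd0 = 100.0, 1.0
# pre-incubation effect reported above: Bmax +70%, Kd +38%
bmax_pi, kd_pi = bmax0 * 1.70, kd0 * 1.38

# at a ligand concentration equal to the control Kd,
# binding rises despite the lower affinity (higher Kd)
print(specific_binding(kd0, bmax0, kd0))      # 50.0 (half of Bmax0)
print(specific_binding(kd0, bmax_pi, kd_pi))  # ~71.4
```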
Abstract:
Within the nutritional context, the supplementation of microminerals in bird feed is often made in quantities exceeding those required, in an attempt to ensure proper animal performance. Dosage-response experiments are very common for determining optimal nutrient levels in feed balancing, and they typically rely on regression models. Nevertheless, routine regression analysis generally does not exploit a priori information about a possible monotonic relationship between the response variable and the dose. Isotonic regression is a least-squares estimation method that generates estimates preserving the data ordering; in isotonic regression theory this information is essential and is expected to increase fitting efficiency. The objective of this work was to use an isotonic regression methodology as an alternative way of analyzing data on Zn deposition in the tibia of male birds of the Hubbard lineage. We considered plateau-response models of quadratic-polynomial and linear-exponential form. In addition to these models, we also proposed fitting a logarithmic model to the data, and the efficiency of the methodology was evaluated by Monte Carlo simulations considering different scenarios for the parametric values. Isotonization of the data yielded an improvement in all the fitting-quality parameters evaluated. Among the models used, the logarithmic one produced parameter estimates most consistent with the values reported in the literature.
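The ordering constraint at the heart of isotonic regression can be enforced with the classical pool-adjacent-violators algorithm (PAVA). This is a generic unweighted sketch of the technique, not the exact procedure used in the study:

```python
def isotonic_pava(y):
    """Least-squares fit of y constrained to be non-decreasing
    (pool-adjacent-violators), the 'isotonization' step for
    dose-response data.
    """
    # each block holds [sum, count]; its mean is the fitted value
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # merge while the previous block's mean exceeds the last one's
        while (len(blocks) > 1 and
               blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]):
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    # expand each block back to one fitted value per observation
    fit = []
    for s, n in blocks:
        fit.extend([s / n] * n)
    return fit

# a dip at the third dose is pooled with its neighbour
print(isotonic_pava([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```

The plateau-response or logarithmic model would then be fitted to the isotonized values rather than to the raw means.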
Abstract:
Background: The ankle-brachial index (ABI) can assess peripheral artery disease and predict mortality in prevalent patients on hemodialysis. However, ABI has not yet been tested in incident patients, who present significant mortality. Typically, ABI is measured by Doppler, which is not always available, limiting its use in most patients. We therefore hypothesized that ABI, evaluated by a simplified method, can predict mortality in an incident hemodialysis population. Methodology/Principal Findings: We studied 119 patients with ESRD who had started hemodialysis three times weekly. ABI was calculated by using two oscillometric blood pressure devices simultaneously. Patients were followed until death or the end of the study. ABI was categorized into two groups: normal (0.9-1.3) or abnormal (<0.9 or >1.3). There were 33 deaths during a median follow-up of 12 months (range, 3 to 24 months). Age (per year) (hazard ratio, 1.026; p = 0.014) and abnormal ABI (hazard ratio, 3.664; p = 0.001) were independently related to mortality in a multiple regression analysis. Conclusions: An easy and inexpensive technique to measure ABI was tested and shown to be significant in predicting mortality. Both low and high ABI were associated with mortality in incident patients on hemodialysis. This technique allows nephrologists to identify high-risk patients and offers the opportunity for early intervention that could alter the natural progression in this population.
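The index and cut-offs described above reduce to a simple ratio. The helper below is an illustrative sketch using the thresholds from the abstract; the readings are hypothetical values in mmHg:

```python
def ankle_brachial_index(ankle_systolic, brachial_systolic):
    """ABI = ankle systolic pressure / brachial systolic pressure,
    here from simultaneous oscillometric readings (mmHg)."""
    return ankle_systolic / brachial_systolic

def abi_category(abi):
    """Normal: 0.9-1.3; abnormal: <0.9 or >1.3 (both low and high
    ABI were associated with mortality in the study above)."""
    return "normal" if 0.9 <= abi <= 1.3 else "abnormal"

print(abi_category(ankle_brachial_index(130, 120)))  # normal (~1.08)
print(abi_category(ankle_brachial_index(100, 120)))  # abnormal (~0.83)
```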
Abstract:
Through-wall imaging (TWI) may provide vital information on an interior environment in cases where physically entering it would pose danger to the person involved. Ultra-wideband (UWB) radar is an emerging technology that offers high spatial resolution, as opposed to narrowband radars. Thus, TWI applications using UWB radar have become a growing field of research, with several applications in the civil and defense areas such as rescue and surveillance. For this study, a prototype UWB radar system for TWI was built. Analyses and results of several kinds of experiments are presented, namely the detection and visualization of metallic targets behind a wooden-board wall and a concrete-block wall. The results are encouraging and show the advantages of using UWB radar for TWI. (C) 2011 Wiley Periodicals, Inc. Microwave Opt Technol Lett, 54:339-344, 2012; View this article online at wileyonlinelibrary.com. DOI 10.1002/mop.26543
Abstract:
Further advances in magnetic hyperthermia might be limited by biological constraints, such as using sufficiently low frequencies and low field amplitudes to avoid harmful eddy currents inside the patient's body. This motivates the need to optimize the heating efficiency of the nanoparticles, quantified by the specific absorption rate (SAR). Among the several properties currently under research, one of particular importance is the transition from the linear to the non-linear regime that takes place as the field amplitude is increased, an aspect where the magnetic anisotropy is expected to play a fundamental role. In this paper we investigate the heating properties of cobalt ferrite and maghemite nanoparticles under the influence of a 500 kHz sinusoidal magnetic field with varying amplitude, up to 134 Oe. The particles were characterized by TEM, XRD, FMR and VSM, from which the most relevant morphological, structural and magnetic properties were inferred. Both materials have similar size distributions and saturation magnetizations, but strikingly different magnetic anisotropies. From magnetic hyperthermia experiments we found that, while at low fields maghemite is the best nanomaterial for hyperthermia applications, above a critical field, close to the transition from the linear to the non-linear regime, cobalt ferrite becomes more efficient. The results were also analyzed with respect to the energy conversion efficiency and compared with dynamic hysteresis simulations. Additional analysis with nickel, zinc and copper ferrite nanoparticles of similar sizes confirmed the importance of the magnetic anisotropy and the damping factor. Further, analysis of the characterization parameters suggested core-shell nanostructures, probably due to a surface passivation process during nanoparticle synthesis.
Finally, we discussed the effect of particle-particle interactions and their consequences, in particular regarding discrepancies between estimated parameters and theoretical predictions. Copyright 2012 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. [http://dx.doi.org/10.1063/1.4739533]
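For context, SAR is commonly estimated calorimetrically from the initial heating slope of the sample. The sketch below shows that standard definition with made-up numbers; it is not necessarily the exact protocol used in the paper above:

```python
def sar_calorimetric(specific_heat, mass_sample, mass_magnetic, dT_dt):
    """Calorimetric specific absorption rate estimate:
    SAR = c * (m_sample / m_magnetic) * dT/dt  [W per g of magnetic material].

    specific_heat: heat capacity of the suspension, J/(g*K)
    dT_dt:         initial temperature slope under the AC field, K/s
    """
    return specific_heat * (mass_sample / mass_magnetic) * dT_dt

# hypothetical water-like suspension: 1 g sample, 10 mg of nanoparticles,
# initial slope 0.05 K/s under the applied field
print(sar_calorimetric(4.186, 1.0, 0.010, 0.05))  # ~20.9 W/g
```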
Abstract:
At each outer iteration of standard Augmented Lagrangian methods one tries to solve a box-constrained optimization problem with some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable. Therefore, the possibility of finishing the subproblem resolution without satisfying the theoretical stopping conditions is not contemplated in usual convergence theories. However, in practice, one might not be able to solve the subproblem up to the required precision. This may be due to different reasons. One of them is that the presence of an excessively large penalty parameter could impair the performance of the box-constrained optimization solver. In this paper a practical strategy for decreasing the penalty parameter in situations like the one mentioned above is proposed. More generally, the different decisions that may be taken when, in practice, one is not able to solve the Augmented Lagrangian subproblem will be discussed. As a result, an improved Augmented Lagrangian method is presented, which takes into account numerical difficulties in a satisfactory way, preserving suitable convergence theory. Numerical experiments are presented involving all the CUTEr collection test problems.
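The outer-loop safeguard described above can be sketched as a skeleton. The update factors and the `solve_subproblem` interface are hypothetical illustrations of the idea (decrease the penalty when the subproblem solver fails, increase it otherwise), not the paper's actual algorithm:

```python
def augmented_lagrangian(solve_subproblem, x0, rho0=10.0, max_outer=20,
                         tol=1e-8, rho_decrease=0.5, rho_increase=10.0):
    """Skeleton of an outer Augmented Lagrangian loop with a penalty
    safeguard: if the box-constrained solver cannot reach the prescribed
    tolerance, the penalty parameter is decreased instead of increased.

    solve_subproblem(x, rho) is assumed to return
    (x_new, infeasibility, solved_flag).
    """
    x, rho = x0, rho0
    for _ in range(max_outer):
        x, infeas, solved = solve_subproblem(x, rho)
        if infeas <= tol:
            return x, rho              # feasible enough: done
        if solved:
            rho *= rho_increase        # usual update: tighten the penalty
        else:
            rho *= rho_decrease        # safeguard: subproblem too hard
    return x, rho
```

The multiplier updates and the box-constrained inner solver are omitted here; only the penalty-parameter logic the abstract discusses is shown.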
Abstract:
The objective of the present work was to propose a method, based on an F test, for testing the contribution of each level of the factors in a genotypes x environments (GxE) interaction in multi-environment trial analyses. The study evaluated a data set with twenty genotypes and thirty-four environments, in a block design with four replications. The sums of squares within rows (genotypes) and columns (environments) of the GxE matrix were simulated, generating 10,000 experiments to verify the empirical distribution. Results indicate a noncentral chi-square distribution for rows and columns of the GxE interaction matrix, which was also verified by the Kolmogorov-Smirnov test and a Q-Q plot. Application of the F test identified the genotypes and environments that contributed most to the GxE interaction. In this way, geneticists can select good genotypes in their studies.
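The row and column sums of squares that feed such a test come from the standard two-way interaction matrix. The sketch below shows that textbook construction on a toy means table; the F test itself, which requires the simulated null distribution, is not reproduced here:

```python
def gxe_interaction(y):
    """Interaction matrix for a genotypes x environments means table:
    ge[i][j] = y[i][j] - genotype mean - environment mean + grand mean."""
    g, e = len(y), len(y[0])
    row = [sum(r) / e for r in y]                              # genotype means
    col = [sum(y[i][j] for i in range(g)) / g for j in range(e)]  # env. means
    grand = sum(row) / g
    return [[y[i][j] - row[i] - col[j] + grand for j in range(e)]
            for i in range(g)]

# toy 2x2 table with a pure interaction (crossover) pattern
ge = gxe_interaction([[10, 14], [16, 12]])
# each row's sum of squares measures that genotype's contribution to GxE
ss_rows = [sum(v * v for v in r) for r in ge]
print(ge, ss_rows)  # [[-2.0, 2.0], [2.0, -2.0]] [8.0, 8.0]
```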
Abstract:
In the framework of gauged flavour symmetries, new fermions in parity symmetric representations of the standard model are generically needed for the compensation of mixed anomalies. The key point is that their masses are also protected by flavour symmetries and some of them are expected to lie way below the flavour symmetry breaking scale(s), which has to occur many orders of magnitude above the electroweak scale to be compatible with the available data from flavour changing neutral currents and CP violation experiments. We argue that, actually, some of these fermions would plausibly get masses within the LHC range. If they are taken to be heavy quarks and leptons, in (bi)-fundamental representations of the standard model symmetries, their mixings with the light ones are strongly constrained to be very small by electroweak precision data. The alternative chosen here is to exactly forbid such mixings by breaking of flavour symmetries into an exact discrete symmetry, the so-called proton-hexality, primarily suggested to avoid proton decay. As a consequence of the large value needed for the flavour breaking scale, those heavy particles are long-lived and rather appropriate for the current and future searches at the LHC for quasi-stable hadrons and leptons. In fact, the LHC experiments have already started to look for them.
Abstract:
Context. The Milky Way (MW) bulge is a fundamental Galactic component for understanding the formation and evolution of galaxies, in particular our own. The ESO Public Survey VISTA Variables in the Via Lactea (VVV) is a deep near-IR survey mapping the Galactic bulge and southern plane. For the bulge area in particular, VVV covers ~315 deg². Data taken during 2010 and 2011 covered the entire bulge area in the J, H and Ks bands. Aims. We used VVV data for the whole bulge area as a single and homogeneous data set to build, for the first time, a single colour-magnitude diagram (CMD) for the entire Galactic bulge. Methods. Photometric data in the J, H and Ks bands were combined to produce a single, huge data set containing 173,150,467 sources in the three bands, for the ~315 deg² covered by VVV in the bulge. Selecting only the data points flagged as stellar, the total number of sources is 84,095,284. Results. We built the largest colour-magnitude diagrams published to date, containing more than 173.1 million sources for all data points, and more than 84.0 million sources when accounting for the stellar sources only. The CMD has a complex shape, mostly owing to the complexity of the stellar population and the effects of extinction and reddening towards the Galactic centre. The red clump (RC) giants appear double in magnitude at b ~ -8° to -10°, while in the inner part (b ~ -3°) they appear to spread in colour, or even to split into a secondary peak. Stellar population models show the predominance of main-sequence and giant stars. The analysis of the outermost bulge area reveals a well-defined sequence of late K and M dwarfs, seen at (J - Ks) ~ 0.7-0.9 mag and Ks ≳ 14 mag. Conclusions. The interpretation of the CMD yields important information about the MW bulge, showing the fingerprint of its structure and content.
We report a well-defined red dwarf sequence in the outermost bulge, which is important for the planetary transit searches of VVV. The double RC in magnitude seen in the outer bulge is the signature of the X-shaped MW bulge, while the spreading of the RC in colour, and even its splitting into a secondary peak, are caused by reddening effects. The region around the Galactic centre is harder to interpret because it is strongly affected by reddening and extinction.
Abstract:
A specific separated-local-field NMR experiment, dubbed Dipolar-Chemical-Shift Correlation (DIPSHIFT), is frequently used to study molecular motions by probing reorientations through the changes in X-H dipolar couplings and T2. In systems where the coupling is weak or the reorientation angle is small, a recoupled variant of the DIPSHIFT experiment is applied, in which the effective dipolar coupling is amplified by a REDOR-like π-pulse train. However, a previously described constant-time variant of this experiment is not sensitive to the motion-induced T2 effect, which precludes the observation of motions over a large range of rates, from hundreds of Hz to around a MHz. We present a DIPSHIFT implementation which amplifies the dipolar couplings and is still sensitive to T2 effects. Spin-dynamics simulations, analytical calculations and experiments demonstrate the sensitivity of the technique to molecular motions, and suggest the best experimental conditions to avoid imperfections. Furthermore, an in-depth theoretical analysis of the interplay between REDOR-like recoupling and proton decoupling, based on Average-Hamiltonian Theory, was performed, which allowed us to explain the origin of many artifacts found in literature data. (C) 2012 Elsevier Inc. All rights reserved.
Abstract:
Abstract Background: The expression of the glucocorticoid receptor (GR) seems to be a key mechanism in the regulation of glucocorticoid (GC) sensitivity and is potentially involved in cases of GC resistance or hypersensitivity. The aim of this study is to describe a method for quantitation of GR alpha-isoform (GRα) expression using real-time PCR (qrt-PCR), with analytical capabilities to monitor patients, offering standard-curve reproducibility as well as intra- and inter-assay precision. Results: Standard curves were constructed by employing standardized Jurkat cell culture procedures, both for GRα and for BCR (breakpoint cluster region) as a normalizing gene. We evaluated standard curves using five different sets of cell culture passages, RNA extraction, reverse transcription, and qrt-PCR quantification. Intra-assay precision was evaluated using 12 replicates of each gene, for 2 patients, in a single experiment. Inter-assay precision was evaluated in 8 experiments, using duplicate tests of each gene for two patients. Standard curves were reproducible, with a CV (coefficient of variation) of less than 11% and Pearson correlation coefficients above 0.990 for most comparisons. Intra-assay and inter-assay CVs were 2% and 7%, respectively. Conclusion: This is the first method for quantitation of GRα expression with technical characteristics that permit patient monitoring in a fast, simple and robust way.
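The precision metric quoted above is the coefficient of variation across replicates. A minimal sketch, with hypothetical replicate quantities:

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100 -- the
    reproducibility metric reported for the standard curves above."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# hypothetical quantities derived from replicate standard curves
print(coefficient_of_variation([8.0, 10.0, 12.0]))  # 20.0
```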