893 results for Songs (Low voice) with instrumental ensemble.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Background: Cryptococcus neoformans causes meningitis and disseminated infection in healthy individuals, but more commonly in hosts with defective immune responses. Cell-mediated immunity is an important component of the immune response to a great variety of infections, including yeast infections. We aimed to evaluate a specific lymphocyte transformation assay to Cryptococcus neoformans in order to identify immunodeficiency associated with neurocryptococcosis (NCC) as the primary cause of the mycosis. Methods: Healthy volunteers, poultry growers, and HIV-seronegative patients with neurocryptococcosis were tested for cellular immune response. Cryptococcal meningitis was diagnosed by India ink staining of cerebrospinal fluid and cryptococcal antigen testing (Immunomycol-Inc, SP, Brazil). Isolated peripheral blood mononuclear cells were stimulated with C. neoformans antigen, C. albicans antigen, and pokeweed mitogen. The amount of 3H-thymidine incorporated was assessed, and the results were expressed as stimulation index (SI) and log SI, sensitivity, specificity, and cut-off value (receiver operating characteristic curve). We applied unpaired Student's t tests to compare data and considered differences significant at p < 0.05. Results: The lymphocyte transformation assay showed a low capacity, with all the stimuli, for classifying patients as responders and non-responders. The response to heat-killed antigen in patients with neurocryptococcosis was not affected by the CD4+ T cell count, and the intensity of the response did not correlate with the clinical evolution of neurocryptococcosis. Conclusion: Responses to the lymphocyte transformation assay should be analyzed based on a normal range and using more than one stimulus. The use of a cut-off value to classify patients with neurocryptococcosis is inadequate. Statistical analysis should be based on the log transformation of SI. A more purified antigen for evaluating the specific response to C. neoformans is needed.
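The stimulation index and cut-off analysis described above can be made concrete with a short sketch: SI is the ratio of stimulated to unstimulated thymidine incorporation, the log of SI is used for statistics, and a cut-off can be chosen from the ROC curve (here via Youden's J). This is a minimal illustration under those assumptions, not the authors' code; the counts, names and responder labels are hypothetical.

```python
import numpy as np

def stimulation_index(cpm_stimulated, cpm_unstimulated):
    """Stimulation index: ratio of antigen-stimulated to unstimulated 3H-thymidine counts (cpm)."""
    return cpm_stimulated / cpm_unstimulated

def roc_cutoff(log_si, is_responder):
    """Pick the log(SI) cut-off that maximizes Youden's J = sensitivity + specificity - 1."""
    best_cut, best_j = None, -np.inf
    for cut in np.unique(log_si):
        pred = log_si >= cut
        tp = np.sum(pred & is_responder)
        fn = np.sum(~pred & is_responder)
        tn = np.sum(~pred & ~is_responder)
        fp = np.sum(pred & ~is_responder)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        j = sensitivity + specificity - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical counts per minute and responder labels, for illustration only
cpm_stim = np.array([5200., 800., 4100., 650., 3900., 900.])
cpm_unstim = np.array([400., 500., 350., 480., 420., 510.])
log_si = np.log10(stimulation_index(cpm_stim, cpm_unstim))
is_responder = np.array([True, False, True, False, True, False])
print(roc_cutoff(log_si, is_responder))
```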
Abstract:
Background Toxoplasma gondii is an intracellular parasite that causes relevant clinical disease in humans and animals. Several studies have been performed to understand the interactions between proteins of the parasite and host cells. SAG2A is a 22 kDa protein found mainly on the surface of tachyzoites. In the present work, our aim was to correlate the predicted three-dimensional structure of this protein with the immune response of infected hosts. Methods To accomplish our goals, we performed in silico analyses of the amino acid sequence of SAG2A, correlating the predictions with in vitro stimulation of antigen-presenting cells and serological assays. Results Structure modeling predicts that the SAG2A protein possesses an unfolded C-terminal end, whose conformation varies among distinct strain types of T. gondii. This region of the protein shelters a known B-cell immunodominant epitope, which shows low identity with its closest phylogenetically related protein, an orthologue predicted in Neospora caninum. In agreement with the in silico observations, sera of T. gondii-infected mice and goats recognized recombinant SAG2A, whereas no serological cross-reactivity was observed with samples from N. caninum-infected animals. Additionally, the C-terminal end of the protein was able to down-modulate pro-inflammatory responses of activated macrophages and dendritic cells. Conclusions Altogether, we demonstrate herein that the recombinant SAG2A protein from T. gondii is immunologically relevant at the host-parasite interface and may be targeted in therapeutic and diagnostic procedures designed against the infection.
Abstract:
This thesis gives an overview of the history of gold per se and of gold as an investment good, and offers some institutional details about gold and other precious metal markets. The goal of this study is to investigate the role of gold as a store of value and a hedge against negative market movements in turbulent times. I investigate gold's ability to act as a safe haven during periods of financial stress by employing instrumental variable techniques that allow for time-varying conditional covariance. I find broad evidence supporting the view that gold acts as an anchor of stability during market downturns. During periods of high uncertainty and low stock market returns, gold tends to have higher than average excess returns. The effectiveness of gold as a safe haven is enhanced during periods of extreme crisis: the largest peaks are observed during the global financial crisis of 2007-2009 and, in particular, during the Lehman default (October 2008). A further goal of this thesis is to investigate whether gold provides protection from tail risk. I address the issue of asymmetric precious metal behavior conditional on stock market performance and provide empirical evidence on the contribution of gold to a portfolio's systematic skewness and kurtosis. I find that gold has positive coskewness with the market portfolio when the market is skewed to the left. Moreover, gold shows low cokurtosis with market returns during volatile periods. I therefore show that gold is a desirable investment for risk-averse investors, since it tends to decrease both the probability of experiencing extreme bad outcomes and the magnitude of losses should such events occur. Gold thus bears very important and under-researched characteristics as an asset class per se, which this thesis has contributed to addressing and unveiling.
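The systematic skewness and kurtosis contributions discussed above are usually measured as coskewness and cokurtosis with the market portfolio. The sketch below shows the standard sample versions of these moments; the return series are synthetic placeholders, not the data used in the thesis.

```python
import numpy as np

def coskewness(asset, market):
    """Sample coskewness of an asset with the market:
    E[(r_a - mu_a)(r_m - mu_m)^2] / (sigma_a * sigma_m^2)."""
    a = asset - asset.mean()
    m = market - market.mean()
    return np.mean(a * m**2) / (asset.std() * market.std()**2)

def cokurtosis(asset, market):
    """Sample cokurtosis of an asset with the market:
    E[(r_a - mu_a)(r_m - mu_m)^3] / (sigma_a * sigma_m^3)."""
    a = asset - asset.mean()
    m = market - market.mean()
    return np.mean(a * m**3) / (asset.std() * market.std()**3)

# Hypothetical monthly excess returns, for illustration only
rng = np.random.default_rng(0)
gold = rng.normal(0.002, 0.04, 240)
stocks = rng.normal(0.005, 0.05, 240)
print(coskewness(gold, stocks), cokurtosis(gold, stocks))
```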
Abstract:
PURPOSE To develop a score predicting the risk of adverse events (AEs) in pediatric patients with cancer who experience fever and neutropenia (FN) and to evaluate its performance. PATIENTS AND METHODS Pediatric patients with cancer presenting with FN induced by nonmyeloablative chemotherapy were observed in a prospective multicenter study. A score predicting the risk of future AEs (ie, serious medical complication, microbiologically defined infection, radiologically confirmed pneumonia) was developed from a multivariate mixed logistic regression model. Its cross-validated predictive performance was compared with that of published risk prediction rules. RESULTS An AE was reported in 122 (29%) of 423 FN episodes. In 57 episodes (13%), the first AE was known only after reassessment following 8 to 24 hours of inpatient management. Predicting AEs at reassessment was better than prediction at presentation with FN. A differential leukocyte count did not increase the predictive performance. The score predicting future AEs in the 358 episodes without a known AE at reassessment used the following four variables: preceding chemotherapy more intensive than acute lymphoblastic leukemia maintenance (weight = 4), hemoglobin ≥ 90 g/L (weight = 5), leukocyte count less than 0.3 G/L (weight = 3), and platelet count less than 50 G/L (weight = 3). A score (sum of weights) ≥ 9 predicted future AEs. The cross-validated performance of this score exceeded that of published risk prediction rules. At an overall sensitivity of 92%, 35% of the episodes were classified as low risk, with a specificity of 45% and a negative predictive value of 93%. CONCLUSION This score, based on four routinely accessible characteristics, accurately identifies pediatric cancer patients with FN at risk for AEs after reassessment.
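Because the score is simply a weighted sum of four dichotomized variables with a decision threshold of 9, it can be written down directly. The sketch below restates the rule as given in the abstract; the function and variable names are ours, with units as stated above.

```python
def fn_risk_score(chemo_more_intensive_than_all_maintenance: bool,
                  hemoglobin_g_per_l: float,
                  leukocytes_g_per_l: float,
                  platelets_g_per_l: float) -> int:
    """Sum of weights over the four predictors described in the abstract."""
    score = 0
    if chemo_more_intensive_than_all_maintenance:
        score += 4
    if hemoglobin_g_per_l >= 90:
        score += 5
    if leukocytes_g_per_l < 0.3:
        score += 3
    if platelets_g_per_l < 50:
        score += 3
    return score

def predicts_future_ae(score: int) -> bool:
    """Scores of 9 or more flag the episode as at risk for a future adverse event."""
    return score >= 9

# Hypothetical episode, for illustration only
s = fn_risk_score(True, 95, 0.2, 40)
print(s, predicts_future_ae(s))  # 15 True
```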
Abstract:
This article contributes to research on the demographics and public health of urban populations in preindustrial Europe. The key source is a burial register that contains information on the deceased, such as age and sex, residence, and cause of death. This register is one of the earliest compilations of individual-level data with this high degree of completeness and consistency. Critical assessment of the register's origin, formation and upkeep promises high validity and reliability. Between 1805 and 1815, 4,390 deceased inhabitants were registered. Information concerning these individuals provides the basis for this study. Life tables of Bern's population were created using different models. The causes of death were classified and their frequencies calculated. Furthermore, the susceptibility of age groups to certain causes of death was established. Special attention was given to the causes of death and mortality of newborns, infants and women giving birth. In comparison to other cities and regions in Central Europe, Bern's mortality structure shows low rates for infants (q0=0.144) and children (q1-4=0.068). This could simply indicate better living conditions. Life expectancy at birth was 43 years. Mortality was high in winter and spring, and decreased in summer to a low level with a short rise in August. The study of the causes of death was hampered by difficulties in translating early 19th century nomenclature into the modern medical system. Nonetheless, death from metabolic disorders, illnesses of the respiratory system, and debilitation were the most prominent causes in Bern. Apparently, the worst killer of infants up to 12 months was the "gichteren", an obsolete German term for lethal spasmodic convulsions. The exact modern identification of this disease remains unclear; possibilities such as infant tetanus or infant epilepsy are discussed. The maternal death rate of 0.72% is comparable with values calculated from contemporaneous sources. The relevance of childbed fever in the early 1800s was low. Bern's data indicate that the extent of deaths related to childbirth in this period is overestimated. This research has explicit interdisciplinary value for various fields in both the humanities and the natural sciences, since the information reported here represents the complete age and sex structure of a deceased population. Physical anthropologists can use these data as a true reference group for their palaeodemographic studies of preindustrial Central Europe in the late 18th and early 19th centuries. It is a call to both historians and anthropologists to use our resources to better effect through a combination of methods and an exchange of knowledge.
Abstract:
Hooking up has become a common and public practice on university campuses across the country. While much research has determined who is doing it, with whom they are doing it, and what they are hoping to get out of it, little work has been done to determine what personal factors motivate students to participate in the culture. A total of 407 current students were surveyed to assess the impact of one's relationship with his/her opposite-sex parent on his/her attitudes toward and engagement in hookup culture on campus. Scores were assigned to the participants to divide them into categories of high and low attachment with their parent. It was hypothesized that heterosexual students who do not perceive themselves as having a strong, close, positive relationship with their opposite-sex parent would be more likely to engage in or attempt to engage in casual sexual behavior. This pattern was expected to be strongest for women on campus. Men and women differed in their reasons for hooking up, with whom they hook up, to what they attribute the behaviors of their peers, and what they hope to gain from their sexual interactions. Effects of parent-child relationships were significant only for women who reported hooking up because "others are doing it," men's agreement with the behavior of their peers, and women's overall satisfaction with their hookups. Developmental, social, and evolutionary perspectives are employed to explain the results. University status was determined to be most telling of the extent to which a student is engaged in hookup culture.
Abstract:
BACKGROUND: Highly active antiretroviral therapy (HAART) is being scaled up in developing countries. We compared baseline characteristics and outcomes during the first year of HAART between HIV-1-infected patients in low-income and high-income settings. METHODS: 18 HAART programmes in Africa, Asia, and South America (low-income settings) and 12 HIV cohort studies from Europe and North America (high-income settings) provided data for 4810 and 22,217 treatment-naive adult patients starting HAART, respectively. All patients from high-income settings and 2725 (57%) patients from low-income settings were actively followed up and included in survival analyses. FINDINGS: Compared with high-income countries, patients starting HAART in low-income settings had lower CD4 cell counts (median 108 cells per μL vs 234 cells per μL), were more likely to be female (51% vs 25%), and were more likely to start treatment with a non-nucleoside reverse transcriptase inhibitor (NNRTI) (70% vs 23%). At 6 months, the median number of CD4 cells gained (106 cells per μL vs 103 cells per μL) and the percentage of patients reaching HIV-1 RNA levels lower than 500 copies/mL (76% vs 77%) were similar. Mortality was higher in low-income settings (124 deaths during 2236 person-years of follow-up) than in high-income settings (414 deaths during 20,532 person-years). The adjusted hazard ratio (HR) of mortality comparing low-income with high-income settings fell from 4.3 (95% CI 1.6-11.8) during the first month to 1.5 (0.7-3.0) during months 7-12. The provision of treatment free of charge in low-income settings was associated with lower mortality (adjusted HR 0.23; 95% CI 0.08-0.61). INTERPRETATION: Patients starting HAART in resource-poor settings have increased mortality rates in the first months on therapy compared with those in developed countries. Timely diagnosis and assessment of treatment eligibility, coupled with free provision of HAART, might reduce this excess mortality.
Abstract:
The number of record-breaking events expected to occur in a strictly stationary time-series depends only on the number of values in the time-series, regardless of distribution. This holds whether the events are record-breaking highs or lows and whether we count from past to present or present to past. However, these symmetries are broken in distinct ways by trends in the mean and variance. We define indices that capture this information and use them to detect weak trends from multiple time-series. Here, we use these methods to answer the following questions: (1) Is there a variability trend among globally distributed surface temperature time-series? We find a significant decreasing variability over the past century for the Global Historical Climatology Network (GHCN). This corresponds to about a 10% change in the standard deviation of inter-annual monthly mean temperature distributions. (2) How are record-breaking high and low surface temperatures in the United States affected by time period? We investigate the United States Historical Climatology Network (USHCN) and find that the ratio of record-breaking highs to lows in 2006 increases as the time-series extend further into the past. When we consider the ratio as it evolves with respect to a fixed start year, we find it is strongly correlated with the ensemble mean. We also compare the ratios for USHCN and GHCN (minus USHCN stations). We find the ratios grow monotonically in the GHCN data set, but not in the USHCN data set. (3) Do we detect either mean or variance trends in annual precipitation within the United States? We find that the total annual and monthly precipitation in the United States (USHCN) has increased over the past century. Evidence for a trend in variance is inconclusive.
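The distribution-free baseline mentioned in the first sentence is easiest to see for independent, identically distributed values: the expected number of record highs in a series of length n is the harmonic number H_n = 1 + 1/2 + ... + 1/n, regardless of the underlying distribution. The sketch below checks this by simulation; it illustrates the baseline only, not the authors' trend indices, and all names are ours.

```python
import numpy as np

def count_records(x):
    """Number of record-breaking highs, counting from the start of the series."""
    running_max = -np.inf
    n_records = 0
    for value in x:
        if value > running_max:
            n_records += 1
            running_max = value
    return n_records

n, trials = 100, 5000
rng = np.random.default_rng(1)
simulated = np.mean([count_records(rng.normal(size=n)) for _ in range(trials)])
harmonic = sum(1.0 / k for k in range(1, n + 1))
print(simulated, harmonic)  # both close to about 5.19 for n = 100
```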
Abstract:
The spatial context is critical when assessing present-day climate anomalies, attributing them to potential forcings and making statements about their frequency and severity in a long-term perspective. Recent international initiatives have expanded the number of high-quality proxy records and developed new statistical reconstruction methods. These advances allow more rigorous regional past temperature reconstructions and, in turn, the possibility of evaluating climate models on policy-relevant, spatio-temporal scales. Here we provide a new proxy-based, annually resolved, spatial reconstruction of European summer (June–August) temperature fields back to 755 CE based on Bayesian hierarchical modelling (BHM), together with estimates of the European mean temperature variation since 138 BCE based on BHM and composite-plus-scaling (CPS). Our reconstructions compare well with independent instrumental and proxy-based temperature estimates, but suggest a larger amplitude in summer temperature variability than previously reported. Both the CPS and BHM reconstructions indicate that the mean 20th century European summer temperature was not significantly different from that of some earlier centuries, including the 1st, 2nd, 8th and 10th centuries CE. The 1st century (in BHM also the 10th century) may even have been slightly warmer than the 20th century, but the difference is not statistically significant. Comparing each 50-year period with the 1951–2000 period reveals a similar pattern. Recent summers, however, have been unusually warm in the context of the last two millennia, and there is no 30-year period in either reconstruction that exceeds the mean European summer temperature of the last three decades (1986–2015 CE). A comparison with an ensemble of climate model simulations suggests that the reconstructed European summer temperature variability over the period 850–2000 CE reflects changes in both internal variability and external forcing on multi-decadal time scales. For pan-European temperatures we find slightly better agreement between the reconstruction and the model simulations with high-end estimates of total solar irradiance. Temperature differences between the medieval period, the recent period and the Little Ice Age are larger in the reconstructions than in the simulations. This may indicate inflated variability in the reconstructions, a lack of sensitivity of the simulated European climate to changes in external forcing or missing processes, and/or an underestimation of internal variability on centennial and longer time scales.
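Of the two reconstruction approaches named above, composite-plus-scaling (CPS) is the simpler: standardized proxy series are averaged into a composite, which is then rescaled so that its mean and variance match the instrumental target over a calibration window. The sketch below illustrates that idea on synthetic data; the array names, calibration window and data are ours, not those of the study.

```python
import numpy as np

def cps_reconstruction(proxies, instrumental, calib_idx):
    """Composite-plus-scaling: average standardized proxies, then rescale the
    composite so its mean and variance match the instrumental target over the
    calibration window (rows = years, columns = proxy records)."""
    standardized = (proxies - proxies.mean(axis=0)) / proxies.std(axis=0)
    composite = standardized.mean(axis=1)
    scale = instrumental[calib_idx].std() / composite[calib_idx].std()
    offset = instrumental[calib_idx].mean() - scale * composite[calib_idx].mean()
    return scale * composite + offset

# Illustrative synthetic data: 1000 years, 5 proxies, last 150 years used for calibration
rng = np.random.default_rng(2)
true_temp = np.cumsum(rng.normal(0, 0.05, 1000))
proxies = true_temp[:, None] + rng.normal(0, 0.5, (1000, 5))
calib_idx = np.arange(850, 1000)
recon = cps_reconstruction(proxies, true_temp, calib_idx)
print(np.corrcoef(recon, true_temp)[0, 1])
```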
Abstract:
Subpolar regions are key areas for studying natural climate variability, due to their high sensitivity to rapid environmental changes, particularly through sea surface temperature (SST) variations. Here, we have tested three independent organic temperature proxies (UK'37, TEX86 and LDI) for their potential applicability to SST reconstruction in the subpolar region around Iceland. UK'37, TEX86 and TEXL86 temperature estimates from suspended particulate matter showed a substantial discrepancy with instrumental data, while long chain alkyl diols were below the detection limit at most of the stations. In the northern Iceland Basin, sedimenting particles revealed seasonality in lipid fluxes, i.e., high fluxes of alkenones and GDGTs were measured during late spring-summer, and high fluxes of long chain alkyl diols during late summer. The flux-weighted average temperature estimates had significant negative (ca. 2.3°C for UK'37) and positive (up to 5°C for TEX86) offsets relative to satellite-derived SSTs and to temperature estimates derived from the underlying surface sediment. UK'37 temperature estimates from surface sediments around Iceland correlate well with summer mean sea surface temperatures, while TEX86-derived temperatures correspond with both annual and winter mean 0-200 m temperatures, suggesting a subsurface temperature signal. Anomalous LDI-SST values in surface sediments, and the low mass flux of 1,13- and 1,15-diols compared to 1,14-diols, suggest that Proboscia diatoms rather than eustigmatophyte algae are the major source of long chain alkyl diols in this area, and therefore the LDI cannot be applied in this region.
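For orientation, UK'37 is the relative abundance of the di- versus tri-unsaturated C37 alkenones, which is converted to SST with a linear core-top calibration. The sketch below uses the widely cited Müller et al. (1998) global calibration (UK'37 = 0.033 × SST + 0.044) purely as an illustration; it is not necessarily the calibration applied in this study, and the concentrations are invented.

```python
def uk37_index(c37_2: float, c37_3: float) -> float:
    """UK'37 = [C37:2] / ([C37:2] + [C37:3]), from alkenone concentrations."""
    return c37_2 / (c37_2 + c37_3)

def uk37_to_sst(uk37: float) -> float:
    """Invert the linear global core-top calibration UK'37 = 0.033*SST + 0.044
    (Mueller et al., 1998); an assumed choice here, other calibrations exist."""
    return (uk37 - 0.044) / 0.033

# Illustrative concentrations (arbitrary units)
idx = uk37_index(12.0, 30.0)
print(idx, uk37_to_sst(idx))  # ~0.286 -> ~7.3 degrees C
```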
Abstract:
An impedance-based midspan debonding identification method for RC beams strengthened with FRP strips is presented in this paper, using piezoelectric ceramic (PZT) sensor-actuators. To this end, a two-dimensional electromechanical impedance model is first proposed to predict the electrical admittance of the PZT transducer bonded to the FRP strips of an RC beam. Since the impedance is measured at high frequencies, a spectral element model of the bonded-PZT-FRP strengthened beam is developed. This model, in conjunction with experimental measurements from PZT transducers, is used to present an updating methodology to quantitatively detect interfacial debonding in these kinds of structures. To improve the performance and accuracy of the detection algorithm in a challenging problem such as ours, the structural health monitoring problem is solved with an ensemble process based on particle swarm optimization. An adaptive mesh scheme has also been developed to increase the reliability in locating the area in which debonding initiates. Predictions compared against experimental results have shown the effectiveness and potential of the proposed method to detect, at its earliest stages, a critical failure mode such as midspan debonding of the FRP strip.
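The particle swarm step of the detection algorithm amounts to searching for the model parameters that minimize the misfit between the measured admittance signature and the one predicted by the electromechanical/spectral element model. The sketch below shows a generic version of that loop with a placeholder forward model; all names, parameters and the model itself are illustrative stand-ins, not the formulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def model_admittance(params, freqs):
    """Placeholder forward model: stands in for the spectral element /
    electromechanical impedance model of the bonded-PZT-FRP beam."""
    stiffness, damping = params
    return stiffness / np.sqrt((1 - (freqs / 50.0) ** 2) ** 2 + (damping * freqs / 50.0) ** 2)

freqs = np.linspace(10, 100, 50)
measured = model_admittance([1.0, 0.08], freqs)  # synthetic "measurement"

def misfit(params):
    """Sum of squared differences between predicted and measured admittance."""
    return np.sum((model_admittance(params, freqs) - measured) ** 2)

# Minimal particle swarm optimization over the two model parameters
n_particles, n_iter = 20, 100
pos = rng.uniform([0.1, 0.01], [2.0, 0.5], (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([misfit(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([misfit(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)  # should approach the parameters used to generate "measured"
```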
Abstract:
Many effectors of microtubule assembly in vitro enhance the polymerization of subunits. However, several Saccharomyces cerevisiae genes that affect cellular microtubule-dependent processes appear to act at other steps in assembly and to affect polymerization only indirectly. Here we use a mutant α-tubulin to probe cellular regulation of microtubule assembly. tub1-724 mutant cells arrest at low temperature with no assembled microtubules. The results of several assays reported here demonstrate that the heterodimer formed between Tub1-724p and β-tubulin is less stable than wild-type heterodimer. The unstable heterodimer explains several conditional phenotypes conferred by the mutation. These include the lethality of tub1-724 haploid cells when the β-tubulin–binding protein Rbl2p is either overexpressed or absent. It also explains why the TUB1/tub1-724 heterozygotes are cold sensitive for growth and why overexpression of Rbl2p rescues that conditional lethality. Both haploid and heterozygous tub1-724 cells are inviable when another microtubule effector, PAC2, is overexpressed. These effects are explained by the ability of Pac2p to bind α-tubulin, a complex we demonstrate directly. The results suggest that tubulin-binding proteins can participate in equilibria between the heterodimer and its components.
Abstract:
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology.
Abstract:
Pl. no. L.R.25.