7 results for Gendered practices in working life
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
The construction industry is one of the greatest sources of pollution because of the high level of energy consumed over a building's life cycle. In addition to the energy used while constructing a building, several systems also use power while the building is operating, especially the air-conditioning system. Energy consumption for this system depends, among other factors, on the external air temperature and the required internal temperature of the building. The facades are the elements that present the highest level of ambient heat transfer from the outside to the inside of tall buildings. Thus, the type of facade influences energy consumption over the building's life cycle and, consequently, contributes to the building's CO2 emissions, because these emissions are directly tied to energy consumption. The aim of this study is therefore to help develop a methodology for evaluating the CO2 emissions generated during the life cycle of office building facades. The results, based on the parameters used in this study, show that facades using structural glazing with uncolored glass emit the most CO2 throughout their life cycle, followed by brick facades covered with composite aluminum panels (ACM, Aluminum Composite Material), facades using structural glazing with reflective glass, and brick facades with plaster coating. The facade typology that emits the least CO2 is brickwork with mortar, because its thermal barrier performs better than that of structural glazing facades and the materials used to produce it perform better, in emission terms, than those of brickwork with ACM. Finally, an uncertainty analysis was conducted to verify the accuracy of the results attained.
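The accounting step this abstract describes, converting phase-by-phase energy use into CO2 emissions, can be illustrated with a minimal sketch. This is a hedged illustration, not the paper's method: the phase names, energy figures, and the emission factor below are hypothetical placeholders.

# Hedged sketch of life-cycle CO2 accounting for facade typologies:
# emissions are taken as proportional to energy consumed in each phase.
# All numbers are illustrative placeholders, not values from the study.

EMISSION_FACTOR_KG_PER_KWH = 0.08  # hypothetical grid emission factor

facade_energy_kwh = {
    "structural glazing, uncolored glass": {"production": 9.0e5, "operation": 4.1e6, "maintenance": 2.0e5},
    "brickwork with mortar":               {"production": 6.5e5, "operation": 2.9e6, "maintenance": 1.2e5},
}

def life_cycle_co2_kg(phases: dict[str, float]) -> float:
    """Sum the phase energies and convert to kg CO2 via the emission factor."""
    return sum(phases.values()) * EMISSION_FACTOR_KG_PER_KWH

for typology, phases in facade_energy_kwh.items():
    print(f"{typology}: {life_cycle_co2_kg(phases):,.0f} kg CO2")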
Abstract:
This study reports the implementation of GMPs (Good Manufacturing Practices) in a mozzarella cheese processing plant. The mozzarella cheese manufacturing unit is located in the southwestern region of the state of Paraná, Brazil, and processes 20,000 L of milk daily. The implementation of GMP began with the creation of a multidisciplinary team and was carried out in four steps: diagnosis, report of the diagnosis and road map, corrective measures, and follow-up of GMP implementation. The effectiveness of the actions taken and of GMP implementation was assessed by comparing the total percentages of non-conformities and conformities before and after implementation of GMP. Microbiological indicators were also used to assess the implementation of GMP in the mozzarella cheese processing facility. Results showed that the average percentage of conformity increased significantly after the implementation of GMP, from 32% to 66% (p < 0.05). The populations of aerobic microorganisms and total coliforms on equipment were significantly reduced (p < 0.05) after the implementation of GMP, as were the populations of total coliforms on the hands of food handlers (p < 0.05). In conclusion, GMP implementation changed the overall organization of the cheese processing unit, as well as managers' and food handlers' behavior and knowledge of the quality and safety of the products manufactured.
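The before/after conformity comparison (32% vs. 66%, p < 0.05) is the kind of result a two-proportion test can produce. The sketch below is a hedged illustration: the abstract does not state which test was used or how many checklist items were assessed, so the two-proportion z-test and the sample size of 100 items per audit are assumptions.

# Hedged sketch: two-proportion z-test for conformity before vs. after GMP.
# The checklist size (n = 100 per audit) is a hypothetical assumption;
# the abstract reports only the percentages and p < 0.05.
from math import sqrt, erfc

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Return (z, two-sided p) for H0: p1 == p2, using the pooled estimate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

z, p = two_proportion_z_test(32, 100, 66, 100)
print(f"z = {z:.2f}, p = {p:.6f}")  # p < 0.05, consistent with the abstract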
Abstract:
Background: Glial cell line-derived neurotrophic factor (GDNF) is part of the TGF-beta superfamily and is abundantly expressed in the central nervous system. Changes in GDNF homeostasis have been reported in affective disorders. Aim: To assess serum GDNF concentration in elderly subjects with late-life depression, before antidepressant treatment, compared with healthy elderly controls. Methods: Thirty-four elderly subjects with major depression and 37 age- and gender-matched healthy elderly controls were included in this study. The diagnosis of major depression was ascertained by the SCID interview for DSM-IV, and the severity of depressive symptoms was assessed by the Hamilton Depression Rating Scale (HDRS-21). Serum GDNF concentrations were determined by sandwich ELISA. Results: Patients with major depression showed a significant reduction in GDNF levels compared with healthy elderly controls (p < 0.001). GDNF level was also negatively correlated with HDRS-21 scores (r = -0.343, p = 0.003). Discussion: Our data provide evidence that GDNF may be a state marker of depressive episodes in older adults. Changes in the homeostatic control of GDNF production may be a target for the development of new antidepressant strategies.
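The readout step mentioned in the Methods, determining serum concentrations by sandwich ELISA, typically works by interpolating sample absorbances on a fitted standard curve. The sketch below is a hedged illustration assuming a four-parameter logistic (4PL) model; the standard concentrations and absorbance values are invented, not taken from the study.

# Hedged sketch: converting ELISA absorbances to concentrations via a
# four-parameter logistic (4PL) standard curve. All values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL: a = response at zero dose, d = at infinite dose, c = EC50, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standard series (pg/mL) and measured absorbances.
std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_abs  = np.array([0.08, 0.14, 0.25, 0.45, 0.78, 1.25, 1.80, 2.20])

params, _ = curve_fit(four_pl, std_conc, std_abs, p0=[0.05, 1.0, 200.0, 2.4])
a, b, c, d = params

def conc_from_abs(y: float) -> float:
    """Invert the fitted 4PL to recover concentration from absorbance."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(f"Sample at A = 0.60 -> {conc_from_abs(0.60):.1f} pg/mL")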
Abstract:
Background: Although hospitalization is recognized as an important cause of reduction in physical activity in daily life (PADL) in COPD, there is only one study evaluating this effect, and it was performed in European COPD patients, who have a lower PADL than South American COPD patients. Objectives: To investigate the effect of hospitalization due to acute exacerbation on PADL in Brazilian COPD patients and to evaluate the factors that determine physical activity levels during hospitalization and after discharge. Methods: PADL was quantified using a 3-axis accelerometer on the 3rd day of hospitalization and 1 month after discharge in Brazilian COPD patients hospitalized due to disease exacerbation. Six-minute walking distance (6MWD), lower limb strength, and pulmonary function were also evaluated. Results: A total of 20 patients completed the study. During hospitalization, patients spent most of the time (87%) lying down or sitting; however, 1 month after discharge they were walking >40 min/day. In addition, patients with a prior hospitalization had a lower level of physical activity than those without a previous history of hospitalization. The time spent walking during hospitalization was significantly explained by quadriceps strength (r² = 0.29; p < 0.05), while 1 month after discharge the time spent walking was significantly explained only by the 6MWD (r² = 0.51; p = 0.02). Conclusions: Brazilian COPD patients are inactive during hospitalization but become active 1 month after discharge. Previously hospitalized patients are more inactive both during and after exacerbation. Quadriceps strength and 6MWD explain physical activity levels during hospitalization and at home, respectively.
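The quantification step, turning tri-axial accelerometer output into minutes of walking and percent sedentary time, can be sketched as below. This is a hedged illustration only: the epoch length, the activity-magnitude threshold, and the sample data are assumptions, and real activity monitors use validated, device-specific algorithms.

# Hedged sketch: summarizing tri-axial accelerometer epochs into time
# spent walking vs. lying/sitting. Threshold and data are hypothetical.
import math

EPOCH_SECONDS = 60       # one epoch per minute (assumption)
WALK_THRESHOLD = 0.35    # vector-magnitude threshold in g (assumption)

def vector_magnitude(x: float, y: float, z: float) -> float:
    """Combine the three axes into a single activity magnitude."""
    return math.sqrt(x * x + y * y + z * z)

def summarize(epochs: list[tuple[float, float, float]]) -> dict[str, float]:
    """Classify each epoch as walking or sedentary and summarize the day."""
    walking = sum(1 for e in epochs if vector_magnitude(*e) >= WALK_THRESHOLD)
    total = len(epochs)
    return {
        "walking_min_per_day": walking * EPOCH_SECONDS / 60,
        "percent_sedentary": 100.0 * (total - walking) / total,
    }

# Hypothetical day of data: mostly near-zero movement plus some walking.
day = [(0.02, 0.01, 0.03)] * 1250 + [(0.30, 0.25, 0.20)] * 190
print(summarize(day))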
Abstract:
OBJECTIVE: Urinary lithiasis is a common disease. The aim of the present study is to assess the knowledge regarding the diagnosis and treatment of ureteral colic, and the recommendations given to patients, among professionals of an academic hospital. MATERIALS AND METHODS: Sixty-five physicians were interviewed about their previous experience with guidelines on ureteral colic and about how they manage patients with ureteral colic with regard to diagnosis, treatment, and the information provided to patients. RESULTS: Thirty-six percent of the interviewed physicians were surgeons, and 64% were clinicians. Forty-one percent of the physicians reported experience with ureterolithiasis guidelines. Seventy-two percent indicated that they use noncontrast CT scans for the diagnosis of lithiasis. All of the respondents prescribe hydration, primarily to improve stone elimination (39.3%). The average number of drugs used was 3.5. The combination of nonsteroidal anti-inflammatory drugs and opioids was reported by 54% of the physicians (i.e., 59% of surgeons and 25.6% of clinicians used this combination) (p = 0.014). Only 21.3% prescribe alpha blockers. CONCLUSION: Reported experience with guidelines had little impact on several habitual practices. For example, only 21.3% of the respondents indicated that they prescribed alpha blockers, even though alpha blockers may increase stone elimination by up to 54%. Furthermore, although a meta-analysis demonstrated that hydration has no effect on the transit time of the stone or on the pain, the majority of the physicians reported that they prescribed more than 500 ml of fluid. Dipyrone, hyoscine, nonsteroidal anti-inflammatory drugs, and opioids were identified as the most frequently prescribed drug combination. The information given regarding the time for the passage of urinary stones was inconsistent. The development of continuing education programs on ureteral colic in the emergency room is necessary.
Abstract:
The innate and adaptive immune responses of neonates are usually functionally impaired compared with their adult counterparts. The qualitative and quantitative differences in the neonatal immune response put neonates at risk for the development of bacterial and viral infections, resulting in increased mortality. Newborns often exhibit decreased production of Th1-polarizing cytokines and are biased toward Th2-type responses. Studies aimed at understanding the plasticity of the immune response in the neonatal and early infant periods, or that seek to improve neonatal innate immune function with adjuvants or special formulations, are crucial for reducing the infectious disease burden in this susceptible group. Considerable work on identifying potential immunomodulatory therapies has been performed in murine models. This article highlights the strategies used in the emerging field of immunomodulation against bacterial and viral pathogens, focusing on preclinical studies carried out in animal models, with particular emphasis on neonatal-specific immune deficits.
Abstract:
Background: The evolutionary advantages of selective attention are unclear. Since the study of selective attention began, it has been suggested that the nervous system only processes the most relevant stimuli because of its limited capacity [1]. An alternative proposal is that action planning requires the inhibition of irrelevant stimuli, which forces the nervous system to limit its processing [2]. An evolutionary approach might provide additional clues to clarify the role of selective attention.

Methods: We developed Artificial Life simulations in which animals were repeatedly presented with two objects, "left" and "right", each of which could be "food" or "non-food". The animals' neural networks (multilayer perceptrons) had two input nodes, one for each object, and two output nodes that determined whether the animal ate each of the objects. The neural networks also had a variable number of hidden nodes, which determined whether or not they had enough capacity to process both stimuli (Table 1). The evolutionary relevance of the left and right food objects could also vary, depending on how much the animal's fitness increased when ingesting them (Table 1). We compared sensory processing in animals with and without limited capacity, which evolved in simulations in which the objects had the same or different relevances.

Table 1. Nine sets of simulations were performed, varying the values of the food objects and the number of hidden nodes in the neural networks. The values of the left and right food objects were swapped during the second half of the simulations. Non-food objects were always worth -3.

The evolution of the neural networks was simulated by a simple genetic algorithm, sketched below. Fitness was a function of the number of food and non-food objects each animal ate, and the chromosomes determined the node biases and synaptic weights. During each simulation, 10 populations of 20 individuals each evolved in parallel for 20,000 generations; the relevance of the food objects was then swapped and the simulation was run for another 20,000 generations. The neural networks were evaluated by their ability to identify the two objects correctly. The detectability (d') of the left and right objects was calculated using Signal Detection Theory [3].

Results and conclusion: When both stimuli were equally relevant, networks with two hidden nodes processed only one stimulus and ignored the other. With four or eight hidden nodes, they could correctly identify both stimuli. When the stimuli had different relevances, the d' for the most relevant stimulus was higher than the d' for the least relevant stimulus, even when the networks had four or eight hidden nodes. We conclude that selection mechanisms arose in our simulations depending not only on the size of the neural networks but also on the relevance of the stimuli for action.
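The pipeline this abstract outlines, a genetic algorithm evolving small multilayer perceptrons whose decisions are scored with Signal Detection Theory, can be sketched as follows. This is a hedged, heavily downscaled illustration: the food values, mutation scheme, trial counts, and generation count are assumptions, since the abstract does not give the implementation details.

# Hedged sketch: a simple genetic algorithm evolves tiny 2-H-2 perceptrons
# that decide whether to eat the left and right objects; detectability (d')
# is computed per side. Parameters are illustrative, not the study's.
import math
import random
from statistics import NormalDist

H = 2                  # hidden nodes (the study used 2, 4, or 8)
POP, GENS = 20, 500    # abstract: 20 individuals; generations downscaled here
LEFT_FOOD, RIGHT_FOOD, NON_FOOD = 5.0, 1.0, -3.0  # -3 is from the abstract

N_GENES = (2 + 1) * H + (H + 1) * 2  # weights + biases for both layers

def forward(chrom, left, right):
    """Run the 2-H-2 perceptron; return (eat_left, eat_right) decisions."""
    i, hidden = 0, []
    for _ in range(H):
        hidden.append(math.tanh(chrom[i] * left + chrom[i + 1] * right + chrom[i + 2]))
        i += 3
    outs = []
    for _ in range(2):
        s = sum(chrom[i + k] * hidden[k] for k in range(H)) + chrom[i + H]
        outs.append(s > 0.0)
        i += H + 1
    return outs

def fitness(chrom, trials=100):
    """Fitness = summed value of everything the animal chooses to eat."""
    total = 0.0
    for _ in range(trials):
        lf, rf = random.random() < 0.5, random.random() < 0.5
        eat_l, eat_r = forward(chrom, 1.0 if lf else -1.0, 1.0 if rf else -1.0)
        if eat_l:
            total += LEFT_FOOD if lf else NON_FOOD
        if eat_r:
            total += RIGHT_FOOD if rf else NON_FOOD
    return total

def d_prime(chrom, side, trials=2000):
    """Signal Detection Theory d' = z(hit rate) - z(false-alarm rate)."""
    hits = fas = signal = noise = 0
    for _ in range(trials):
        lf, rf = random.random() < 0.5, random.random() < 0.5
        eat = forward(chrom, 1.0 if lf else -1.0, 1.0 if rf else -1.0)[side]
        if (lf if side == 0 else rf):
            signal += 1
            hits += eat
        else:
            noise += 1
            fas += eat
    # Clamp rates away from 0 and 1 so the z-transform stays finite.
    hr = min(max(hits / signal, 1e-3), 1 - 1e-3)
    far = min(max(fas / noise, 1e-3), 1 - 1e-3)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# Simple genetic algorithm: truncation selection plus Gaussian mutation.
pop = [[random.gauss(0, 1) for _ in range(N_GENES)] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 2]
    pop = elite + [[w + random.gauss(0, 0.1) for w in random.choice(elite)]
                   for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
print(f"d' left: {d_prime(best, 0):.2f}   d' right: {d_prime(best, 1):.2f}")

With unequal food values as set above, runs of this sketch tend to show a higher d' for the more valuable side when H is small, which mirrors the qualitative result the abstract reports.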