977 results for working process
Abstract:
Includes bibliography
Abstract:
Tests of spatial aptitude, in particular Visualization, have been shown to be efficient predictors of the academic performance of Technical Drawing students. It has recently been found that Spatial Working Memory (a construct defined as the ability to perform tasks with figurative content that require simultaneous storage and transformation of information) is strongly associated with Visualization. In the present study we analyze the predictive efficiency of a battery of tests that included tests of Visualization, Spatial Working Memory, Spatial Short-term Memory and Executive Function on a sample of first-year engineering students. The results show that Spatial Working Memory (SWM) is the most important predictor of academic success in Technical Drawing. In our view, SWM tests can be useful for detecting as early as possible those students who will require more attention and support in the teaching-learning process.
Abstract:
A visual methods study was conducted with 16 at-risk youth living in a mid-sized Brazilian city. In this study, we focus on data obtained from four of those youth who were working adolescents, aged 13-15, and identify contextually specific protective processes associated with resilience. Through a reciprocal process of collaborative research that included observation, photo elicitation, video recording of a 'day in the life' of each youth, and semi-structured interviews, youth and researchers co-constructed an understanding of adaptive coping in a particularly challenging social environment. By employing techniques from grounded theory to analyze the data, we identified a pattern of protagonism among these youth that enabled them to maintain well-being despite exploitation as working children. This conceptualization of protagonism as a protective process has implications for human service workers who intervene to improve the living conditions of working children. © 2013 Taylor & Francis.
Abstract:
In territories where food production is scattered across many small/medium-sized or even domestic farms, a large amount of heterogeneous residue is produced yearly, since farmers usually carry out different activities on their properties. The amount and composition of farm residues therefore vary widely over the year, according to the particular production process under way at any given time. Coupling high-efficiency micro-cogeneration energy units with easily handled biomass conversion equipment suitable for treating different materials would provide many important advantages to farmers and to the community as well, so that increasing the feedstock flexibility of gasification units is now seen as a further paramount step towards their widespread adoption in rural areas and as a real necessity for their use at small scale. Two main research topics were considered of primary concern for this purpose and are discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. The present work is accordingly divided into two main parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (above all, the temperature profile), in order to point out the main differences which prevent the use of the same conversion unit for different materials.
To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. The differences in syngas production and working conditions (above all, process temperatures) among the considered fuels were related to biomass properties such as elementary composition and ash and water contents. The novelty of this analytical approach was the use of kinetic constant ratios to determine the distribution of oxygen among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; through this, the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which input data for particular biomass materials can be inserted into the model, so that a rapid evaluation of their thermo-chemical conversion properties, based mainly on their chemical composition, can be obtained. Good agreement between the model results and other literature and experimental data was found for almost all the considered materials (except refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms assumed to regulate the main solid conversion steps involved in the gasification process.
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated with the kinetic rates (for pyrolysis and char gasification only) and with the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (above all, particle size), it was found that for all the considered materials the char gasification step is kinetically limited, so temperature is the main working parameter controlling this step. Solids drying is mainly governed by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is achieved almost entirely by radiative heat transfer from the hot reactor walls to the bed of material. For pyrolysis, instead, working temperature, particle size and the nature of the biomass itself (through its pyrolysis heat) all have comparable weight in the process development, so that the corresponding time may depend on any one of these factors, according to the particular fuel being gasified and the conditions established inside the gasifier. The same analysis also led to an estimation of the reaction zone volumes for each biomass fuel, so that a comparison of the dimensions of the differently fed gasification units could finally be made. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit appears suitable for more than one biomass species.
Nevertheless, since the reactor diameters turned out to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and combining the maximum heights of each reaction zone as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified at the time. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would allow complete solid conversion in each case, without noticeably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio. The second part of this work dealt with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially where multi-fuel gasifiers are assumed to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream, and suitable gas cleaning systems must be designed accordingly. In this work, an overall study of gas cleaning line assessment is carried out.
Unlike other research efforts in the same field, the main aim here is to define general arrangements for gas cleaning lines capable of removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate matter, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h respectively. Their performance was examined on the basis of their optimal working conditions (above all efficiency, temperature and pressure drops) and their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements (paths), following technical constraints determined mainly from the performance analysis of the cleaning units themselves and from the likely synergistic effects of contaminants on the proper working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues in path design was the removal of tars from the gas stream, to prevent filter plugging and/or clogging of the line pipes. To this end, a catalytic tar cracking unit was envisaged as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a correspondingly large air consumption for this operation, was calculated in all cases.
Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines also proved unachievable for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for ammonia abatement at high temperature) were unsuitable for the large-scale units, because of the large rise in reactor temperature caused by the exothermic reactions involved. In spite of these limitations, overall arrangements were finally designed for each considered plant size, so that the possibility of cleaning the gas to the required standard was technically demonstrated, even where several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of defined operational parameters, including total pressure drops, total energy losses, number of units and secondary material consumption. On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable.
Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy content of the respective gas streams, the latter based on the lower heating value of the gas only. This overall study of gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
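The kinetic-free model in the abstract above links its mass and energy balances through the assumed equilibrium of the water-gas shift reaction in the gasification zone. As a minimal illustration of that single ingredient (not the thesis's actual Excel model), the sketch below evaluates the equilibrium constant with Moe's empirical correlation and solves for the equilibrium composition by bisection; the function names, temperature and inlet mole numbers are assumptions made for the example.

```python
import math

def wgs_kp(T):
    """Water-gas shift equilibrium constant Kp = (x_CO2 * x_H2) / (x_CO * x_H2O),
    from Moe's empirical correlation; T in kelvin."""
    return math.exp(4577.8 / T - 4.33)

def wgs_equilibrium(n_co, n_h2o, n_co2, n_h2, T):
    """Equilibrium moles for CO + H2O <-> CO2 + H2 at temperature T,
    found by bisection on the extent of reaction x (total moles are
    conserved, so mole numbers can stand in for mole fractions)."""
    kp = wgs_kp(T)
    lo, hi = -min(n_co2, n_h2), min(n_co, n_h2o)
    for _ in range(100):
        x = (lo + hi) / 2.0
        q = ((n_co2 + x) * (n_h2 + x)) / ((n_co - x) * (n_h2o - x))
        if q < kp:
            lo = x  # too little conversion: push the extent up
        else:
            hi = x  # too much conversion: pull the extent down
    return n_co - x, n_h2o - x, n_co2 + x, n_h2 + x

# Equimolar CO/H2O at 1073 K (roughly a downdraft gasification-zone temperature):
co, h2o, co2, h2 = wgs_equilibrium(1.0, 1.0, 0.0, 0.0, 1073.0)
```

Since Kp is close to 1 at this temperature, the mixture ends up near equimolar in all four species; this tight coupling of the CO/H2 split to the zone temperature is why the shift equilibrium can serve as the link between mass and energy balances in models of this kind.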
Abstract:
With business environments no longer confined to geographical borders, the new wave of digital technologies has given organizations an enormous opportunity to bring together their distributed workforce and develop the ability to work together despite being apart (Prasad & Akhilesh, 2002). Presupposing creativity to be a social process, we question how this phenomenon occurs when the configuration of the team is substantially modified. Very little is known about the impact of interpersonal relationships on creativity (Kurtzberg & Amabile, 2001). In order to analyse the ways in which the creative process may develop, we ought to take into consideration the fact that participants are dealing with quite an atypical situation. Firstly, in these cases socialization takes place amongst individuals belonging to a geographically dispersed workplace, where interpersonal relationships are mediated by the computer, and where trust must be developed among persons who have never met one another. Participants not only have multiple addresses and locations, but above all different nationalities, cultures, attitudes, thoughts, working patterns, and languages. Therefore, the central research question of this thesis is as follows: “How does the creative process unfold in globally distributed teams (GDT)?” Taking a qualitative approach, we used the case study of the Business Unit of Volvo 3P, an arm of the Volvo Group. Throughout this research, we interviewed seven teams engaged in the development of a new product in the chassis and cab areas, for the brands Volvo and Renault Trucks; these teams were geographically distributed across Brazil, Sweden, France and India. Our research suggests that corporate values, alongside intrinsic motivation and the task itself, lay down the necessary foundations for the development of the creative process in GDT.
Abstract:
Global economic changes have psychological consequences, and Mr. Lepeska set out to assess these changes in working adults in Lithuania between 1993 and 1997. He surveyed two groups of working adults, 200 people in total, randomly selected and representing different organisations and professions. In both groups around 30% of participants were managers, with the remainder working in non-managerial positions. The participants were surveyed twice, once in 1993 and again in 1997, using various psychodiagnostic tools to measure their psychological characteristics. The results showed that strategies for coping with stress had changed, with problem-solving strategies being used more often, and avoidance behaviour or seeking social support less. Men tended to have rejected these strategies more radically than women. Attitudes towards work had become more positive, with managers' attitudes having changed more significantly than those of employees from lower levels of organisations. Younger people were more positive towards work-related changes, while situational anxiety tended to increase with age, although overall it remained low. Mr. Lepeska found that while there were some indications of increasing individualism in relation to peers, the traditional collective orientation of Lithuanian adults had, if anything, increased. People had become more accepting of an unequal distribution of power, making it difficult to increase the participation of subordinates in decision making. He also noted a tendency for Lithuanians to see their organisations as traditional families, expecting them to take care of them physically and economically in return for loyalty. The strong feminine orientation, with its stress on interpersonal relations and overall quality of life, had also strengthened, but the ability of Lithuanians to take initiative and control their environment was relatively low. 
Mr. Lepeska concludes that organisations should seek to recruit people who are able to adjust more easily to change, and that dominance, individualism, attitudes to work-related change and situational anxiety should be measured in the process of professional selection. There should also be more emphasis on team building and on training managers to maintain closer relationships with their subordinates so as to increase the latter's participation in decision making. Good interpersonal relations can be a strong work motivator, as can special attention to the security needs of older employees.
Abstract:
The AEGISS (Ascertainment and Enhancement of Gastrointestinal Infection Surveillance and Statistics) project aims to use spatio-temporal statistical methods to identify anomalies in the space-time distribution of non-specific, gastrointestinal infections in the UK, using the Southampton area in southern England as a test case. In this paper, we use the AEGISS project to illustrate how spatio-temporal point process methodology can be used in the development of a rapid-response, spatial surveillance system. Current surveillance of gastroenteric disease in the UK relies on general practitioners reporting cases of suspected food-poisoning through a statutory notification scheme, voluntary laboratory reports of the isolation of gastrointestinal pathogens and standard reports of general outbreaks of infectious intestinal disease by public health and environmental health authorities. However, most statutory notifications are made only after a laboratory reports the isolation of a gastrointestinal pathogen. As a result, detection is delayed and the ability to react to an emerging outbreak is reduced. For more detailed discussion, see Diggle et al. (2003). A new and potentially valuable source of data on the incidence of non-specific gastroenteric infections in the UK is NHS Direct, a 24-hour phone-in clinical advice service. NHS Direct data are less likely than reports by general practitioners to suffer from spatially and temporally localized inconsistencies in reporting rates. Also, reporting delays by patients are likely to be reduced, as no appointments are needed. Against this, NHS Direct data sacrifice specificity. Each call to NHS Direct is classified only according to the general pattern of reported symptoms (Cooper et al., 2003). The current paper focuses on the use of spatio-temporal statistical analysis for early detection of unexplained variation in the spatio-temporal incidence of non-specific gastroenteric symptoms, as reported to NHS Direct.
Section 2 describes our statistical formulation of this problem, the nature of the available data and our approach to predictive inference. Section 3 describes the stochastic model. Section 4 gives the results of fitting the model to NHS Direct data. Section 5 shows how the model is used for spatio-temporal prediction. The paper concludes with a short discussion.
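The surveillance goal described in this abstract, flagging unexplained spikes in the space-time incidence of reported symptoms, can be caricatured far more simply than with the paper's spatio-temporal point process model: compare each area-day count against a Poisson baseline and flag cells in the upper tail. The sketch below is only that caricature, not the AEGISS methodology; the district names, baseline rates and significance threshold are invented for illustration.

```python
import math

def poisson_tail(k, mu):
    """P(X >= k) for X ~ Poisson(mu), by summing the pmf up to k - 1."""
    cdf, term = 0.0, math.exp(-mu)
    for i in range(k):
        cdf += term
        term *= mu / (i + 1)
    return 1.0 - cdf

def flag_anomalies(counts, baseline, alpha=0.01):
    """Flag (region, day) cells whose observed count is improbably high
    under the Poisson baseline -- a toy stand-in for model-based
    exceedance probabilities in spatial surveillance."""
    return [cell for cell, k in counts.items()
            if poisson_tail(k, baseline[cell]) < alpha]

# Toy example: an expected 2 cases/day per district; one district spikes.
baseline = {("district_A", 1): 2.0, ("district_B", 1): 2.0}
observed = {("district_A", 1): 3, ("district_B", 1): 9}
print(flag_anomalies(observed, baseline))  # → [('district_B', 1)]
```

The point process approach in the paper improves on this in exactly the ways the toy version is weak: the baseline is estimated from the data with spatial and temporal structure, and exceedances are assessed predictively rather than cell by cell.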
Abstract:
Writing centers work with writers; traditionally, services have been focused on undergraduates taking composition classes. More recently, centers have started to attract a wider client base, including students taking labs that require writing, graduate students, and ESL students learning the conventions of U.S. communication. There are very few centers, however, which identify themselves as open to working with all members of the campus community. Michigan Technological University has one such center. In the Michigan Tech writing center, doors are open to “all students, faculty and staff.” While graduate students, post docs, and professors preparing articles for publication have used the center, for the first time in the collective memory of the center, UAW staff members requested center appointments in the summer of 2008. These working-class employees were in the process of filling out a work-related document, the UAW Position Audit, an approximately seven-page form. This form was their one avenue for requesting a review of the job they were doing; the review was the first step in requesting a raise in job level and pay. This study grew out of the realization that implicit literacy expectations between working-class United Auto Workers (UAW) staff and professional-class staff were complicating the filling out and filing of the position audit form. Professional-class supervisors had designed the form as a measure of fairness, in that each UAW employee on campus was responding to the same set of questions about their work. However, the implicit literacy expectations of supervisors were different from those of many of the employees who were to fill out the form. As a result, questions that were meant to be straightforward to answer were, in the eyes of the employees filling out the form, complex.
Before coming to the writing center, UAW staff had spent months writing out responses to the form; they expressed concerns that their responses still would not meet audience expectations. These writers recognized that they did not yet know exactly what the audience was expecting. The results of this study include a framework for planning writing center sessions that facilitate the acquisition of literacy practices which are new to the user. One important realization from this dissertation is that the social nature of literacy must be kept in the forefront both when planning sessions and when educating tutors to lead these sessions. Work by literacy scholars such as James Paul Gee, Brian Street, and Shirley Brice Heath is used to show that a person can only know those literacy practices that they have previously acquired. In order to acquire new literacy practices, a person must have social opportunities for hands-on practice and mentoring from someone with experience. Writing centers can adapt theory and practices from this dissertation to facilitate sessions for a range of writers wishing to learn “new” literacy practices. This study also calls for specific changes to writing center tutor education.
Abstract:
BACKGROUND: Elevated plasma fibrinogen levels have prospectively been associated with an increased risk of coronary artery disease in different populations. Plasma fibrinogen is a measure of systemic inflammation crucially involved in atherosclerosis. The vagus nerve curtails inflammation via a cholinergic antiinflammatory pathway. We hypothesized that lower vagal control of the heart relates to higher plasma fibrinogen levels. METHODS: Study participants were 559 employees (age 17-63 years; 89% men) of an airplane manufacturing plant in southern Germany. All subjects underwent medical examination, blood sampling, and 24-hour ambulatory heart rate recording while kept on their work routine. The root mean square of successive differences in RR intervals during the night period (nighttime RMSSD) was computed as the heart rate variability index of vagal function. RESULTS: After controlling for demographic, lifestyle, and medical factors, nighttime RMSSD explained 1.7% (P = 0.001), 0.8% (P = 0.033), and 7.8% (P = 0.007), respectively, of the variance in fibrinogen levels in all subjects, men, and women. Nighttime RMSSD and fibrinogen levels were more strongly correlated in women than in men. In all workers, men, and women, respectively, there was a mean +/- SEM increase of 0.41 +/- 0.13 mg/dL, 0.28 +/- 0.13 mg/dL, and 1.16 +/- 0.41 mg/dL in fibrinogen for each millisecond decrease in nighttime RMSSD. CONCLUSIONS: Reduced vagal outflow to the heart correlated with elevated plasma fibrinogen levels independently of established cardiovascular risk factors. This relationship appeared comparably stronger in women than in men. Such an autonomic mechanism might contribute to the atherosclerotic process and its thrombotic complications.
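The vagal index used in this study, nighttime RMSSD, has a simple closed form: the root mean square of the successive differences between consecutive RR intervals. A minimal sketch of that computation (the RR series below is made up for illustration; the study derived its RR intervals from 24-hour ambulatory recordings):

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (in ms),
    a standard time-domain index of vagally mediated heart rate variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Five synthetic RR intervals in milliseconds:
print(round(rmssd([812, 790, 835, 804, 818]), 1))  # → 30.3
```

In practice the night period would first be segmented out of the 24-hour recording and artefactual or ectopic beats filtered before applying this formula.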
Abstract:
Stress is a strong modulator of memory function. However, memory is not a unitary process and stress seems to exert different effects depending on the memory type under study. Here, we explored the impact of social stress on different aspects of human memory, including tests for explicit memory and working memory (for neutral materials), as well as implicit memory (perceptual priming, contextual priming and classical conditioning for emotional stimuli). A total of 35 young adult male students were randomly assigned to either the stress or the control group, with stress being induced by the Trier Social Stress Test (TSST). Salivary cortisol levels were assessed repeatedly throughout the experiment to validate stress effects. The results support previous evidence indicating complex effects of stress on different types of memory: a pronounced working memory deficit was associated with exposure to stress. No performance differences between groups of stressed and unstressed subjects were observed in verbal explicit memory (but note that learning and recall took place within 1 h and immediately following stress) or in implicit memory for neutral stimuli. Stress enhanced classical conditioning for negative but not positive stimuli. In addition, stress improved spatial explicit memory. These results reinforce the view that acute stress can be highly disruptive for working memory processing. They provide new evidence for the facilitating effects of stress on implicit memory for negative emotional materials. Our findings are discussed with respect to their potential relevance for psychiatric disorders, such as post-traumatic stress disorder.
Abstract:
Loss, grief and other problems are events that most people experience during their lives. An earthquake is a disaster that makes people experience loss, grief and other problems simultaneously. This crisis affects survivors in proportion to the dangers they face in their lives. Thus, most of them need support until they can solve their problems, find calm and resume their daily activities. We know that the profession of social work is to assist individuals who are seeking help. But a problem remains: how do social workers help clients efficiently, especially clients who have lived through an earthquake? In general, the role of social workers in helping earthquake survivors is significant. To this end, the present paper describes the process of social casework and the skills social workers require to help survivors. These skills include situational support, instilling hope, consoling, reassuring, focusing, developing solutions, and referral.
Abstract:
A striking feature of virtually all western industrialized countries since the middle of the past century has been the persistent growth of their government sector. From the beginning of the century to the late 1970s, the government expenditure share of gross national product increased from 7% to 36% in the U.S., 11% to 40% in the U.K., and 3% to 25% in Japan. In Germany, it went from 7% to 42% (1872-1978), while in France it soared from 11% to 59% (1872-1979). The evolution of the number of government employees followed a similar pattern. In the U.S., for instance, the average annual growth rate of the government labor force over the period 1899-1974 was 3.17%, compared to a 1.62% average annual growth rate of the working population. Less quantifiable aspects, such as the number and scope of regulations, also reflect a growing public sector.
Abstract:
While the pathology peer review/pathology working group (PWG) model has long been used in mammalian toxicologic pathology to ensure the accuracy, consistency, and objectivity of histopathology data, application of this paradigm to ecotoxicological studies has thus far been limited. In the current project, the PWG approach was used to evaluate histopathologic sections of gills, liver, kidney, and/or intestines from three previously published studies of diclofenac in trout, among which there was substantial variation in the reported histopathologic findings. The main objectives of this review process were to investigate and potentially reconcile these interstudy differences, and based on the results, to establish an appropriate no observed effect concentration (NOEC). Following a complete examination of all histologic sections and original diagnoses by a single experienced fish pathologist (pathology peer review), a two-day PWG session was conducted to allow members of a four-person expert panel to determine the extent of treatment-related findings in each of the three trout studies. The PWG was performed according to the United States Environmental Protection Agency (US EPA) Pesticide Regulation (PR) 94-5 (EPA Pesticide Regulation, 1994). In accordance with standard procedures, the PWG review was conducted by the non-voting chairperson in a manner intended to minimize bias, and thus during the evaluation, the four voting panelists were unaware of the treatment group status of individual fish and the original diagnoses associated with the histologic sections. Based on the results of this review, findings related to diclofenac exposure included minimal to slightly increased thickening of the gill filament tips in fish exposed to the highest concentration tested (1,000 μg/L), plus a previously undiagnosed finding, decreased hepatic glycogen, which also occurred at the 1,000 μg/L dose level. 
The panel found little evidence to support other reported effects of diclofenac in trout, and thus the overall NOEC was determined to be >320 μg/L. By consensus, the PWG panel was able to identify diagnostic inconsistencies among and within the three prior studies; therefore this exercise demonstrated the value of the pathology peer review/PWG approach for assessing the reliability of histopathology results that may be used by regulatory agencies for risk assessment.
Abstract:
The goal of the present article is to introduce dual-process theories – in particular the default-interventionist model – as an overarching framework for attention-related research in sports. Dual-process theories propose that two different types of processing guide human behavior. Type 1 processing is independent of available working memory capacity (WMC), whereas Type 2 processing depends on available working memory capacity. We review the latest theoretical developments on dual-process theories and present evidence for the validity of dual-process theories from various domains. We demonstrate how existing sport psychology findings can be integrated within the dual-process framework. We illustrate how future sport psychology research might benefit from adopting the dual-process framework as a meta-theoretical framework by arguing that the complex interplay between Type 1 and Type 2 processing has to be taken into account in order to gain a more complete understanding of the dynamic nature of attentional processing during sport performance at varying levels of expertise. Finally, we demonstrate that sport psychology applications might benefit from the dual-process perspective as well: dual-process theories are able to predict which behaviors can be more successfully executed when relying on Type 1 processing and which behaviors benefit from Type 2 processing.
Abstract:
Objective: Since 2011, the new national final examination in human medicine has been implemented in Switzerland, with a structured clinical-practical part in the OSCE format. From the perspective of the national Working Group, the current article describes the essential steps in the development, implementation and evaluation of the Federal Licensing Examination Clinical Skills (FLE CS), as well as the quality assurance measures applied. Finally, central insights gained from recent years are presented. Methods: Based on the principles of action research, the FLE CS is in a constant state of further development. Building on systematically documented experiences from previous years, the Working Group discusses unresolved questions and substantiates the resulting solution approaches (planning), implements them in the examination (implementation) and subsequently evaluates them (reflection). The results presented here are the product of this iterative procedure. Results: The FLE CS is created by experts from all faculties and subject areas in a multistage process. The examination is administered in German and French on a decentralised basis and consists of twelve interdisciplinary stations per candidate. As important quality assurance measures, the national Review Board (content validation) and the meetings of the standardised patient trainers (standardisation) have proven worthwhile. The statistical analyses show good measurement reliability and support the construct validity of the examination. Among the central insights of the past years is that the consistent implementation of the principles of action research contributes to the successful further development of the examination. Conclusion: The centrally coordinated, collaborative-iterative process, incorporating experts from all faculties, makes a fundamental contribution to the quality of the FLE CS.
The processes and insights presented here can be useful for others planning a similar undertaking.

Keywords: national final examination, licensing examination, summative assessment, OSCE, action research