863 results for Job Demands-Resources Model
Abstract:
This paper presents the conceptualization and use of a virtual classroom in EIF-200 Fundamentos de Informática, the first course in the Information Systems Engineering degree programme at the Universidad Nacional de Costa Rica. The virtual classroom is seen as a complement to the face-to-face class and is conceived as a space that centralizes teaching resources, thereby promoting the democratization of knowledge among students and teachers. Furthermore, this concept of the virtual classroom helps to reduce the culture of individualism often present in university teaching practices, and creates new opportunities to learn from colleagues within a culture of reflection, analysis and respectful dialogue aimed at improving teaching practice.
Abstract:
The ability to predict the properties of magnetic materials in a device is essential to ensuring correct operation and an optimized design, as well as to capturing the device behavior over a wide range of input frequencies. Typically, the development and simulation of wide-bandwidth models require detailed, physics-based simulations that consume significant computational resources. Balancing the trade-offs between model computational overhead and accuracy can be cumbersome, especially when the nonlinear effects of saturation and hysteresis are included in the model. This study focuses on the development of a system for analyzing magnetic devices in cases where model accuracy and computational intensity must be carefully and easily balanced by the engineer. A method for adjusting model complexity and the corresponding level of detail while incorporating the nonlinear effects of hysteresis is presented that builds upon recent work in loss analysis and magnetic equivalent circuit (MEC) modeling. The approach uses MEC models in conjunction with linearization and model-order reduction techniques to process magnetic devices based on geometry and core type. The validity of steady-state permeability approximations is also discussed.
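As a hedged illustration of the steady-state permeability approximation mentioned above (not the paper's actual implementation), the sketch below linearizes a saturating B-H curve around an operating point to obtain an effective small-signal permeability, the quantity an MEC branch reluctance would use; the arctangent core model and all parameter values are assumptions for illustration.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space (H/m)

def b_of_h(h, b_sat=1.6, h_c=60.0):
    """Hypothetical saturating anhysteretic B-H curve (arctangent model)."""
    return (2.0 * b_sat / np.pi) * np.arctan(h / h_c)

def effective_permeability(h_op, dh=1e-3):
    """Small-signal permeability dB/dH at the bias point H_op,
    i.e. the linearized parameter of one MEC branch."""
    return (b_of_h(h_op + dh) - b_of_h(h_op - dh)) / (2.0 * dh)

def branch_reluctance(length_m, area_m2, h_op):
    """Reluctance of one MEC branch, linearized at the bias field H_op."""
    return length_m / (effective_permeability(h_op) * area_m2)

for h_op in (0.0, 100.0, 500.0):
    print(f"H = {h_op:6.1f} A/m -> mu_r = {effective_permeability(h_op) / MU0:8.1f}")
```

The printed relative permeability drops as the bias field pushes the core toward saturation, which is exactly the regime where a single steady-state permeability stops being a valid approximation.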
Abstract:
The Neolithic was marked by a transition from small, relatively egalitarian groups to much larger groups with increased stratification, but the dynamics of this transition remain poorly understood. It is hard to see how despotism can arise without coercion, yet coercion could not easily have occurred in an egalitarian setting. Using a quantitative model of evolution in a patch-structured population, we demonstrate that the interaction between demographic and ecological factors can overcome this conundrum. We model the co-evolution of individual preferences for hierarchy alongside the degree of despotism of leaders and the dispersal preferences of followers. We show that voluntary leadership without coercion can evolve in small groups when leaders help to solve coordination problems related to resource production, for example coordinating the construction of an irrigation system. Our model predicts that the transition to larger despotic groups occurs when (1) surplus resources lead to demographic expansion of groups, removing the viability of an acephalous niche in the same area and so locking individuals into hierarchy; and (2) high dispersal costs limit followers' ability to escape a despot. Empirical evidence suggests that these conditions were likely met for the first time during the subsistence intensification of the Neolithic.
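A toy numerical illustration of the second condition (not the paper's evolutionary model, whose patch structure is far richer): a follower stays under a despot whenever the retained share of the coordinated surplus exceeds what dispersing to an acephalous patch yields after costs. All payoffs and parameter values below are invented for illustration.

```python
# Toy comparison of staying under a despot vs. dispersing.
def follower_payoff(surplus=2.0, extraction=0.6):
    """Share of the coordinated surplus a follower keeps under a despot."""
    return (1.0 - extraction) * surplus

def disperser_payoff(baseline=1.0, dispersal_cost=0.5):
    """Baseline production in an acephalous patch, minus the cost of moving."""
    return baseline - dispersal_cost

for cost in (0.1, 0.5, 0.9):
    stays = follower_payoff() > disperser_payoff(dispersal_cost=cost)
    print(f"dispersal cost {cost}: follower {'stays -> despotism stable' if stays else 'leaves'}")
```

With these numbers the follower leaves at low dispersal cost but is locked into hierarchy once the cost rises, mirroring the lock-in mechanism the model predicts.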
Abstract:
The paper presents a critical analysis of the extant literature pertaining to the networking behaviours of young jobseekers in both offline and online environments. A framework derived from information behaviour theory is proposed as a basis for conducting further research in this area. Method. Relevant material for the review was sourced from key research domains such as library and information science, job search research, and organisational research. Analysis. Three key research themes emerged from the analysis of the literature: (1) social networks, and the use of informal channels of information during job search, (2) the role of networking behaviours in job search, and (3) the adoption of social media tools. Tom Wilson’s general model of information behaviour was also identified as a suitable framework to conduct further research. Results. Social networks have a crucial informational utility during the job search process. However, the processes whereby young jobseekers engage in networking behaviours, both offline and online, remain largely unexplored. Conclusion. Identification and analysis of the key research themes reveal opportunities to acquire further knowledge regarding the networking behaviours of young jobseekers. Wilson’s model can be used as a framework to provide a holistic understanding of the networking process, from an information behaviour perspective.
Abstract:
This Master's thesis examines a fully renewable energy system for the region of South Karelia, already the most renewable-powered region in Finland. The thesis considers the energy consumption of the public sector, transport and buildings, while industrial energy use is excluded from the analysis. The current South Karelian energy system is reviewed and used as the basis for a reference scenario, and future scenarios are constructed for the years 2030 and 2050. In these scenarios, the transition centres on the electrification of the system and the integration of renewable generation. Electrification increases electricity consumption, which is to be covered by renewable generation, mainly wind and solar power. The transport sector is limited to road transport; its transformation will be the most challenging and time-consuming, and is pursued through in-region production of transport fuels and through electric vehicles. A renewable energy system requires flexibility in both generation and demand, as well as intelligence in the grid. The thesis also examines the costs and employment effects of the system.
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance of the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use concepts from control theory, whereby the state estimate is optimized from both the background and the measurements; numerical optimization schemes are applied that avoid the memory storage and huge matrix inversions required by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems that emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to constrain the 30 171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation; the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and the DA scheme: an external program sends and receives information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines to facilitate input and output. Apart from the simplicity of the coupling, the approach can be employed even if the two codes are written in different programming languages, because the communication does not go through code. The non-intrusive approach accommodates parallel computing by simply having the control program wait until all processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for seven days between May 16 and July 6, 2009; the effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose a 1 km grid resolution. The results of VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, a good match could not be obtained.
The use of multiple automatic stations with real-time data is important to alleviate the temporal sparsity problem; combined with DA, this would, for instance, help in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because beyond a certain size additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF, together with this ensemble size limit, points towards the emerging area of Reduced Order Modeling (ROM), in which running the full model is avoided in order to save computational resources. Applying ROM with the non-intrusive DA approach might yield a cheaper algorithm that eases the computational challenges existing in the field of modelling and DA.
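As a hedged sketch (not the thesis code), the following fragment illustrates the file-based, non-intrusive coupling described above: an external control loop alternates model runs and the DA step, exchanging state through files, so the model itself needs only a few I/O lines. The file names, the `run_model` executable and the trivial analysis routine are illustrative assumptions.

```python
import subprocess
import numpy as np

N_CYCLES = 10  # number of assimilation cycles (illustrative)

def analysis_step(forecast, observations):
    """Placeholder for the DA update (e.g. a VEnKF analysis);
    here a trivial nudge toward the observations, for illustration."""
    return forecast + 0.5 * (observations - forecast)

state = np.loadtxt("initial_state.txt")          # assumed initial state file
for cycle in range(N_CYCLES):
    np.savetxt("model_input.txt", state)
    # The unmodified model reads model_input.txt and writes model_output.txt;
    # only these few I/O lines have to exist inside the model code.
    subprocess.run(["./run_model"], check=True)  # control program waits until the
                                                 # model (or all its processes) ends
    forecast = np.loadtxt("model_output.txt")
    obs = np.loadtxt(f"obs_{cycle:03d}.txt")     # measurements for this cycle
    state = analysis_step(forecast, obs)         # DA procedure invoked afterwards
```

Because all communication goes through files, the model and the DA code may be written in different languages, at the price of re-initializing both components every cycle, the overhead noted above.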
Abstract:
Nowadays, an industrial organization seeking to succeed in the global market is strongly influenced by pressures aimed at increasing overall efficiency and, consequently, reducing operating costs. The challenge is therefore to strip from the product everything that adds no value perceptible to the customer, and to maximize the utilization of the installed industrial resources. From this challenge arises the Production Planning and Scheduling Problem, which must be answered efficiently. This project studies the production scheduling problem in a ceramic floor and wall tile industry, developing a constructive heuristic capable of reliably reflecting the reality of its production process and, where possible, assisting in its resolution. The scheduling problem under study answers the questions of what to produce, in what quantity, when, and on which line, so as to satisfy customer needs within a deadline previously stipulated as admissible, while guaranteeing that the active kilns are kept full. Without major disruption to the normal work of Production, the heuristic is intended to yield feasible production plans that minimize the time needed to complete the set of product references with production requirements. The problem is also addressed through an exact model, as a capacitated identical parallel machine problem with a compatibility matrix, family and subfamily setups, and minimum production lots. Both the heuristic and the mixed-integer programming model developed produce valid production plans, equivalent to those currently obtained by the company with its present scheduling tools, but requiring far less time.
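A hedged sketch of a constructive heuristic for the parallel-machine setting just described (not the thesis algorithm): jobs are assigned, longest first, to the compatible line that can finish them earliest, with a setup charged whenever the product family changes. The job data, setup time and compatibility matrix are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Line:
    name: str
    ready: float = 0.0            # time at which the line becomes free
    last_family: str | None = None
    plan: list = field(default_factory=list)

# (reference, family, processing hours) -- illustrative data
jobs = [("R1", "A", 30), ("R2", "A", 22), ("R3", "B", 40), ("R4", "B", 15)]
compatible = {"L1": {"A", "B"}, "L2": {"A"}}      # line/family compatibility matrix
SETUP = 4.0                                        # family setup time (hours)

def finish_time(line: Line, fam: str, hours: float) -> float:
    setup = SETUP if line.last_family not in (None, fam) else 0.0
    return line.ready + setup + hours

lines = [Line("L1"), Line("L2")]
for ref, fam, hours in sorted(jobs, key=lambda j: -j[2]):      # longest job first
    candidates = [l for l in lines if fam in compatible[l.name]]
    best = min(candidates, key=lambda l: finish_time(l, fam, hours))
    best.ready = finish_time(best, fam, hours)
    best.last_family = fam
    best.plan.append(ref)

for l in lines:
    print(l.name, l.plan, f"busy until {l.ready:.1f} h")
```

The longest-first ordering is a common makespan-oriented dispatching choice; the real heuristic additionally has to respect minimum lots, subfamily setups and kiln-filling constraints.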
Abstract:
The share of variable renewable energy in electricity generation has seen exponential growth during recent decades, and owing to the heightened pursuit of environmental targets, the trend is set to continue at an increased pace. The two most important resources, wind and insolation, are both intermittent, creating a need for regulation and posing a threat to grid stability. One way to deal with the imbalance between demand and generation is to store electricity temporarily, which was addressed in this thesis by implementing a dynamic model of adiabatic compressed air energy storage (CAES) with the Apros dynamic simulation software. Based on a literature review, existing models were found, owing to their simplifications, insufficient for studying transient situations, and despite its importance, part-load operation could not previously be investigated with satisfactory precision. As a key result of the thesis, the cycle efficiency at the design point was simulated to be 58.7%, which agreed well with values reported in the literature and was verified through analytical calculations. The part-load performance was validated against models reported in the literature, showing good agreement. By introducing wind resource and electricity demand data to the model, grid operation of CAES was studied. To enable this dynamic operation, start-up and shutdown sequences were approximated in a dynamic environment for, as far as is known, the first time, and a user component for compressor variable guide vanes (VGV) was implemented. Even in its current state, the modularly designed model offers a framework for numerous studies. The validity of the model is limited by the accuracy of the VGV correlations at part load, and the implementation of heat losses in the thermal energy storage is necessary to enable longer simulations. More extensive use of forecasts is one of the important development targets if system operation is to be optimised in the future.
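For orientation, a minimal sketch of how a CAES round-trip (cycle) efficiency such as the 58.7% quoted above is computed from charge and discharge energies; the power and duration figures below are invented for illustration and are not the Apros model's values.

```python
# Illustrative design-point figures for one charge/discharge cycle
p_compressor_mw = 100.0   # electric power drawn while charging (MW)
t_charge_h = 8.0          # charging duration (h)
p_turbine_mw = 78.0       # electric power delivered while discharging (MW)
t_discharge_h = 6.0       # discharging duration (h)

e_in = p_compressor_mw * t_charge_h     # MWh consumed from the grid
e_out = p_turbine_mw * t_discharge_h    # MWh returned to the grid

print(f"cycle efficiency = {e_out / e_in:.1%}")   # -> 58.5% with these numbers
```

In a dynamic simulation the constant powers become time series integrated over the cycle, and part-load operation, start-up and shutdown all eat into the delivered energy, which is why their accurate modelling matters for the efficiency figure.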
Abstract:
Datacenters have emerged as the dominant form of computing infrastructure over the last two decades. The tremendous increase in the requirements of data analysis has led to a proportional increase in power consumption and datacenters are now one of the fastest growing electricity consumers in the United States. Another rising concern is the loss of throughput due to network congestion. Scheduling models that do not explicitly account for data placement may lead to a transfer of large amounts of data over the network causing unacceptable delays. In this dissertation, we study different scheduling models that are inspired by the dual objectives of minimizing energy costs and network congestion in a datacenter. As datacenters are equipped to handle peak workloads, the average server utilization in most datacenters is very low. As a result, one can achieve huge energy savings by selectively shutting down machines when demand is low. In this dissertation, we introduce the network-aware machine activation problem to find a schedule that simultaneously minimizes the number of machines necessary and the congestion incurred in the network. Our model significantly generalizes well-studied combinatorial optimization problems such as hard-capacitated hypergraph covering and is thus strongly NP-hard. As a result, we focus on finding good approximation algorithms. Data-parallel computation frameworks such as MapReduce have popularized the design of applications that require a large amount of communication between different machines. Efficient scheduling of these communication demands is essential to guarantee efficient execution of the different applications. In the second part of the thesis, we study the approximability of the co-flow scheduling problem that has been recently introduced to capture these application-level demands. Finally, we also study the question, "In what order should one process jobs?" Often, precedence constraints specify a partial order over the set of jobs and the objective is to find suitable schedules that satisfy the partial order. However, in the presence of hard deadline constraints, it may be impossible to find a schedule that satisfies all precedence constraints. In this thesis we formalize different variants of job scheduling with soft precedence constraints and conduct the first systematic study of these problems.
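As a hedged illustration of the flavour of the network-aware machine activation problem (not the dissertation's algorithm), the greedy sketch below activates machines to cover job demand while charging each candidate for the congestion its activation would add; all data and the congestion weight are invented for illustration.

```python
# Greedy heuristic: repeatedly activate the machine with the best ratio of
# newly covered demand to (unit activation cost + weighted congestion).
jobs = {"j1": 4, "j2": 3, "j3": 5}       # required capacity units per job
machines = {
    # name: (capacity, network congestion added if activated) -- illustrative
    "m1": (6, 1.0),
    "m2": (5, 0.2),
    "m3": (4, 0.5),
}
LAMBDA = 2.0   # weight of congestion relative to machine count

residual = sum(jobs.values())
active: list[str] = []
while residual > 0 and len(active) < len(machines):
    def score(m: str) -> float:
        cap, cong = machines[m]
        covered = min(cap, residual)
        return covered / (1.0 + LAMBDA * cong)   # demand covered per unit cost
    best = max((m for m in machines if m not in active), key=score)
    residual -= min(machines[best][0], residual)
    active.append(best)

print("activated:", active, "uncovered demand:", residual)
```

Greedy ratios of this kind come from the set-cover playbook; since the actual problem generalizes hard-capacitated hypergraph covering and is strongly NP-hard, the dissertation's approximation algorithms are necessarily more sophisticated than this sketch.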
Abstract:
The underwater environment is an extreme environment that requires a process of human adaptation, with specific psychophysiological demands, to ensure survival and productive activity. From the standpoint of existing models of intelligence, personality and performance, in this explanatory study we analyzed the contribution of individual differences to explaining the adaptation of military personnel to a stressful environment. Structural equation analysis was employed to verify a model representing the direct effects of psychological variables on individual adaptation to an adverse environment, and we were able to confirm, during basic military diving courses, the structural relationships among these variables and their ability to predict a third of the variance of a criterion that has been studied very little to date. In this way, we confirmed in a sample of professionals (N = 575) the direct relationship of emotional adjustment, conscientiousness and general mental ability with underwater adaptation, as well as the inverse relationship of emotional reactivity. These constructs are the psychological basis for working under water, contributing to improved adaptation to this environment and promoting risk prevention and safety in diving activities.
Abstract:
Risks arising from the rapid development of the oil and gas industries are increasing significantly. As a result, one of the main concerns of both industrial and environmental managers is the identification and assessment of such risks, in order to develop and maintain appropriate proactive measures. Oil spills from stationary sources in offshore zones are among the accidents with several adverse impacts on marine ecosystems. Considering a site's current situation and the relevant requirements and standards, the risk assessment process is capable not only of recognizing the probable causes of accidents but also of estimating the probability of occurrence and the severity of the consequences. In this way, the results of a risk assessment help managers and decision makers create and employ proper control methods. Most published models for oil spill risk assessment are built on accurate databases and the analysis of historical data; unfortunately, such databases are not accessible in most zones, especially in developing countries, or are newly established and not yet usable. This reveals the necessity of using expert systems and fuzzy set theory, which make it possible to formalize the expertise and experience of specialists who have worked in oil-producing areas for many years. Moreover, in developing countries, damage to the environment and environmental resources is often not treated as a risk assessment priority and tends to be underestimated. For this reason, the model proposed in this research specifically addresses the environmental risk of oil spills from stationary sources in offshore zones.
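A minimal sketch, under assumed membership functions and rules, of how fuzzy set theory can encode expert judgement of spill risk from likelihood and severity; this is generic Mamdani-style inference for illustration, not the paper's model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (c - x) / (c - b + 1e-9)), 0.0, 1.0)

risk_universe = np.linspace(0.0, 10.0, 201)
# Expert-elicited (here: invented) output memberships for "risk"
risk_low = tri(risk_universe, 0, 2, 5)
risk_high = tri(risk_universe, 5, 8, 10)

def assess(likelihood: float, severity: float) -> float:
    """Two illustrative rules, Mamdani inference, centroid defuzzification:
       IF likelihood is high AND severity is high THEN risk is high
       IF likelihood is low  OR  severity is low  THEN risk is low."""
    lik_low, lik_high = tri(likelihood, 0, 2, 5), tri(likelihood, 5, 8, 10)
    sev_low, sev_high = tri(severity, 0, 2, 5), tri(severity, 5, 8, 10)
    fired_high = min(lik_high, sev_high)          # AND = min
    fired_low = max(lik_low, sev_low)             # OR = max
    aggregated = np.maximum(np.minimum(risk_high, fired_high),
                            np.minimum(risk_low, fired_low))
    return float(np.sum(aggregated * risk_universe) / (np.sum(aggregated) + 1e-9))

print(f"risk score: {assess(likelihood=7.0, severity=8.5):.2f} / 10")
```

In practice the memberships and rule base would be elicited from the specialists mentioned above, which is precisely what makes the approach usable where historical spill databases are missing.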
Abstract:
Over the years, the Brazilian agricultural research agency has contributed to solving social problems and promoting new knowledge, incorporating new advances and seeking technological independence for the country through the transfer of the knowledge and technology it generates. However, the process of transferring knowledge and technology has represented a major challenge for public institutions. Embrapa is the largest and foremost Brazilian agricultural research company, with a staff of 9,790 employees, of whom 2,440 are researchers, and an annual budget of R$ 2.52 billion. It operates through 46 decentralized research units and coordinates the National Agricultural Research System (SNPA). Considering that technology transfer is the consummation of the effort and resources spent on generating knowledge, and the validation of the research, this work assesses the performance of Embrapa Swine and Poultry along the broiler production chain and proposes a technology transfer model for this chain that can be used by public research institutions (IPPs). The study is justified by the importance of agricultural research for the country and the importance of the institution addressed. The methodology used was a case study with a qualitative approach, documentary and bibliographic research, and interviews using semi-structured questionnaires. The survey was conducted in three stages. In the first stage, a diagnosis was made of the technology transfer (TT) process and of the contribution of Embrapa Swine and Poultry to the broiler supply chain; this stage used bibliographic and documentary research and semi-structured interviews with agro-industrial broiler agents, researchers at Embrapa Swine and Poultry, technology transfer professionals from Embrapa and Embrapa Swine and Poultry, and technology transfer managers and researchers from the Agricultural Research Service (ARS). In the second stage, a model was developed for Embrapa's poultry technology transfer process, drawing on documentary and bibliographic research and on analysis of the information obtained in the interviews. The third stage validated the proposed model across the various sectors of the broiler production chain. The data show that, although Embrapa Swine and Poultry develops technologies for the broiler production chain, the rate of adoption of these technologies by the chain is very low. It was also found that there is a gap between the institution and the various links of the chain. An observatory mechanism was proposed to bring Embrapa Swine and Poultry and the agents of the broiler chain closer together, in order to identify and discuss research priorities. The proposed model seeks to improve the interaction between the institution and the chain, so as to identify the chain's real research demands and to pursue the joint development of solutions to those demands. The proposed TT model was approved by a large majority (96.77%) of the interviewed agents working in the various links of the chain, as well as by 92% of the representatives of the entities linked to this chain. The acceptance of the proposed model demonstrates the chain's willingness to approach Embrapa Swine and Poultry and to seek joint solutions to existing problems.
Abstract:
BACKGROUND: Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for physicians' decisions on practice location, covering demand-side factors and a consumption time function. METHODS: To test the propositions following from the theoretical model, generalized linear models were estimated to explain differences across 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. RESULTS: Evidence in favor of the first three propositions of the theoretical model was found. Specialists show a stronger association with more highly populated districts than GPs. Although indicators of regional preferences are significantly correlated with physician density, their coefficients are not as high as that of population density. CONCLUSIONS: If regional disparities are to be addressed by policy action, the focus should be on counteracting the parameters representing physicians' preferences in over- and undersupplied regions.
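As a hedged sketch of the kind of estimation described in METHODS (not the paper's actual specification), the fragment below fits a generalized linear model of district-level physician counts on population density and an amenity indicator using statsmodels; the variable names, the synthetic data and the Poisson family are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 412  # number of districts, as in the study

# Synthetic district-level data (illustrative only)
districts = pd.DataFrame({
    "pop_density": rng.lognormal(5.0, 1.0, n),   # inhabitants per km^2
    "amenity": rng.normal(0.0, 1.0, n),          # proxy for regional preferences
})
lam = np.exp(0.5 + 0.6 * np.log(districts["pop_density"])
             + 0.1 * districts["amenity"])
districts["physicians"] = rng.poisson(lam)

X = sm.add_constant(pd.DataFrame({
    "log_pop_density": np.log(districts["pop_density"]),
    "amenity": districts["amenity"],
}))
model = sm.GLM(districts["physicians"], X, family=sm.families.Poisson())
print(model.fit().summary())
```

Comparing the fitted coefficients of `log_pop_density` and `amenity` mirrors the paper's comparison of population density against regional-preference indicators.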
Abstract:
Since rugby union turned professional in 1995 there have been considerable advances in research on the demands of the game, over the last 10 years largely using Global Positioning System (GPS) analysis. A systematic review was undertaken on the use of GPS, particularly the setting of absolute (ABS) and individual (IND) velocity bands, in field-based, intermittent, high-intensity (HI) team sports. From 3669 records identified, 38 studies were included for qualitative analysis. Little agreement on the definition of movement intensities within team sports was found; only three papers, all on rugby union, had used IND bands, with only one comparing ABS and IND methods. Thus, the aim of this study was to determine whether there is a difference in the demands within positions when comparing ABS and IND methods of GPS analysis, and whether these differences are significantly different between the forward and back positional groups. A total of 214 data files were recorded from 26 players in 17 matches of the 2015/2016 Scottish BT Premiership. ABS velocity zones 1-7 were set at 1) 0-6, 2) 6.1-11, 3) 11.1-15, 4) 15.1-18, 5) 18.1-21, 6) 21.1-25 and 7) 25.1-40 km.h-1, while IND zones 1-7 were 1) <20, 2) 20-40, 3) 40-50, 4) 50-70, 5) 70-80, 6) 80-95 and 7) 95-100% of the player's individually determined maximum velocity (Vmax). A 40 m sprint test measured Vmax, using OptaPro S4 10 Hz (Catapult, Australia) GPS units, to derive the IND bands; the same GPS units were worn during matches. The GPS outputs analysed were % distance, % time, high-intensity efforts (HIEs) over 18.1 km.h-1 / 70% of maximum velocity, and repeated high-intensity efforts (RHIEs), which consist of three HIEs within 21 s. General linear model (GLM) analysis identified a significant difference in the measurement of % total distance covered between the ABS and IND methods in all zones for forwards (p<0.05) and backs (p<0.05). This difference was also significant between forwards and backs in zones 1 (mean difference ± standard deviation, 3.7±0.7%), 6 (1.2±0.4%) and 7 (1.0±0.0%) (p<0.05). Percentage time estimations were significantly different between ABS and IND analysis within forwards in zones 1 (1.7±1.7%), 2 (-2.9±1.3%), 3 (1.9±0.8%), 4 (-1.4±0.8%) and 5 (0.2±0.4%), and within backs in zones 1 (-10±1.5%), 2 (-1.2±1.1%), 3 (1.8±0.9%) and 5 (0.6±0.5%) (p<0.05). The difference between groups was significant in zones 1, 2, 4 and 5 (p<0.05). The number of HIEs was significantly different between forwards and backs in zones 6 (6±2) and 7 (3±2). RHIEs were significantly different between ABS and IND for forwards (1±2, p<0.05), although not between groups. Until more research on the differences between ABS and IND methods is carried out, neither can be deemed a criterion method. In conclusion, there are significant differences between the ABS and IND methods of GPS analysis of the physical demands of rugby union, which must be considered when these methods are used to inform training load and recovery to improve performance and reduce injuries.
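A minimal sketch of how the two banding schemes above can be applied to a velocity trace (an illustrative reimplementation, not the study's analysis pipeline): each 10 Hz sample is binned into an ABS zone by absolute speed and into an IND zone by percentage of the player's maximum velocity, using the zone boundaries quoted in the abstract.

```python
import numpy as np

# Zone upper bounds from the study: ABS in km/h, IND in % of Vmax
ABS_BOUNDS = [6.0, 11.0, 15.0, 18.0, 21.0, 25.0, 40.0]
IND_BOUNDS = [20.0, 40.0, 50.0, 70.0, 80.0, 95.0, 100.0]

def zone(value: float, bounds: list[float]) -> int:
    """Return the 1-based zone index for a value, given zone upper bounds."""
    return int(np.searchsorted(bounds, value, side="left")) + 1

def percent_time_per_zone(speeds_kmh, vmax_kmh: float):
    abs_zones = [zone(v, ABS_BOUNDS) for v in speeds_kmh]
    ind_zones = [zone(100.0 * v / vmax_kmh, IND_BOUNDS) for v in speeds_kmh]
    n = len(speeds_kmh)
    pct = lambda zs: {z: 100.0 * zs.count(z) / n for z in range(1, 8)}
    return pct(abs_zones), pct(ind_zones)

# Illustrative 10 Hz match trace and an individually measured Vmax
speeds = np.abs(np.random.default_rng(1).normal(8.0, 5.0, 1000))
abs_pct, ind_pct = percent_time_per_zone(speeds, vmax_kmh=33.0)
print("ABS % time per zone:", abs_pct)
print("IND % time per zone:", ind_pct)
```

Running this for a slow player and a fast player with the same trace shows why the two methods diverge: the ABS binning is identical for both, while the IND binning shifts with each player's Vmax, which is the core of the ABS-versus-IND comparison.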