923 results for personnel and shift scheduling
Abstract:
Constraint programming is a powerful technique for solving, among others, large-scale scheduling problems. Scheduling aims to allocate tasks to resources over time. While executing, a task consumes a resource at a constant rate. Generally, one seeks to optimize an objective function such as the total duration of a schedule. Solving a scheduling problem means determining when each task should start and which resource should execute it. Most scheduling problems are NP-hard; consequently, there is no known algorithm capable of solving them in polynomial time. However, there exist specializations of scheduling problems that are not NP-complete. These problems can be solved in polynomial time using dedicated algorithms. Our objective is to explore these scheduling algorithms in several varied contexts. Filtering techniques in constraint-based scheduling have evolved considerably in recent years. The prominence of filtering algorithms rests on their ability to reduce the search tree by excluding domain values that do not participate in any solution to the problem. We propose improvements and present more efficient filtering algorithms for solving classical scheduling problems. In addition, we present adaptations of filtering techniques to the case where tasks can be delayed. We also consider various properties of industrial problems and solve more efficiently problems whose optimization criterion is not necessarily the completion time of the last task. For example, we present polynomial-time algorithms for the case where the amount of resource fluctuates over time, or where the cost of executing a task at time t depends on t.
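As an illustration of the filtering idea described in this abstract, here is a minimal bounds-filtering sketch for a single precedence constraint (a generic textbook-style example, not one of the thesis's algorithms):

```python
def filter_precedence(est_a, lct_a, dur_a, est_b, lct_b, dur_b):
    """Tighten time bounds for the constraint 'task A ends before task B starts'.

    est = earliest start time, lct = latest completion time, dur = duration.
    Returns the filtered (est_a, lct_a, est_b, lct_b), or None if infeasible.
    """
    # B cannot start before A's earliest possible completion.
    est_b = max(est_b, est_a + dur_a)
    # A cannot end after B's latest possible start.
    lct_a = min(lct_a, lct_b - dur_b)
    # A domain is empty when a task no longer fits in its time window.
    if est_a + dur_a > lct_a or est_b + dur_b > lct_b:
        return None
    return est_a, lct_a, est_b, lct_b
```

Values removed this way can never appear in a solution, which is exactly how filtering shrinks the search tree before any branching takes place.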
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism for dealing with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs search an encoded solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules. 
In this research we intend to design more human-like scheduling algorithms, using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data; learning then amounts to 'counting' in the case of multinomial distributions. In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, consisting of the following three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the Roulette Wheel strategy. 
The next step reinforces the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber within the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented in general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
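The 'counting' estimation of rule probabilities and the roulette-wheel selection of rules described above can be sketched as follows (a simplified illustration that treats each construction step independently and assumes a hypothetical integer rule encoding; it is not the authors' implementation):

```python
import random

def learn_rule_probabilities(good_solutions, n_rules):
    """BOA-style 'counting' for fully observed multinomial variables:
    the probability of picking rule r at step t is estimated as r's
    frequency at step t among the current set of promising solutions.
    Each solution is a list of rule indices, one per construction step."""
    n_steps = len(good_solutions[0])
    probs = []
    for t in range(n_steps):
        counts = [1] * n_rules  # Laplace smoothing so unused rules survive
        for sol in good_solutions:
            counts[sol[t]] += 1
        total = sum(counts)
        probs.append([c / total for c in counts])
    return probs

def sample_rule_string(probs, rng=random):
    """Roulette-wheel selection: at each step, pick a rule with
    probability proportional to its learned weight."""
    return [rng.choices(range(len(p)), weights=p)[0] for p in probs]
```

New rule strings sampled this way would then compete on fitness with the previous generation, as described in the abstract.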
Abstract:
ISO 9000 is a family of international standards for quality management, applicable to companies of all sizes, whether public or private. ISO 9000 quality management systems encompass the human, administrative, and operational sides of a company. By integrating these three aspects, the organization takes full advantage of all its resources, achieving results more efficiently and reducing administrative and operating expenses. With globalization and the opening of markets, this has become a competitive advantage, providing confidence and evidence to customers, subcontractors, personnel, and other stakeholders that the organization is committed to establishing, maintaining, and improving acceptable levels of quality in its products and services. Another advantage of quality systems is the clear definition of policies and functions: staff are deployed according to their abilities and focus on real customer needs. It should be noted that, to achieve these benefits, the organization's management must be committed to developing its quality system and to allocating the financial and human resources to do so. These resources are minimal compared with the benefits that can be achieved.
Abstract:
“Knowing the Enemy: Nazi Foreign Intelligence in War, Holocaust and Postwar” reveals the importance of ideologically driven foreign intelligence reporting in the wartime radicalization of the Nazi dictatorship, and the continued prominence of Nazi discourses in postwar reports from German intelligence officers working with the U.S. Army and the West German Federal Intelligence Service after 1945. For this project, I conducted extensive archival research in Germany and the United States, particularly in overlooked files pertaining to the wartime activities of the Reichssicherheitshauptamt, Abwehr, Fremde Heere Ost, Auswärtiges Amt, and German General Staff, and the recently declassified intelligence files pertaining to the postwar activities of the Gehlen Organization, Bundesnachrichtendienst, and Foreign Military Studies Program. Applying close textual analysis to the underutilized intelligence reports themselves, I discovered that wartime German intelligence officials in military, civil service, and Party institutions all lent the appearance of professional objectivity to the racist and conspiratorial foreign policy beliefs held in the highest echelons of the Nazi dictatorship. The German foreign intelligence services’ often erroneous reporting on Great Britain, the Soviet Union, the United States, and international Jewry simultaneously figured in the radicalization of the regime’s military and anti-Jewish policies and served to confirm the ideological preconceptions of Hitler and his most loyal followers. After 1945, many of these same figures found employment with the Cold War West, using their “expertise” in Soviet affairs to advise the West German Government, U.S. Military, and CIA on Russian military and political matters. 
I chart considerable continuities in personnel and ideas from the wartime intelligence organizations into postwar West German and American intelligence institutions, as later reporting on the Soviet Union continued to reproduce the flawed wartime tropes of innate Russian military and racial inferiority.
Abstract:
Introduction: The relationship between sleep and quality of life is one of the most important issues regarding the working conditions of staff and medical personnel in hospital service units. Studies have shown a relationship between sleep quality and quality of life, and lack of sleep has been associated with procedural errors and occupational injuries. Objective: To relate sleep quality to quality of life among the health personnel of a level-IV institution in Caracas (Venezuela). Materials and methods: Cross-sectional study using secondary data on the health personnel of a level-IV hospital (93 records) in Caracas (Venezuela). Sociodemographic variables were used, together with sleep-quality variables from the Pittsburgh Sleep Quality Index and quality-of-life variables from the SF-36 questionnaire. The SPSS statistical package was used for the analysis, obtaining measures of central tendency and dispersion. To relate the variables, the Shapiro-Wilk test and Spearman's correlation coefficient were used. Results: The workers included in the study ranged in age from 19 to 70 years, with a standard deviation of 10.9 years. Regarding gender, 79.6% (n=74) were women and 20.4% (n=19) were men. For the quality-of-life component, the highest scores were associated with emotional role functioning (61.3%), vitality (73.5%), physical functioning (91%), bodily pain (100%), and social functioning (100%). Likewise, 91.4% of the surveyed workers reported being poor sleepers. 
When correlating sleep quality with quality of life, a statistically significant association was found, specifically with the components sleep latency (p=0.008), habitual sleep efficiency (p=0.001), sleep disturbances (p=0.040), and daytime dysfunction (p=0.008). Conclusion: This study found that lack of sleep is related to the quality of life of health personnel and that nearly all of the workers in this study reported being poor sleepers, findings that demand the attention of corporate health programs in order to promote preventive and corrective measures regarding working conditions as part of workers' well-being.
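The rank correlation used in the analysis above (the study itself used SPSS) can be illustrated with a self-contained sketch: Spearman's rho is the Pearson correlation of average ranks. This is illustrative code, not the study's scripts:

```python
def ranks(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

For the study's variables, one would pass per-worker Pittsburgh component scores and SF-36 scale scores as the two lists.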
Abstract:
In this paper, a joint location-inventory model is proposed that simultaneously optimises strategic supply chain design decisions, such as facility location and the allocation of customers to facilities, and tactical-operational inventory management and production scheduling decisions. All this is analysed in a context of demand uncertainty and supply uncertainty. While demand uncertainty stems from potential fluctuations in customer demands over time, supply-side uncertainty is associated with the risk of “disruption” to which facilities may be subject, caused by external factors such as natural disasters, strikes, changes of ownership and information technology security incidents. The proposed model is formulated as a non-linear mixed integer programming problem that minimises the expected total cost, which comprises four basic cost items: the fixed cost of locating facilities at candidate sites, the cost of transport from facilities to customers, the cost of working inventory, and the cost of safety stock. Next, since the optimisation problem is very complex and only small instances can be solved exactly, a “matheuristic” solution approach is presented. This approach has a twofold objective: on the one hand, it considers a larger number of facilities and customers within the network in order to reproduce a supply chain configuration that more closely reflects a real-world context; on the other hand, it generates a starting solution and performs a series of iterations to try to improve it. Thanks to this algorithm, it was possible to obtain a solution with a lower total system cost than that of the initial solution. The study concludes with some reflections and a description of possible future research directions.
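A hedged sketch of how the four-part expected cost might be evaluated for a candidate design is shown below. The parameter names, the EOQ-style working-inventory term, and the Poisson-like default for demand variability are illustrative assumptions; the paper's exact formulation may differ:

```python
import math

def total_cost(open_facilities, assign, fixed, ship, demand,
               holding=1.0, order=50.0, z=1.96, sigma=None, lead_time=1.0):
    """Evaluate the four-part expected cost of a candidate design.

    open_facilities: facility ids chosen to open.
    assign: dict customer -> serving facility.
    fixed[j]: fixed cost of opening facility j.
    ship[j][i]: unit transport cost from facility j to customer i.
    demand[i]: mean demand rate of customer i.
    sigma[i]: demand std-dev of customer i (defaults to sqrt(mean),
              a Poisson-like assumption made here for illustration).
    """
    if sigma is None:
        sigma = {i: math.sqrt(d) for i, d in demand.items()}
    cost = sum(fixed[j] for j in open_facilities)                 # 1. location
    cost += sum(ship[assign[i]][i] * demand[i] for i in demand)   # 2. transport
    for j in open_facilities:
        served = [i for i in demand if assign[i] == j]
        D = sum(demand[i] for i in served)
        # 3. working inventory: EOQ-style ordering + holding cost.
        cost += math.sqrt(2 * order * holding * D) if D else 0.0
        # 4. safety stock held against pooled demand variability.
        pooled_var = sum(sigma[i] ** 2 for i in served)
        cost += z * holding * math.sqrt(lead_time * pooled_var)
    return cost
```

The square roots in the inventory and safety-stock terms are what make the objective non-linear, and pooling customers at fewer facilities reduces those terms, which is the central trade-off against transport cost that the model captures.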
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
OBJECTIVE: To estimate the prevalence of hypertension among young military personnel and associated factors. METHODS: Cross-sectional study of a sample of 380 male military personnel aged 19 to 35 years at a Brazilian Air Force unit in São Paulo, SP, between 2000 and 2001. The cut-off points for hypertension were: >140 mmHg for systolic pressure and >90 mmHg for diastolic pressure. The variables studied included risk and protective factors for hypertension, such as behavioral and nutritional characteristics. Associations were analyzed using multiple generalized linear regression with binomial family and logarithmic link, yielding prevalence ratios with 90% confidence intervals and hierarchical selection of variables. RESULTS: The prevalence of hypertension was 22% (90%CI: 21;29). In the final multiple regression model, the prevalence of hypertension was 68% higher among ex-smokers than among non-smokers (90%CI: 1.13;2.50). Among overweight individuals (body mass index - BMI of 25 to 29 kg/m²) and obese individuals (BMI >29 kg/m²), prevalences were, respectively, 75% (90%CI: 1.23;2.50) and 178% (90%CI: 1.82;4.25) higher than among those of normal weight. Among those who practiced regular physical activity, compared with those who did not, the prevalence was 52% lower (90%CI: 0.30;0.90). CONCLUSIONS: Being an ex-smoker and being overweight or obese were risk factors for hypertension, while regular physical activity was a protective factor among young military personnel.
Abstract:
The Faculdade de Saúde Pública of the Universidade de São Paulo began its fight against smoking in 1975, when the institution took part in the III World Conference on Smoking and Health, held in New York (USA). After three decades of uninterrupted work, it received, in 2008, from the São Paulo State Health Department, the silver seal certifying a tobacco-free environment. During this period, alongside person-to-person educational work with faculty, staff, and students, research and training were carried out, and an entire program guided by the Ministry of Health / National Cancer Institute was developed. Numerous master's monographs, doctoral theses, and habilitation theses were also produced on smoking from educational, social, medical, and sanitary perspectives. This article set out to recount this trajectory.
Abstract:
An adult female red-faced black spider monkey (Ateles paniscus), housed for 2 years in the Parque Estoril Zoo in São Paulo, Brazil, showed apathy. Clinical examination revealed discrete emaciation, swelling and induration of lymph nodes, and the presence of a mass in the abdominal cavity. Therapies with enrofloxacin, azithromycin, and ceftiofur were ineffective. The animal died after 6 months. Necropsy and histopathology confirmed granulomas in the lymph nodes, parietal and visceral pleura, lungs, liver, spleen, and kidneys. Acid-fast bacilli were isolated and identified as Mycobacterium tuberculosis by polymerase chain reaction restriction analysis and spoligotyping techniques. The zoo personnel and other animals that had had contact with the infected primate were negative in tuberculosis diagnostic procedures, such as sputum examination (bacilloscopy) and thorax radiography. It was impossible to determine whether the infection occurred before or after the arrival of the animal at the Parque Estoril Zoo. This is the first report of M. tuberculosis infection in Ateles paniscus, a neotropical primate.
Abstract:
We study quasinormal modes and scattering properties, via calculation of the S matrix, for scalar and electromagnetic fields propagating in the background of spherically symmetric and axially symmetric traversable Lorentzian wormholes of a generic shape. Such wormholes are described by the general Morris-Thorne ansatz. The properties of quasinormal ringing and scattering are shown to be determined by the behavior of the wormhole's shape function b(r) and shift factor Phi(r) near the throat. In particular, wormholes whose shape function satisfies db/dr ≈ 1 near the throat have very long-lived quasinormal modes in the spectrum. We have proved that axially symmetric traversable Lorentzian wormholes, unlike black holes and other compact rotating objects, do not allow for superradiance. As a by-product, we have shown that the 6th order WKB formula used for scattering problems of black holes or wormholes gives quite high accuracy and thus can be used for accurate calculations of Hawking radiation processes around various black holes.
Abstract:
Leaf wetness duration (LWD) is related to plant disease occurrence and is therefore a key parameter in agrometeorology. As LWD is seldom measured at standard weather stations, it must be estimated in order to ensure the effectiveness of warning systems and the scheduling of chemical disease control. Among the models used to estimate LWD, those that use physical principles of dew formation and dew and/or rain evaporation have shown good portability and sufficiently accurate results for operational use. However, the requirement of net radiation (Rn) is a disadvantage for operational physical models, since this variable is usually not measured over crops or even at standard weather stations. With the objective of proposing a solution for this problem, this study evaluated the ability of four models to estimate hourly Rn and their impact on LWD estimates using a Penman-Monteith approach. A field experiment was carried out in Elora, Ontario, Canada, with measurements of LWD, Rn and other meteorological variables over mowed turfgrass for a 58-day period during the growing season of 2003. Four models for estimating hourly Rn, based on different combinations of incoming solar radiation (Rg), air temperature (T), relative humidity (RH), cloud cover (CC) and cloud height (CH), were evaluated. Measured and estimated hourly Rn values were applied in a Penman-Monteith model to estimate LWD. Comparing measured and estimated Rn, we observed that all models performed well in terms of estimating hourly Rn. However, when cloud data were used, the models overestimated positive Rn and underestimated negative Rn. When only Rg and T were used to estimate hourly Rn, the model underestimated positive Rn and no tendency was observed for negative Rn. The best performance was obtained with Model I, which presented, in general, the smallest mean absolute error (MAE) and the highest C-index. 
When measured LWD was compared to the Penman-Monteith LWD, calculated with measured and estimated Rn, few differences were observed. Both precision and accuracy were high, with the slopes of the relationships ranging from 0.96 to 1.02 and R-2 from 0.85 to 0.92, resulting in C-indices between 0.87 and 0.93. The LWD mean absolute errors associated with Rn estimates were between 1.0 and 1.5 h, which is sufficient for use in plant disease management schemes.
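The agreement statistics reported above (MAE, slope, R²) can be computed with a generic sketch like the following; the study's C-index is not reproduced here, and this is a general-purpose illustration rather than the authors' code:

```python
def evaluation_stats(measured, estimated):
    """Mean absolute error, plus slope and R^2 of a least-squares fit
    of estimated values against measured values."""
    n = len(measured)
    mae = sum(abs(e - m) for m, e in zip(measured, estimated)) / n
    mx = sum(measured) / n
    my = sum(estimated) / n
    sxx = sum((m - mx) ** 2 for m in measured)
    sxy = sum((m - mx) * (e - my) for m, e in zip(measured, estimated))
    syy = sum((e - my) ** 2 for e in estimated)
    slope = sxy / sxx          # regression slope of estimated on measured
    r2 = sxy * sxy / (sxx * syy)  # coefficient of determination
    return mae, slope, r2
```

Applied to paired hourly series of measured and model-estimated LWD, these are the quantities behind the 0.96-1.02 slopes and 1.0-1.5 h errors quoted above.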