984 results for "effort model"


Relevance: 30.00%

Abstract Scheduling problems are generally NP-hard combinatorial problems, and much research has been devoted to solving them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are common in most real-world scheduling problems, and making small changes to a solution is difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. A human scheduler normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others, can identify good parts, and is aware of the solution quality even before the scheduling process is completed, and thus has the ability to finish a schedule using flexible, rather than fixed, rules. 
In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Learning can thus amount to 'counting' in the case of multinomial distributions. In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the Roulette Wheel strategy. 
The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide a stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
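The three LCS steps described above (constant initialization, roulette-wheel selection, and reinforcement of only the rules used in the previous solution) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation; the rule count, reward value and data layout are hypothetical.

```python
import random

def roulette_select(strengths):
    """Pick a rule index with probability proportional to its strength."""
    total = sum(strengths)
    r = random.uniform(0, total)
    acc = 0.0
    for i, s in enumerate(strengths):
        acc += s
        if r <= acc:
            return i
    return len(strengths) - 1  # guard against floating-point edge cases

def reinforce(strengths, used_rules, reward):
    """Reinforce only the rules used in the previous solution;
    unused rules keep their strength unchanged."""
    return [s + reward if i in used_rules else s
            for i, s in enumerate(strengths)]

# Hypothetical toy run: 4 construction rules, all starting at the
# same constant initial strength, as in the initialization step.
strengths = [1.0, 1.0, 1.0, 1.0]
strengths = reinforce(strengths, used_rules={0, 2}, reward=0.5)
print(strengths)  # [1.5, 1.0, 1.5, 1.0]
```

Selection for the next generation would then keep the fitter (stronger) rules, with `roulette_select` biasing construction toward them.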

Relevance: 30.00%

In this article, the change in examinee effort during an assessment, which we will refer to as persistence, is modeled as an effect of item position. A multilevel extension is proposed to analyze hierarchically structured data and decompose the individual differences in persistence. Data from the 2009 Programme for International Student Assessment (PISA) reading assessment from N = 467,819 students in 65 countries are analyzed with the proposed model, and the results are compared across countries. A decrease in examinee effort during the PISA reading assessment was found consistently across countries, with individual differences within and between schools. Both the decrease and the individual differences are more pronounced in lower-performing countries. Within schools, persistence is slightly negatively correlated with reading ability; at the school level, however, this correlation is positive in most countries. The results of our analyses indicate that it is important to model and control examinee effort in low-stakes assessments.
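The modeled quantity can be pictured as a per-examinee trend over item position: if effort declines, later items go worse than earlier ones. The sketch below estimates such a trend as a simple least-squares slope; this is purely illustrative, since the article's model is a multilevel item-response model, not this regression, and the response vector is hypothetical.

```python
def position_slope(scores):
    """Least-squares slope of item scores against item position,
    a crude stand-in for the item-position (persistence) effect."""
    n = len(scores)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(scores) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, scores))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical examinee whose success rate drops later in the test:
responses = [1, 1, 1, 1, 0, 1, 0, 0]
print(position_slope(responses) < 0)  # True: performance declines with position
```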

Relevance: 30.00%

Background In occupational life, a mismatch between high expenditure of effort and receiving few rewards may promote the co-occurrence of lifestyle risk factors; however, there is insufficient evidence to support or refute this hypothesis. The aim of this study is to examine the extent to which the dimensions of the Effort-Reward Imbalance (ERI) model – effort, rewards and ERI – are associated with the co-occurrence of lifestyle risk factors. Methods Based on data from the Finnish Public Sector Study, cross-sectional analyses were performed for 28,894 women and 7,233 men. ERI was conceptualized as a ratio of effort to rewards. To control for individual differences in response styles, such as a personal disposition to answer negatively to questionnaires, occupational- and organizational-level ecological ERI scores were constructed in addition to individual-level ERI scores. Risk factors included current smoking, heavy drinking, body mass index ≥25 kg/m2, and physical inactivity. Multinomial logistic regression models were used to estimate the likelihood of having one risk factor, two risk factors, and three or four risk factors. The associations between ERI and single risk factors were explored using binary logistic regression models. Results After adjustment for age, socioeconomic position, marital status, and type of job contract, women and men with high ecological ERI were 40% more likely to have ≥3 lifestyle risk factors simultaneously (vs. 0 risk factors) compared with their counterparts with low ERI. When examined separately, both low ecological effort and low ecological rewards were also associated with an elevated prevalence of risk factor co-occurrence. The results obtained with the individual-level scores were in the same direction. The associations of ecological ERI with single risk factors were generally less marked than the associations with the co-occurrence of risk factors. 
Conclusion This study suggests that a high ratio of occupational effort relative to rewards may be associated with an elevated risk of having multiple lifestyle risk factors. However, an unexpected association between low effort and a higher likelihood of risk factor co-occurrence, as well as the absence of data on overcommitment (and thereby the lack of a full test of the ERI model), warrant caution about the extent to which the entire ERI model is supported by our evidence.
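For illustration, the effort-to-rewards ratio at the heart of the study can be computed directly; values above 1 indicate high effort relative to rewards. The correction factor for unequal numbers of effort and reward items, and all numeric values below, are hypothetical rather than taken from the study.

```python
def eri_ratio(effort, rewards, correction=1.0):
    """Effort-reward imbalance as a ratio of effort to rewards.
    `correction` is a hypothetical factor compensating for unequal
    numbers of effort and reward questionnaire items."""
    return effort / (rewards * correction)

# Hypothetical respondent scoring high on effort relative to rewards
# (illustrative scales: 6 effort items, 11 reward items):
print(eri_ratio(effort=14, rewards=22, correction=6 / 11) > 1)  # True
```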

Relevance: 30.00%

Under ‘GS Strategy 2025’, BASF Business Services GmbH was formed to centrally steer all IT-related topics of the BASF group. A global charging system therefore has to be designed that complies with international transfer-price regulations and the strategy of BASF SE. This work project develops such a charging system and then evaluates it. The direct charging system benefits from its cost transparency but entails higher administrative effort due to volume-based charging. In contrast, the indirect charging system is attractive for its ease of handling, a result of applying suitable allocation keys. Given the complex group structure of BASF SE, with more than 300 legal entities in 80 countries, the lower administrative effort of the indirect charging system outweighs the benefits of the direct charging model, and the indirect system should be used by the BASF group.

Relevance: 30.00%

Planning, navigation, and search are fundamental human cognitive abilities central to spatial problem solving in search and rescue, law enforcement, and military operations. Despite a wealth of literature on naturalistic spatial problem solving in animals, research on naturalistic spatial problem solving in humans is comparatively scarce and is generally conducted by separate camps with little crosstalk between them. Addressing this deficiency will allow us to predict spatial decision making in operational environments and understand the factors leading to those decisions. The present dissertation comprises two related efforts: (1) a set of empirical research studies intended to identify characteristics of planning, execution, and memory in naturalistic spatial problem solving tasks, and (2) a computational modeling effort to develop a model of naturalistic spatial problem solving. The results of the behavioral studies indicate that problem space hierarchical representations are linear in shape, and that human solutions are produced according to multiple optimization criteria. The Mixed Criteria Model presented in this dissertation accounts for global and local human performance in a traditional and a naturalistic Traveling Salesman Problem. The results of the empirical and modeling efforts hold implications for basic and applied science in domains such as problem solving, operations research, human-computer interaction, and artificial intelligence.
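As a toy illustration of a single "local" construction criterion of the kind a mixed-criteria account might combine with global ones, here is a nearest-neighbour tour builder for the Traveling Salesman Problem. It is not the dissertation's Mixed Criteria Model; the city coordinates are hypothetical.

```python
from math import dist

def nearest_neighbour_tour(points, start=0):
    """Greedy tour construction: repeatedly visit the closest
    unvisited city (a purely local optimization criterion)."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (5, 0), (1, 1), (6, 1)]
print(nearest_neighbour_tour(cities))  # [0, 2, 1, 3]
```

A mixed-criteria model would score candidate moves on several such criteria at once rather than on distance alone.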

Relevance: 30.00%

We describe the Joint Effort-Topic (JET) model and the Author Joint Effort-Topic (aJET) model, which estimate the effort required for users to contribute on different topics. We propose to learn word-level effort taking into account term preference over time, and use it to set the priors of our models. Since no gold standard can easily be built, we evaluate the models by measuring their ability to validate expected behaviours, such as correlations between user contributions and the associated effort.

Relevance: 30.00%

Understanding the fluctuations in population abundance is a central question in fisheries. The sardine fishery is of great importance to Portugal; it is data-rich and of primary concern to fisheries managers. In Portugal, sub-stocks of Sardina pilchardus (sardine) are found in different regions: the Northwest (IXaCN), Southwest (IXaCS) and the South coast (IXaS-Algarve). Each of these sardine sub-stocks is affected differently by a unique set of climate and ocean conditions, mainly during larval development and recruitment, which consequently affect sardine fisheries in the short term. Taking this hypothesis into consideration, we examined the effects of hydrographic (river discharge), sea surface temperature, wind-driven phenomena, upwelling, climatic (North Atlantic Oscillation) and fisheries (fishing effort) variables on S. pilchardus catch rates (landings per unit effort, LPUE, as a proxy for sardine biomass). A 20-year time series (1989-2009) was used for the different subdivisions of the Portuguese coast (sardine sub-stocks). For this analysis a multi-model approach was used, applying different time series models for data fitting (Dynamic Factor Analysis, Generalised Least Squares) and forecasting (Autoregressive Integrated Moving Average), as well as Surplus Production stock assessment models. The different models were evaluated and compared, and the most important variables explaining changes in LPUE were identified. The type of relationship between sardine catch rates and environmental variables varied across regional scales due to region-specific recruitment responses. Seasonality plays an important role in sardine variability within the three study regions. In IXaCN, autumn (the season with minimum spawning activity, larvae and egg concentrations) SST, northerly wind and wind magnitude were negatively related with LPUE. In IXaCS, none of the explanatory variables tested was clearly related with LPUE. 
In IXaS-Algarve (South Portugal) both spring (period when large abundances of larvae are found) northerly wind and wind magnitude were negatively related with LPUE, revealing that environmental effects match with the regional peak in spawning time. Overall, results suggest that management of small, short-lived pelagic species, such as sardine quotas/sustainable yields, should be adapted to a regional scale because of regional environmental variability.
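As a sketch of the surplus-production component of the multi-model approach, the classic Schaefer model updates biomass from intrinsic growth and catch. This is a generic textbook formulation, not the study's fitted model; the parameter values and catch series below are illustrative only.

```python
def schaefer_step(biomass, catch, r=0.5, K=1000.0):
    """One annual update of the Schaefer surplus-production model:
    next biomass = biomass + r*B*(1 - B/K) - catch, floored at zero.
    r (intrinsic growth rate) and K (carrying capacity) are
    illustrative values, not estimates from the study."""
    surplus = r * biomass * (1 - biomass / K)
    return max(biomass + surplus - catch, 0.0)

# Project biomass three years ahead under a constant catch:
b = 500.0
for year in range(3):
    b = schaefer_step(b, catch=100.0)
print(round(b, 1))  # 573.5
```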

Relevance: 30.00%

Ground deformation provides valuable insights on subsurface processes, with patterns reflecting the characteristics of the source at depth. At active volcanic sites, displacements can be observed during unrest phases; a correct interpretation is therefore essential to assess the hazard potential. Inverse modeling is employed to obtain quantitative estimates of the parameters describing the source. However, despite the robustness of the available approaches, a realistic imaging of these reservoirs is still challenging. While analytical models return quick but simplistic results, assuming an isotropic and elastic crust, more sophisticated numerical models, accounting for the effects of topographic loads, crust inelasticity and structural discontinuities, require much higher computational effort, and information about the crust rheology may be challenging to infer. All these approaches rely on a-priori constraints on the source shape, which influence the reliability of the solution. In this thesis, we present a new approach aimed at overcoming the aforementioned limitations, modeling sources free of a-priori shape constraints with the advantages of FEM simulations, but with a cost-efficient procedure. The source is represented as an assembly of elementary units, consisting of cubic elements of a regular FE mesh loaded with unitary stress tensors. The surface response due to each of the six stress tensor components is computed and linearly combined to obtain the total displacement field. In this way, the source can assume potentially any shape. Our tests prove the equivalence of the deformation fields due to our assembly and those of corresponding cavities with uniform boundary pressure. Our ability to simulate pressurized cavities in a continuum domain permits pre-computing surface responses, avoiding remeshing. A Bayesian trans-dimensional inversion algorithm implementing this strategy is developed. 
3D Voronoi cells are used to sample the model domain, selecting the elementary units contributing to the source solution and those remaining inactive as part of the crust.
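The superposition step described above, in which pre-computed unit responses are combined linearly into a total displacement field, can be sketched as follows. Shapes, values and weights are illustrative only; the actual implementation operates on full FEM surface responses.

```python
def combined_displacement(unit_responses, weights):
    """Linearly combine pre-computed surface responses of elementary
    units (one response vector per unit stress-tensor component)
    into the total displacement field at the surface points."""
    n = len(unit_responses[0])
    total = [0.0] * n
    for resp, w in zip(unit_responses, weights):
        for k in range(n):
            total[k] += w * resp[k]
    return total

# Two elementary-unit responses sampled at three surface points,
# combined with hypothetical stress weights:
responses = [[1.0, 0.5, 0.2], [0.0, 0.3, 0.4]]
print(combined_displacement(responses, weights=[2.0, 1.0]))
# [2.0, 1.3, 0.8]
```

Because the unit responses are pre-computed once, each trial source in the Bayesian inversion costs only this cheap linear combination, with no remeshing.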

Relevance: 20.00%

Understanding the molecular mechanisms of oral carcinogenesis will yield important advances in diagnostics, prognostics, effective treatment, and outcome of oral cancer. Hence, in this study we have investigated the proteomic and peptidomic profiles by combining an orthotopic murine model of oral squamous cell carcinoma (OSCC), mass spectrometry-based proteomics and biological network analysis. Our results indicated the up-regulation of proteins involved in actin cytoskeleton organization and cell-cell junction assembly events and their expression was validated in human OSCC tissues. In addition, the functional relevance of talin-1 in OSCC adhesion, migration and invasion was demonstrated. Taken together, this study identified specific processes deregulated in oral cancer and provided novel refined OSCC-targeting molecules.

Relevance: 20.00%

Two single-crystalline surfaces of Au vicinal to the (111) plane were modified with Pt and studied using scanning tunneling microscopy (STM) and X-ray photoemission spectroscopy (XPS) in an ultra-high vacuum environment. The vicinal surfaces studied are Au(332) and Au(887), and different Pt coverages (θPt) were deposited on each surface. From STM images we determine that Pt deposits on both surfaces as nanoislands with heights ranging from 1 ML to 3 ML depending on θPt. On both surfaces the early growth of Pt ad-islands occurs at the lower part of the step edge, with Pt ad-atoms being incorporated into the steps in some cases. XPS results indicate that partial alloying of Pt occurs at the interface at room temperature and at all coverages, as suggested by the negative chemical shift of the Pt 4f core line, indicating an upward shift of the d-band center of the alloyed Pt. The existence of a segregated Pt phase, especially at higher coverages, is also detected by XPS. Sample annealing indicates that the temperature rise promotes further incorporation of Pt atoms into the Au substrate, as supported by STM and XPS results. Additionally, the catalytic activity of different PtAu systems reported in the literature for some electrochemical reactions is discussed in light of our findings.

Relevance: 20.00%

Congenital diaphragmatic hernia (CDH) is associated with pulmonary hypertension, which is often difficult to manage and a significant cause of morbidity and mortality. In this study, we have used a rabbit model of CDH to evaluate the effects of BAY 60-2770 on the in vitro reactivity of the left pulmonary artery. CDH was induced in New Zealand rabbit fetuses (n = 10 per group) and compared to controls. Measurements of body, total and left lung weights (BW, TLW, LLW) were taken. Pulmonary artery rings were pre-contracted with phenylephrine (10 μM), after which cumulative concentration-response curves to glyceryl trinitrate (GTN; NO donor), tadalafil (PDE5 inhibitor) and BAY 60-2770 (sGC activator) were obtained, as were the levels of NO (NO3/NO2). LLW, TLW and LBR were decreased in CDH (p < 0.05). In the left pulmonary artery, the potency (pEC50) for GTN was markedly lower in CDH (8.25 ± 0.02 versus 9.27 ± 0.03; p < 0.01). In contrast, the potency for BAY 60-2770 was markedly greater in CDH (11.7 ± 0.03 versus 10.5 ± 0.06; p < 0.01). The NO2/NO3 levels were 62% higher in CDH (p < 0.05). BAY 60-2770 exhibits a greater potency to relax the pulmonary artery in CDH, indicating a potential use for pulmonary hypertension in this disease.

Relevance: 20.00%

Resource specialisation, although a fundamental component of ecological theory, is employed in disparate ways. Most definitions derive from simple counts of resource species. We build on recent advances in ecophylogenetics and null model analysis to propose a concept of specialisation that comprises affinities among resources as well as their co-occurrence with consumers. In the distance-based specialisation index (DSI), specialisation is measured as relatedness (phylogenetic or otherwise) of resources, scaled by the null expectation of random use of locally available resources. Thus, specialists use significantly clustered sets of resources, whereas generalists use over-dispersed resources. Intermediate species are classed as indiscriminate consumers. The effectiveness of this approach was assessed with differentially restricted null models, applied to a data set of 168 herbivorous insect species and their hosts. Incorporation of plant relatedness and relative abundance greatly improved specialisation measures compared to taxon counts or simpler null models, which overestimate the fraction of specialists, a problem compounded by insufficient sampling effort. This framework disambiguates the concept of specialisation with an explicit measure applicable to any mode of affinity among resource classes, and is also linked to ecological and evolutionary processes. This will enable a more rigorous deployment of ecological specialisation in empirical and theoretical studies.

Relevance: 20.00%

To characterize the relaxation induced by BAY 41-2272 in human ureteral segments. Ureter specimens (n = 17) from multiple-organ human deceased donors (mean age 40 ± 3.2 years, male/female ratio 2:1) were used to characterize the relaxing response to BAY 41-2272. Immunohistochemical analysis for endothelial and neuronal nitric oxide synthase, soluble guanylate cyclase (sGC) and type 5 phosphodiesterase was also performed. The potency values were determined as the negative log of the molar concentration producing 50% of the maximal relaxation in potassium chloride-precontracted specimens. The unpaired Student t test was used for the comparisons. Immunohistochemistry revealed the presence of endothelial nitric oxide synthase in vessel endothelia and neuronal nitric oxide synthase in the urothelium and nerve structures. sGC was expressed in the smooth muscle and urothelium layer, and type 5 phosphodiesterase was present in the smooth muscle only. BAY 41-2272 (0.001-100 μM) relaxed the isolated ureter in a concentration-dependent manner, with potency and maximal relaxation values of 5.82 ± 0.14 and 84% ± 5%, respectively. The addition of nitric oxide synthase and sGC inhibitors reduced the maximal relaxation values by 21% and 45%, respectively. However, the presence of sildenafil (100 nM) significantly potentiated (6.47 ± 0.10, P <.05) this response. Neither glibenclamide, tetraethylammonium, nor removal of the ureteral urothelium influenced the relaxation response to BAY 41-2272. BAY 41-2272 relaxes the human isolated ureter in a concentration-dependent manner, mainly by activating the sGC enzyme in smooth muscle cells rather than in the urothelium, although a cyclic guanosine monophosphate-independent mechanism might have a role. Potassium channels do not seem to be involved.

Relevance: 20.00%

Atomic charge transfer-counter polarization effects determine most of the infrared fundamental CH intensities of simple hydrocarbons, methane, ethylene, ethane, propyne, cyclopropane and allene. The quantum theory of atoms in molecules/charge-charge flux-dipole flux model predicted the values of 30 CH intensities ranging from 0 to 123 km mol(-1) with a root mean square (rms) error of only 4.2 km mol(-1) without including a specific equilibrium atomic charge term. Sums of the contributions from terms involving charge flux and/or dipole flux averaged 20.3 km mol(-1), about ten times larger than the average charge contribution of 2.0 km mol(-1). The only notable exceptions are the CH stretching and bending intensities of acetylene and two of the propyne vibrations for hydrogens bound to sp hybridized carbon atoms. Calculations were carried out at four quantum levels, MP2/6-311++G(3d,3p), MP2/cc-pVTZ, QCISD/6-311++G(3d,3p) and QCISD/cc-pVTZ. The results calculated at the QCISD level are the most accurate among the four with root mean square errors of 4.7 and 5.0 km mol(-1) for the 6-311++G(3d,3p) and cc-pVTZ basis sets. These values are close to the estimated aggregate experimental error of the hydrocarbon intensities, 4.0 km mol(-1). The atomic charge transfer-counter polarization effect is much larger than the charge effect for the results of all four quantum levels. Charge transfer-counter polarization effects are expected to also be important in vibrations of more polar molecules for which equilibrium charge contributions can be large.