947 results for Dependent variable problem


Relevance:

30.00%

Publisher:

Abstract:

This paper presents our work on analysing the high-level search within a graph-based hyperheuristic. The graph-based hyperheuristic solves the problem at a higher level by searching through permutations of graph heuristics rather than through the actual solutions; the heuristic permutations are then used to construct the solutions. Variable Neighborhood Search, Steepest Descent, Iterated Local Search and Tabu Search are compared as high-level search methods, and their performance within the high-level search space of heuristics is analysed. Experimental results on benchmark exam timetabling problems demonstrate the simplicity and efficiency of this hyperheuristic approach. They also indicate that the choice of high-level search methodology is not crucial, and that the high-level search should explore the heuristic search space as widely as possible within a limited search time. This simple and general graph-based hyperheuristic may be applied to a range of timetabling and optimisation problems.
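
The high-level search idea above, searching over permutations of construction heuristics rather than over timetables, can be sketched in a few lines. The heuristic names, costs and objective below are illustrative stand-ins, not the paper's actual graph heuristics or penalty function:

```python
import itertools

# Hypothetical low-level graph heuristics and made-up per-position costs;
# in the paper these would be orderings such as largest-degree-first.
COST = {"largest_degree": 3, "saturation_degree": 1,
        "largest_enrolment": 4, "random_order": 2}
HEURISTICS = list(COST)

def evaluate(perm):
    """Stand-in objective: in the real hyperheuristic the permutation would
    construct a timetable and return its penalty; here, a toy weighted sum."""
    return sum((i + 1) * COST[h] for i, h in enumerate(perm))

def steepest_descent(perm):
    """High-level search over permutations of heuristics (not solutions):
    repeatedly take the best improving pairwise swap until none exists."""
    best, best_cost = list(perm), evaluate(perm)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(best)), 2):
            cand = list(best)
            cand[i], cand[j] = cand[j], cand[i]
            c = evaluate(cand)
            if c < best_cost:
                best, best_cost, improved = cand, c, True
    return best, best_cost
```

Swapping `steepest_descent` for tabu search or iterated local search changes only the high-level driver, which is the comparison the paper carries out.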

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses a lot-sizing and scheduling problem that minimizes inventory and backlog costs on m parallel machines with sequence-dependent set-up times over t periods. Problem solutions are represented as product subsets, ordered and/or unordered, for each machine m at each period t. The optimal lot sizes are determined by applying a linear program. A genetic algorithm searches either over ordered or over unordered subsets (the latter implicitly ordered by a fast ATSP-type heuristic) to identify an overall optimal solution. Initial computational results are presented, comparing the speed and solution quality of the ordered and unordered genetic algorithm approaches.
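
The "fast ATSP-type heuristic" used to implicitly order an unordered product subset can be illustrated with a greedy nearest-neighbour pass over the set-up time matrix; the matrix and the deterministic start rule below are hypothetical:

```python
def order_subset(products, setup):
    """Greedy nearest-neighbour ordering of an unordered product subset,
    an ATSP-type heuristic in spirit; setup[a][b] is the sequence-dependent
    set-up time of product b after product a (hypothetical data)."""
    remaining = set(products)
    current = min(remaining)          # arbitrary but deterministic start
    order = [current]
    remaining.remove(current)
    while remaining:
        # Always append the product with the cheapest set-up after the
        # current one; O(n^2) overall, fast enough inside a GA loop.
        nxt = min(remaining, key=lambda p: setup[current][p])
        order.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return order
```

In the unordered GA variant this routine would be called once per machine and period when a chromosome is decoded, before the linear program fixes the lot sizes.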

Relevance:

30.00%

Publisher:

Abstract:

A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work that used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually, we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
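
The learn-then-sample loop can be sketched in a much-simplified form. The example below uses independent per-position probabilities, a univariate special case; the paper's Bayesian network additionally captures conditional dependencies between positions:

```python
import random

def learn_probabilities(promising, n_rules):
    """Estimate, for each nurse position, the probability of each scheduling
    rule from the current set of promising rule strings (a univariate
    simplification of the Bayesian network described in the paper)."""
    length = len(promising[0])
    probs = [[0.0] * n_rules for _ in range(length)]
    for string in promising:
        for pos, rule in enumerate(string):
            probs[pos][rule] += 1
    return [[count / len(promising) for count in row] for row in probs]

def sample_rule_string(probs, rng):
    """Generate a new rule string position by position from the learned
    probabilities, mirroring the variable-by-variable sampling step."""
    return [rng.choices(range(len(row)), weights=row)[0] for row in probs]
```

A full iteration would sample a population this way, keep the fittest strings as the new promising set, and re-estimate the probabilities until the stopping conditions are met.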

Relevance:

30.00%

Publisher:

Abstract:

Children who have experienced a traumatic brain injury (TBI) are at risk for a variety of maladaptive cognitive, behavioral and social outcomes (Yeates et al., 2007). Research on the social problem solving (SPS) abilities of children with TBI indicates a preference for lower-level strategies when compared to children who have experienced an orthopedic injury (OI; Hanten et al., 2008, 2011). Research on SPS in non-injured populations has highlighted the significance of the identity of the social partner (Rubin et al., 2006). Within the pediatric TBI literature, few studies have utilized friends as the social partner in SPS contexts, and fewer have used in-vivo SPS assessments. The current study aimed to build on existing research on SPS in children with TBI by utilizing an observational coding scheme to capture in-vivo problem-solving behaviors between children with TBI and a best friend. The study included children with TBI (n = 41), children with OI (n = 43), and a non-injured, typically developing group (n = 41). All participants were observed completing a task with a friend and completed a measure of friendship quality. SPS was assessed using an observational coding scheme that captured SPS goals, strategies, and outcomes. It was expected that children with TBI would produce fewer successes, fewer direct strategies, and more avoidant strategies. ANOVAs tested for group differences in SPS successes, direct strategies and avoidant strategies, and further analyses tested whether positive or negative friendship quality moderated the relation between group type and SPS behaviors. Group differences were found between the TBI and non-injured groups in the SPS direct strategy of commands; no group differences were found for the other SPS outcome variables of interest. Moderation analyses partially supported the study hypotheses regarding friendship quality as a moderator variable.
Additional analyses examined SPS goal-strategy sequencing and grouped SPS goals into high-cost and low-cost categories. Results showed a trend supporting the hypothesis that children with TBI have fewer SPS successes, especially with high-cost goals, compared to the other two groups. Findings are discussed with emphasis on the moderation results involving children with severe TBI.
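
The moderation analyses mentioned above typically amount to testing an interaction term in a regression of the SPS outcome on group membership and friendship quality. A minimal sketch with a binary group indicator and synthetic data (variable names and values are illustrative, not the study's data):

```python
import numpy as np

def moderation_fit(group, quality, outcome):
    """Fit outcome ~ group + quality + group*quality by least squares;
    a non-zero interaction coefficient indicates that friendship quality
    moderates the group effect. `group` is a 0/1 indicator."""
    X = np.column_stack([np.ones_like(quality),  # intercept
                         group,                  # main effect of group
                         quality,                # main effect of quality
                         group * quality])       # moderation (interaction)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta  # [intercept, group, quality, interaction]
```

In practice one would test the interaction coefficient for significance (e.g. with an F- or t-test) rather than just inspect its magnitude.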

Relevance:

30.00%

Publisher:

Abstract:

Variable Data Printing (VDP) has brought new flexibility and dynamism to the printed page. Each printed instance of a specific class of document can now have different degrees of customized content within the document template. This flexibility comes at a cost. If every printed page is potentially different from all others it must be rasterized separately, which is a time-consuming process. Technologies such as PPML (Personalized Print Markup Language) attempt to address this problem by dividing the bitmapped page into components that can be cached at the raster level, thereby speeding up the generation of page instances. A large number of documents are stored in Page Description Languages at a higher level of abstraction than the bitmapped page. Much of this content could be reused within a VDP environment provided that separable document components can be identified and extracted. These components then need to be individually rasterisable so that each high-level component can be related to its low-level (bitmap) equivalent. Unfortunately, the unstructured nature of most Page Description Languages makes it difficult to extract content easily. This paper outlines the problems encountered in extracting component-based content from existing page description formats, such as PostScript, PDF and SVG, and how the differences between the formats affect the ease with which content can be extracted. The techniques are illustrated with reference to a tool called COG Extractor, which extracts content from PDF and SVG and prepares it for reuse.
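
The PPML-style reuse that motivates this work can be sketched as a raster cache keyed by component identity, so a component repeated across page instances is rasterized only once; the rasterizer below is a trivial stand-in for a real RIP:

```python
class ComponentCache:
    """Sketch of raster-level component caching: rasterize each document
    component once and reuse the cached raster across page instances.
    `rasterize_fn` is a hypothetical stand-in for a real rasterizer."""

    def __init__(self, rasterize_fn):
        self.rasterize_fn = rasterize_fn
        self.cache = {}
        self.misses = 0  # how many components actually had to be rasterized

    def get(self, component_id, source):
        if component_id not in self.cache:
            self.cache[component_id] = self.rasterize_fn(source)
            self.misses += 1
        return self.cache[component_id]
```

With thousands of page instances sharing a template, most `get` calls become cache hits, which is exactly the speed-up the raster-level caching aims for.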

Relevance:

30.00%

Publisher:

Abstract:

Master's thesis, UnB/UFPB/UFRN, Multi-Institutional and Inter-Regional Graduate Program in Accounting Sciences, 2016.


Relevance:

30.00%

Publisher:

Abstract:

We adapt a variable neighborhood search heuristic to handle the traveling salesman problem with time windows (TSPTW) when the objective is to minimize the arrival time at the destination depot. We use efficient methods to check the feasibility and the profitability of a move, and we explore the neighborhoods in orders that reduce the search space. The resulting method is competitive with the state of the art: we improve the best known solutions for two classes of instances, and we report results on several TSPTW instances for the first time.
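
The feasibility and cost checks inside such a neighborhood search reduce to propagating arrival times along a tour, waiting at windows that open late and rejecting tours that miss a window. A minimal sketch with toy travel times and windows (the paper's incremental checks are far more efficient than this full re-evaluation):

```python
def tour_makespan(tour, travel, windows):
    """Evaluate a TSPTW tour: return the completion time at the final node,
    or None if some time window is missed. travel[a][b] is the travel time
    from a to b; windows[n] = (earliest, latest) service times (toy data)."""
    t = 0.0
    for prev, nxt in zip(tour, tour[1:]):
        t += travel[prev][nxt]
        earliest, latest = windows[nxt]
        if t > latest:
            return None          # infeasible: window already closed
        t = max(t, earliest)     # wait for the window to open
    return t
```

A candidate move (e.g. relocating a customer) is accepted only if the resulting tour is feasible and finishes earlier, matching the arrival-time-minimization objective.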


Relevance:

30.00%

Publisher:

Abstract:

We report the observation of the insulator-to-metal transition in crystalline silicon samples supersaturated with vanadium. Ion implantation followed by pulsed laser melting and rapid resolidification produces high-quality single-crystalline silicon samples with vanadium concentrations that exceed equilibrium values by more than 5 orders of magnitude. Temperature-dependent analysis of the conductivity and Hall mobility for temperatures from 10 K to 300 K indicates that a transition from an insulating to a metallic phase occurs at a vanadium concentration between 1.1 × 10^(20) and 1.3 × 10^(21) cm^(−3). Samples in the insulating phase present a variable-range hopping transport mechanism with a Coulomb gap at the Fermi energy level. The electron wave function localization length increases from 61 to 82 nm as the vanadium concentration in the films increases, supporting the picture of an impurity band merging through delocalization of the impurity states. In the metallic phase, electronic transport presents a scattering mechanism related to the Kondo effect, suggesting the presence of local magnetic moments in the vanadium-supersaturated silicon.
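
Variable-range hopping with a Coulomb gap at the Fermi level is conventionally described by the Efros-Shklovskii conductivity law; the standard textbook form (not taken from the paper) is:

```latex
\sigma(T) = \sigma_0 \exp\!\left[-\left(\frac{T_{\mathrm{ES}}}{T}\right)^{1/2}\right],
\qquad
T_{\mathrm{ES}} = \frac{C\,e^2}{k_B\,\varepsilon\,\xi},
```

where \xi is the localization length (61 to 82 nm above), \varepsilon the dielectric constant and C a numerical constant. The growth of \xi with vanadium concentration thus lowers T_ES until the activated form breaks down at the transition to the metallic phase.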

Relevance:

30.00%

Publisher:

Abstract:

In this paper we prove well-posedness for a measure-valued continuity equation with solution-dependent velocity and flux boundary conditions, posed on a bounded one-dimensional domain. We generalize the results of an earlier paper [J. Differential Equations, 259 (2015), pp. 1068-1097] to settings where the dynamics are driven by interactions. In a forward-Euler-like approach, we construct a time-discretized version of the original problem and employ those results as a building block within each subinterval. A limit solution is obtained as the mesh size of the time discretization goes to zero; moreover, the limit is independent of the specific way of partitioning the time interval [0, T]. This paper is partially based on results presented in Chapter 5 of [Evolution Equations for Systems Governed by Social Interactions, Ph.D. thesis, Eindhoven University of Technology, 2015], while a number of issues that were still open there are now resolved.
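
Schematically, the problem and the forward-Euler-like construction take the following form (the interaction kernel K and this notation are illustrative, not the paper's):

```latex
\partial_t \mu_t + \partial_x\big(v[\mu_t]\,\mu_t\big) = 0
\quad \text{on } [0,1],
\qquad
v[\mu](x) = \int_{[0,1]} K(x - y)\,\mathrm{d}\mu(y),
```

with flux boundary conditions at the endpoints. On each subinterval [t_k, t_{k+1}) the velocity is frozen at v[\mu_{t_k}], so the solution-independent results of the earlier paper apply piecewise; the limit as max_k (t_{k+1} - t_k) \to 0 yields the solution of the interaction-driven problem.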

Relevance:

30.00%

Publisher:

Abstract:

A modified constitutive equation for a third-grade fluid is proposed so that the model is suitable for applications where shear-thinning or shear-thickening may occur. To this end, we use the Cosserat theory approach, reducing the exact three-dimensional equations to a system depending only on time and on a single spatial variable. This one-dimensional system is obtained by integrating the linear momentum equation over the cross-section of the tube, using a velocity field approximation provided by the Cosserat theory. From this reduced system, we obtain the unsteady equations for the wall shear stress and the mean pressure gradient as functions of the volume flow rate, Womersley number, viscoelastic coefficient and flow index over a finite section of the tube geometry with constant circular cross-section.
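
The cross-sectional averaging step follows a standard pattern: integrating the axial momentum balance over a circular cross-section of radius R links the volume flow rate, the mean pressure gradient and the wall shear stress. Schematically (symbols illustrative, not the paper's reduced system):

```latex
\frac{\partial Q}{\partial t}
= \frac{\pi R^{2}}{\rho}\left(-\frac{\partial \bar{p}}{\partial z}\right)
- \frac{2\pi R}{\rho}\,\tau_w,
```

where Q is the volume flow rate, \bar{p} the mean pressure and \tau_w the wall shear stress; the Cosserat velocity approximation and the modified third-grade constitutive law then close this balance by expressing \tau_w in terms of Q, the Womersley number, the viscoelastic coefficient and the flow index.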

Relevance:

30.00%

Publisher:

Abstract:

In this study, we carried out a comparative analysis between two classical methodologies to prospect residue contacts in proteins: the traditional cutoff-dependent (CD) approach and the cutoff-free Delaunay tessellation (DT). In addition, two alternative coarse-grained forms to represent residues were tested: using the alpha carbon (CA) and the side-chain geometric center (GC). A database was built comprising three top classes: all alpha, all beta, and alpha/beta. We found that a cutoff value of about 7.0 Å emerges as an important distance parameter. Up to 7.0 Å, CD and DT properties are unified, which implies that at this distance all contacts are complete and legitimate (not occluded). We also show that DT has an intrinsic missing-edges problem when mapping the first layer of neighbors. In proteins, it may produce systematic errors affecting mainly the contact network in beta chains with CA. The almost-Delaunay (AD) approach has been proposed to solve this DT problem. We found that even AD may not be an advantageous solution. As a consequence, in the strict range up to 7.0 Å, the CD approach proved to be a simpler, more complete, and more reliable technique than DT or AD. Finally, we show that coarse-grained residue representations may introduce bias in the analysis of neighbors at cutoffs up to 6.8 Å, with CA favoring alpha proteins and GC favoring beta proteins. This provides an additional argument for the value of 7.0 Å as an important lower-bound cutoff to be used in contact analysis of proteins.
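
The cutoff-dependent (CD) definition compared here is simple to state in code: any residue pair whose representative points (CA or GC) lie within the cutoff counts as a contact. A minimal sketch (coordinates hypothetical, distances in angstroms):

```python
import math

def cd_contacts(coords, cutoff=7.0):
    """Cutoff-dependent residue contacts: every pair of residue
    representatives (alpha carbon or side-chain geometric center) within
    `cutoff` angstroms of each other is a contact.
    `coords` maps residue id -> (x, y, z)."""
    contacts = []
    ids = sorted(coords)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if math.dist(coords[a], coords[b]) <= cutoff:
                contacts.append((a, b))
    return contacts
```

Delaunay tessellation instead takes the edges of the tessellation of the same points as contacts; the study's comparison amounts to contrasting those edge sets against this distance-thresholded one at cutoffs around 7.0 Å.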

Relevance:

30.00%

Publisher:

Abstract:

This thesis studies the agent identity privacy problem in the scalar linear quadratic Gaussian (LQG) control system. Privacy models and privacy measures must first be established for this problem, which depends on a trajectory of correlated data rather than on a single observation; the proposed privacy models and measures take these two characteristics into account. The agent identity is a binary hypothesis: Agent A or Agent B. An eavesdropper is assumed to run a hypothesis test on the agent identity based on the intercepted environment-state sequence, and the privacy risk is measured by the Kullback-Leibler divergence between the probability distributions of the state sequences under the two hypotheses. Taking into account both the accumulated control reward and the privacy risk, an optimization problem over the policy of Agent B is formulated, addressing the two design objectives of maximizing the control reward and minimizing the privacy risk. The optimal deterministic privacy-preserving LQG policy of Agent B is shown to be a linear mapping, and a sufficient condition guarantees that this policy is time-invariant in the asymptotic regime. Adding an independent Gaussian random variable cannot improve the performance of Agent B. Finally, numerical experiments justify the theoretical results and illustrate the reward-privacy trade-off; the resulting observations and insights are discussed in the last chapter.
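
The privacy measure described above, the KL divergence between state-sequence distributions, can be illustrated in the simplest case of independent scalar Gaussian states per time step (an illustrative simplification; the thesis treats full correlated trajectories):

```python
import math

def gaussian_kl(mu_a, var_a, mu_b, var_b):
    """KL divergence KL(N(mu_a, var_a) || N(mu_b, var_b)) for scalar
    Gaussians (closed form)."""
    return 0.5 * (math.log(var_b / var_a)
                  + (var_a + (mu_a - mu_b) ** 2) / var_b
                  - 1.0)

def sequence_privacy_risk(seq_a, seq_b):
    """Privacy risk as the KL divergence between the distributions of the
    state sequence under the two hypotheses, assuming (for illustration)
    independent scalar Gaussian states per time step, given as lists of
    (mean, variance) pairs; the divergence then sums over time."""
    return sum(gaussian_kl(ma, va, mb, vb)
               for (ma, va), (mb, vb) in zip(seq_a, seq_b))
```

A larger divergence makes the eavesdropper's hypothesis test easier, so Agent B's policy design trades accumulated control reward against keeping this quantity small.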

Relevance:

20.00%

Publisher:

Abstract:

The present study investigated the effects of running at 0.8 or 1.2 km/h on inflammatory proteins (i.e., protein levels of TNF-α, IL-1β, and NF-κB) and metabolic proteins (i.e., protein levels of SIRT-1 and PGC-1α, and AMPK phosphorylation) in the quadriceps of rats. Male Wistar rats at 3 (young) and 18 (middle-aged) months of age were divided into nonexercised (NE) groups and groups exercised at 0.8 or 1.2 km/h. The rats were trained on a treadmill, 50 min per day, 5 days per week, for 8 weeks. Forty-eight hours after the last training session, muscles were removed, homogenized, and analyzed using biochemical and western blot techniques. Our results showed that: (a) running at 0.8 km/h decreased the inflammatory proteins and increased the metabolic proteins compared with NE rats; (b) these responses were lower for the inflammatory proteins and higher for the metabolic proteins in young rats compared with middle-aged rats; (c) running at 1.2 km/h decreased the inflammatory proteins and increased the metabolic proteins compared with 0.8 km/h; (d) these responses were similar between young and middle-aged rats trained at 1.2 km/h. In summary, the age-related increases in inflammatory proteins and the age-related declines in metabolic proteins can be reversed and largely improved by treadmill training.