915 results for Conditional Directed Graph
Abstract:
For the last 10 years we have been investigating the cryptographic properties of infinite families of simple graphs of large girth with a special colouring of vertices. Such families can be used for the development of cryptographic algorithms (in symmetric or public key modes) and turbocodes in error correction theory. Only a few families of simple graphs of large unbounded girth and arbitrarily large degree are known. This paper is devoted to the more general theory of directed graphs of large girth and their cryptographic applications. It contains new explicit algebraic constructions of infinite families of such graphs. We show that they can be used for the implementation of secure and very fast symmetric encryption algorithms. The symbolic computation technique allows us to create a public key mode for the encryption scheme based on algebraic graphs.
Abstract:
The first part of this paper deals with an extension of Dirac's Theorem to directed graphs. It is related to a result often referred to as the Ghouila-Houri Theorem. Here we show that the requirement of being strongly connected in the hypothesis of the Ghouila-Houri Theorem is redundant. The second part of the paper shows that a condition on the number of edges for a graph to be Hamiltonian implies Ore's condition on the degrees of the vertices.
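For reference, the theorem in question is usually stated as follows (a standard textbook formulation with conventional degree notation, not quoted from the paper itself):

```latex
\textbf{Theorem (Ghouila-Houri, 1960).}
Let $D$ be a strongly connected digraph on $n$ vertices. If every vertex
$v$ of $D$ satisfies
\[
  d^{+}(v) + d^{-}(v) \ge n ,
\]
then $D$ contains a Hamiltonian cycle.
```

The claim of the abstract above is that the degree condition alone already forces strong connectivity, so that hypothesis can be dropped.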
Abstract:
All teachers participate in self-directed professional development (PD) at some point in their careers; however, the degree to which this participation takes place varies greatly from teacher to teacher and is influenced by the leadership of the school principal. The motivation behind why teachers choose to engage in PD is an important construct. There is therefore a need to better understand the leader's role in how and why teachers engage in self-directed professional development. The purpose of this research was to explore elementary teachers' motivation for, and the school principal's influence on, their engagement in self-directed professional development. Three research questions guided this study: 1. What motivates teachers to engage in self-directed professional development? 2. What conditions are necessary for promoting teachers' engagement in self-directed professional development? 3. What are teachers' perceptions of the principal's role in supporting, fostering, encouraging, and sustaining the professional development of teachers? A qualitative research approach was adopted for this study. Six elementary teachers from one south-eastern Ontario school board, three novice and three more experienced, responded to a common set of 14 questions. Their responses were documented via individual interviews, transcribed verbatim, and thematically analysed. The findings suggested that, coupled with individual motivating influences, the culture of the school was a conditional dynamic that either stimulated or dissuaded participation in self-directed PD. The school principal provided an additional catalyst or deterrent through his or her relational disposition. When teachers felt their needs for competency, relatedness, and autonomy were satisfied, the conditions necessary to motivate teachers to engage in PD were fulfilled.
A principal who personified the tenets of transformational leadership facilitated teachers' inclinations to take on PD. A leadership style that was collaborative and trusting and allowed for personal autonomy was a dominant foundational piece, critical for participation in self-directed PD. Finally, principals were found to positively impact school climate by partaking in PD alongside teachers and by ensuring there was a shared vision of the school, so that teachers could tailor PD to parallel school interests.
Abstract:
Verbal fluency is the ability to produce a satisfying sequence of spoken words during a given time interval. The core of verbal fluency lies in the capacity to manage the executive aspects of language. The standard scores of the semantic verbal fluency test are broadly used in the neuropsychological assessment of the elderly, and different analytical methods are likely to extract even more information from the data generated in this test. Graph theory, a mathematical approach to analyzing relations between items, represents a promising tool to understand a variety of neuropsychological states. This study reports a graph analysis of data generated by the semantic verbal fluency test by cognitively healthy elderly (NC), patients with Mild Cognitive Impairment – the amnestic (aMCI) and amnestic multiple domain (a+mdMCI) subtypes – and patients with Alzheimer's disease (AD). Sequences of words were represented as a speech graph in which every word corresponded to a node and temporal links between words were represented by directed edges. To characterize the structure of the data we calculated 13 speech graph attributes (SGAs). The individuals were compared when divided into three (NC – MCI – AD) and four (NC – aMCI – a+mdMCI – AD) groups. When the three groups were compared, significant differences were found in the standard measure of correct words produced and in three SGAs: diameter, average shortest path, and network density. The SGAs sorted the elderly groups with good specificity and sensitivity. When the four groups were compared, they differed significantly in network density, except between the two MCI subtypes and between NC and aMCI. The diameter of the network and the average shortest path were significantly different between NC and AD, and between aMCI and AD. The SGAs sorted the elderly into their groups with good specificity and sensitivity, performing better than the standard score of the task.
These findings support a new methodological framework for assessing the strength of semantic memory through the verbal fluency task, with the potential to amplify the predictive power of this test. Graph analysis is likely to become clinically relevant in neurology and psychiatry, and may be particularly useful in the differential diagnosis of the elderly.
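The speech-graph construction described above (words as nodes, temporal transitions as directed edges) can be sketched as follows. This is a minimal illustration in plain Python: the attribute definitions follow common graph-theory usage and are assumptions, since the paper's exact formulas for the 13 SGAs are not reproduced here.

```python
from collections import deque

def speech_graph(words):
    """Each distinct word becomes a node; each temporal transition
    word[i] -> word[i+1] becomes a directed edge."""
    nodes = set(words)
    edges = {(a, b) for a, b in zip(words, words[1:])}
    return nodes, edges

def distances_from(src, nodes, edges):
    """BFS distances from src along directed edges."""
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def graph_attributes(words):
    """Compute a few of the attributes named in the abstract:
    network density, diameter, and average shortest path."""
    nodes, edges = speech_graph(words)
    n = len(nodes)
    density = len(edges) / (n * (n - 1)) if n > 1 else 0.0
    # Collect all finite directed shortest-path lengths between distinct nodes.
    finite = [d for s in nodes
              for d in distances_from(s, nodes, edges).values() if d > 0]
    return {
        "nodes": n,
        "edges": len(edges),
        "density": density,
        "diameter": max(finite) if finite else 0,
        "avg_shortest_path": sum(finite) / len(finite) if finite else 0.0,
    }

# A toy word sequence with one repeated word ("dog").
attrs = graph_attributes(["cat", "dog", "horse", "dog", "cow"])
```

Repetitions and loops in the word sequence change the edge set but not the node count, which is what lets these attributes separate participants producing the same number of words in structurally different ways.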
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used to map the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules.
In this research we intend to design more human-like scheduling algorithms by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, and learning thus amounts to 'counting' in the case of multinomial distributions. In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the roulette wheel strategy.
The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References: 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
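The "learning as counting" idea and the roulette wheel selection described above can be sketched as follows. This is an illustrative sketch only: the three-rule example, the function names, and the use of stage-wise marginal distributions (rather than the full Bayesian-network conditionals) are assumptions for brevity, not taken from the cited papers.

```python
import random
from collections import Counter

def learn_distributions(good_strings, n_rules):
    """For a fixed-structure, fully observed model, learning reduces to
    counting: estimate the probability of each rule at each construction
    stage from its frequency in the current set of promising rule strings."""
    n_stages = len(good_strings[0])
    total = len(good_strings)
    dists = []
    for stage in range(n_stages):
        counts = Counter(s[stage] for s in good_strings)
        dists.append([counts.get(r, 0) / total for r in range(n_rules)])
    return dists

def sample_string(dists, rng):
    """Generate a new rule string stage by stage from the learned
    distributions (new instances for each node, in the text's terms)."""
    return [rng.choices(range(len(p)), weights=p)[0] for p in dists]

def roulette_wheel(strengths, rng):
    """LCS-style rule selection: a rule is chosen with probability
    proportional to its current strength."""
    return rng.choices(range(len(strengths)), weights=strengths)[0]

rng = random.Random(0)
good = [[0, 1, 2], [0, 2, 2], [1, 1, 2]]  # three promising rule strings
dists = learn_distributions(good, n_rules=3)
new_string = sample_string(dists, rng)
chosen = roulette_wheel([0.1, 0.0, 0.9], rng)  # rule 1 can never be picked
```

In a full BOA the sampled strings would be evaluated by the decoder, the fittest retained, and the distributions re-counted; the roulette wheel then plays the role of the strength-driven rule choice in the LCS step.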
Abstract:
Rapidity-odd directed flow (v1) measurements for charged pions, protons, and antiprotons near midrapidity (y = 0) are reported for Au+Au collisions at √sNN = 7.7, 11.5, 19.6, 27, 39, 62.4, and 200 GeV, as recorded by the STAR detector at the Relativistic Heavy Ion Collider. At intermediate impact parameters, the proton and net-proton slope parameter dv1/dy|y=0 shows a minimum between 11.5 and 19.6 GeV. In addition, the net-proton dv1/dy|y=0 changes sign twice between 7.7 and 39 GeV. The proton and net-proton results qualitatively resemble predictions of a hydrodynamic model with a first-order phase transition from hadronic matter to deconfined matter, and differ from hadronic transport calculations.
Abstract:
Aspergillus fumigatus is a common mould whose spores are a component of the normal airborne flora. Immune dysfunction permits developmental growth of inhaled spores in the human lung, causing aspergillosis, a significant threat to human health in the form of allergic and life-threatening invasive infections. The success of A. fumigatus as a pathogen is unique among close phylogenetic relatives and is poorly characterised at the molecular level. Recent genome sequencing of several Aspergillus species provides an exceptional opportunity to analyse fungal virulence attributes within a genomic and evolutionary context. To identify genes preferentially expressed during adaptation to the mammalian host niche, we generated multiple gene expression profiles from minute samplings of A. fumigatus germlings during initiation of murine infection. They reveal a highly co-ordinated A. fumigatus gene expression programme, governing metabolic and physiological adaptation, which allows the organism to prosper within the mammalian niche. As functions of phylogenetic conservation and genetic locus, 28% and 30%, respectively, of the A. fumigatus subtelomeric and lineage-specific gene repertoires are induced relative to laboratory culture, and physically clustered genes including loci directing pseurotin, gliotoxin and siderophore biosyntheses are a prominent feature. Locationally biased A. fumigatus gene expression is not prompted by in vitro iron limitation, acid, alkaline, anaerobic or oxidative stress. However, subtelomeric gene expression is favoured following ex vivo neutrophil exposure and in comparative analyses of richly and poorly nourished laboratory cultured germlings. We found remarkable concordance between the A. fumigatus host-adaptation transcriptome and those resulting from in vitro iron depletion, alkaline shift, nitrogen starvation and loss of the methyltransferase LaeA.
This first transcriptional snapshot of a fungal genome during initiation of mammalian infection provides the global perspective required to direct much-needed diagnostic and therapeutic strategies and reveals genome organisation and subtelomeric diversity as potential driving forces in the evolution of pathogenicity in the genus Aspergillus.
Abstract:
In Apis mellifera, hygienic behavior involves the recognition and removal of sick, damaged, or dead brood from capped cells. We investigated whether bees react in the same way to grouped versus isolated damaged capped brood cells. Three colonies of wild-type Africanized honey bees and three colonies of Carniolan honey bees were used for this investigation. Capped worker brood cells aged 12 to 14 days were perforated using the pin-killing method. After making holes in the brood cells, the combs were placed back into the hives; 24 h later the number of cleaned cells was recorded in areas with pin-killed and control brood cells. Four repetitions were made in each colony. Isolated cells were cleaned more frequently than grouped cells, though analysis of variance showed no significant difference (P = 0.1421). Carniolan bees were also somewhat, though not significantly, more hygienic than Africanized honey bees (P = 0.0840). We conclude that honey bees can detect and remove both isolated and grouped dead brood. The tendency towards greater hygienic efficiency directed towards grouped pin-killed brood may be a consequence of a greater concentration of volatiles emanating from the wounds in the dead pupae.
Abstract:
We measure directed flow (v1) for charged particles in Au+Au and Cu+Cu collisions at √sNN = 200 and 62.4 GeV, as a function of pseudorapidity (η), transverse momentum (pT), and collision centrality, based on data from the STAR experiment. We find that the directed flow depends on the incident energy but, contrary to all available model implementations, not on the size of the colliding system at a given centrality. We extend the validity of the limiting fragmentation concept to v1 in different collision systems, and investigate possible explanations for the observed sign change in v1(pT).
Abstract:
Fluctuations in the initial geometry of a nucleus-nucleus collision have recently been shown to result in a new type of directed flow (v1) that, unlike the usual directed flow, is also present at midrapidity. We compute this new v1 versus transverse momentum and centrality for Au-Au collisions at RHIC using the hydrodynamic code NeXSPheRIO. We find that the event plane of v1 is correlated with the angle of the initial dipole of the distribution, as predicted, though with a large dispersion. It is uncorrelated with the reaction plane. Our results are in excellent agreement with results inferred from STAR correlation data.
Abstract:
We consider the problem of estimating the interaction neighborhood from the partial observation of a finite number of realizations of a random field. We introduce a model selection rule to choose estimators of conditional probabilities among natural candidates. Our main result is an oracle inequality satisfied by the resulting estimator. We then use this selection rule in a two-step procedure to evaluate the interacting neighborhoods: the selection rule selects a small prior set of possible interacting points, and a cutting step removes the irrelevant points from this prior set. We also prove that Ising models satisfy the assumptions of the main theorems, without restrictions on the temperature, on the structure of the interaction graph, or on the range of the interactions; this therefore provides a large class of applications for our results. We give a computationally efficient procedure for these models. Finally, we show the practical efficiency of our approach in a simulation study.
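A toy version of the selection step reads as follows. This is an illustrative sketch only: the counting estimator of conditional probabilities and the BIC-style penalty below are generic stand-ins, not the oracle-inequality criterion of the paper, and the binary three-site field is an invented example.

```python
from collections import Counter
from itertools import combinations
from math import log

def penalized_loglik(samples, target, neighborhood, penalty):
    """Empirical conditional log-likelihood of the target site's values
    given the values at a candidate neighborhood, minus a complexity
    penalty growing with the neighborhood's configuration count."""
    joint, marg = Counter(), Counter()
    for x in samples:
        ctx = tuple(x[j] for j in neighborhood)
        joint[(ctx, x[target])] += 1
        marg[ctx] += 1
    loglik = sum(c * log(c / marg[ctx]) for (ctx, _), c in joint.items())
    return loglik - penalty * (2 ** len(neighborhood))

def select_neighborhood(samples, target, candidates, penalty=1.0, max_size=2):
    """Selection step: keep the candidate subset maximizing the penalized
    score (a cutting step would then prune it further)."""
    best, best_score = (), float("-inf")
    for k in range(max_size + 1):
        for nb in combinations(candidates, k):
            score = penalized_loglik(samples, target, nb, penalty)
            if score > best_score:
                best, best_score = nb, score
    return best

# Binary field observed at 3 sites over 20 realizations:
# site 1 copies site 0, while site 2 is irrelevant noise.
samples = [[0, 0, 0], [0, 0, 1], [1, 1, 0], [1, 1, 1]] * 5
neighborhood = select_neighborhood(samples, target=1, candidates=[0, 2])
```

With these data the score for the subset containing only site 0 dominates, since conditioning on it makes the target deterministic while extra sites only pay additional penalty.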
Abstract:
A broader characterization of industrial wastewaters, especially with respect to hazardous compounds and their potential toxicity, is often necessary in order to determine the best practical treatment (or pretreatment) technology available to reduce the discharge of harmful pollutants to the environment or to publicly owned treatment works. Using a toxicity-directed approach, this paper sets the base for a rational treatability study of polyester resin manufacturing wastewater. Relevant physical and chemical characteristics were determined. Respirometry was used for toxicity reduction evaluation after physical and chemical effluent fractionation. Of all the procedures investigated, only air stripping was significantly effective in reducing wastewater toxicity: at pH 7 it reduced toxicity by 18.2%, while at pH 11 a reduction of 62.5% was observed. The results indicated that the toxicants responsible for the most significant fraction of the effluent's instantaneous toxic effect on unadapted activated sludge were organic compounds that are poorly volatilized, or not volatilized at all, under acid conditions. These results provide useful directions for treatability studies grounded on actual effluent properties rather than on empirical assumptions or the scarce specific data available for this kind of industrial wastewater.
Abstract:
In this paper a bond graph methodology is used to model incompressible fluid flows with viscous and thermal effects. The distinctive characteristic of these flows is the role of pressure, which does not behave as a state variable but as a function that must act in such a way that the resulting velocity field has zero divergence. Velocity and entropy per unit volume are used as independent variables for a single-phase, single-component flow. Time-dependent nodal values and interpolation functions are introduced to represent the flow field, from which nodal vectors of velocity and entropy are defined as state variables. The system of momentum and continuity equations coincides with the one obtained by applying the Galerkin method to the weak formulation of the problem in finite elements. The integral incompressibility constraint is derived from the integral conservation of mechanical energy. The weak formulation of the thermal energy equation is modeled with true bond graph elements in terms of nodal vectors of temperature and entropy rates, resulting in a Petrov-Galerkin method. The resulting bond graph shows the coupling between the mechanical and thermal energy domains through the viscous dissipation term. All kinds of boundary conditions are handled consistently and can be represented as generalized effort or flow sources. A procedure for causality assignment is derived for the resulting graph, satisfying the second principle of thermodynamics.