917 results for Combinatorial Grassmannian


Relevance: 10.00%

Abstract:

The Ehrenfeucht–Silberger problem asks how long a word can be, relative to the length of its longest unbordered factors, before it must have an unbordered factor whose length equals its shortest period. The solution to the problem is a bound, depending on the length of the word's longest unbordered factors, such that every word of at least this length necessarily has an unbordered factor of the length of its shortest period. This thesis presents the best known solution to this problem, along with other closely related results. As the main theorem, we prove the best known bound in terms of the longest unbordered factors. The proof originates from the article The Ehrenfeucht–Silberger problem by Štěpán Holub and Dirk Nowotka (Journal of Combinatorial Theory, Series A) and from the preliminary version of that article in the proceedings of ICALP 2009. The presented solution is shown to be optimal up to a constant term by comparing it with the best known example of an infinite set of words in which the longest unbordered factors of every word are shorter than the shortest period, and in which the length of every word relative to the length of its longest unbordered factors is the largest known. As an introduction, basic results on periods of words and unbordered factors are presented, together with certain other conditions under which a word has an unbordered factor of the length of its shortest period. We also consider the analogous problem that asks for a corresponding bound in terms of the shortest period. As a new result, the best previously known bound is improved by one, which makes the resulting bound optimal. In addition, we characterize the form of all words that have no unbordered factor of the length of their shortest period and that are as long as possible relative to their shortest period. We further examine the critical factorization, which relates the shortest period of a word to its local periods, and present one proof of the critical factorization theorem. Finally, we prove a lemma needed in the proof of the main theorem, concerning the criticality of a factorization of a conjugate of a word.
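To make the central objects concrete, here is a small Python sketch (my illustration, not code from the thesis) computing the two quantities the problem relates: the shortest period of a word and the length of its longest unbordered factor.

```python
def shortest_period(w: str) -> int:
    """Shortest p with w[i] == w[i + p] for all valid i; equals n minus the
    length of the longest proper border (via the KMP failure table)."""
    n = len(w)
    if n == 0:
        return 0
    fail = [0] * n  # fail[i] = length of the longest proper border of w[:i+1]
    for i in range(1, n):
        k = fail[i - 1]
        while k > 0 and w[i] != w[k]:
            k = fail[k - 1]
        if w[i] == w[k]:
            k += 1
        fail[i] = k
    return n - fail[-1]

def is_unbordered(w: str) -> bool:
    # a word is unbordered if no nonempty proper prefix is also a suffix
    return all(w[:k] != w[-k:] for k in range(1, len(w)))

def longest_unbordered_factor(w: str) -> int:
    # brute force over all factors; fine for short illustrative words
    n = len(w)
    return max((j - i for i in range(n) for j in range(i + 1, n + 1)
                if is_unbordered(w[i:j])), default=0)

w = "abaababaabaab"
print(shortest_period(w), longest_unbordered_factor(w))
```

The Ehrenfeucht–Silberger problem asks how large len(w) can be, as a function of longest_unbordered_factor(w), before the two printed values are forced to coincide.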

Relevance: 10.00%

Abstract:

This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial-time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k)-approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink.
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
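As a concrete illustration of the connectivity notion above (a minimal sketch under my own conventions, not code from the thesis): by Menger's theorem, u and v are k-edge-connected exactly when k units of flow can be routed from u to v with unit edge capacities, so a simple Edmonds-Karp maximum flow decides it.

```python
from collections import deque

def edge_connectivity(n, edges, u, v):
    """Maximum number of pairwise edge-disjoint u-v paths in an undirected
    graph on vertices 0..n-1 (Edmonds-Karp with unit capacities)."""
    cap = [[0] * n for _ in range(n)]
    for a, b in edges:          # each undirected edge gives capacity 1 each way
        cap[a][b] += 1
        cap[b][a] += 1
    flow = 0
    while True:
        parent = [-1] * n       # BFS for an augmenting path from u to v
        parent[u] = u
        q = deque([u])
        while q and parent[v] == -1:
            x = q.popleft()
            for y in range(n):
                if cap[x][y] > 0 and parent[y] == -1:
                    parent[y] = x
                    q.append(y)
        if parent[v] == -1:     # no augmenting path: flow equals connectivity
            return flow
        y = v                   # push one unit back along the path found
        while y != u:
            x = parent[y]
            cap[x][y] -= 1
            cap[y][x] += 1
            y = x
        flow += 1

# 4-cycle plus the chord (0,2): vertices 0 and 2 are 3-edge-connected
print(edge_connectivity(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], 0, 2))  # 3
```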

Relevance: 10.00%

Abstract:

Chemotaxis, the phenomenon in which cells move in response to extracellular chemical gradients, plays a prominent role in the mammalian immune response. During this process, a number of chemical signals, called chemoattractants, are produced at or proximal to sites of infection and diffuse into the surrounding tissue. Immune cells sense these chemoattractants and move in the direction where their concentration is greatest, thereby locating the source of attractants and their associated targets. Leading the assault against new infections is a specialized class of leukocytes (white blood cells) known as neutrophils, which normally circulate in the bloodstream. Upon activation, these cells emigrate out of the vasculature and navigate through interstitial tissues toward target sites. There they phagocytose bacteria and release a number of proteases and reactive oxygen intermediates with antimicrobial activity. Neutrophils recruited by infected tissue in vivo are likely confronted by complex chemical environments consisting of a number of different chemoattractant species. These signals may include end target chemicals produced in the vicinity of the infectious agents, and endogenous chemicals released by local host tissues during the inflammatory response. To successfully locate their pathogenic targets within these chemically diverse and heterogeneous settings, activated neutrophils must be capable of distinguishing between the different signals and employing some sort of logic to prioritize among them. This ability to simultaneously process and interpret multiple signals is thought to be essential for efficient navigation of the cells to target areas. In particular, aberrant cell signaling and defects in this functionality are known to contribute to medical conditions such as chronic inflammation, asthma and rheumatoid arthritis. To elucidate the biomolecular mechanisms underlying the neutrophil response to different chemoattractants, a number of efforts have been made toward understanding how cells respond to different combinations of chemicals. Most notably, recent investigations have shown that in the presence of both end target and endogenous chemoattractant variants, the cells migrate preferentially toward the former type, even in very low relative concentrations of the latter. Interestingly, however, when the cells are exposed to two different endogenous chemical species, they exhibit a combinatorial response in which distant sources are favored over proximal sources. Some additional results also suggest that cells located between two endogenous chemoattractant sources will respond to the vectorial sum of the combined gradients. In the long run, this peculiar behavior could result in oscillatory cell trajectories between the two sources. To further explore the significance of these and other observations, particularly in the context of physiological conditions, we introduce in this work a simplified phenomenological model of neutrophil chemotaxis. In particular, this model incorporates a trait commonly known as directional persistence - the tendency for migrating neutrophils to continue moving in the same direction (much like momentum) - while also accounting for the dose-response characteristics of cells to different chemical species. Simulations based on this model suggest that the efficiency of cell migration in complex chemical environments depends significantly on the degree of directional persistence.
In particular, with appropriate values for this parameter, cells can improve their odds of locating end targets by drifting through a network of attractant sources in a loosely-guided fashion. This corroborates the prediction that neutrophils randomly migrate from one chemoattractant source to the next while searching for their end targets. These cells may thus use persistence as a general mechanism to avoid being trapped near sources of endogenous chemoattractants - the mathematical analogue of local maxima in a global optimization problem. Moreover, this general foraging strategy may apply to other biological processes involving multiple signals and long-range navigation.
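A toy rendition of such a persistent, gradient-biased walk might look as follows (my sketch; the point-source gradient, the blending rule, and all parameter values are assumptions, not the authors' model):

```python
import math, random

def simulate(source, steps=200, p=0.7, noise=0.4):
    """Persistent random walk biased toward a single chemoattractant source;
    p is the directional persistence, noise the angular perturbation."""
    x, y = 0.0, 0.0
    theta = random.uniform(-math.pi, math.pi)            # initial heading
    for _ in range(steps):
        goal = math.atan2(source[1] - y, source[0] - x)  # up-gradient direction
        # blend the old heading (weight p) with the gradient direction as
        # unit vectors, to avoid angle-wraparound artifacts, then add noise
        vx = p * math.cos(theta) + (1 - p) * math.cos(goal)
        vy = p * math.sin(theta) + (1 - p) * math.sin(goal)
        theta = math.atan2(vy, vx) + random.gauss(0.0, noise)
        x += math.cos(theta)
        y += math.sin(theta)
    return x, y

# higher p gives straighter paths that can drift past (escape) a local source
print(simulate(source=(30.0, 10.0)))
```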

Relevance: 10.00%

Abstract:

Datacenters have emerged as the dominant form of computing infrastructure over the last two decades. The tremendous increase in the requirements of data analysis has led to a proportional increase in power consumption, and datacenters are now one of the fastest growing electricity consumers in the United States. Another rising concern is the loss of throughput due to network congestion. Scheduling models that do not explicitly account for data placement may lead to a transfer of large amounts of data over the network, causing unacceptable delays. In this dissertation, we study different scheduling models that are inspired by the dual objectives of minimizing energy costs and network congestion in a datacenter. As datacenters are equipped to handle peak workloads, the average server utilization in most datacenters is very low. As a result, one can achieve huge energy savings by selectively shutting down machines when demand is low. In this dissertation, we introduce the network-aware machine activation problem to find a schedule that simultaneously minimizes the number of machines necessary and the congestion incurred in the network. Our model significantly generalizes well-studied combinatorial optimization problems such as hard-capacitated hypergraph covering and is thus strongly NP-hard. As a result, we focus on finding good approximation algorithms. Data-parallel computation frameworks such as MapReduce have popularized the design of applications that require a large amount of communication between different machines. Efficient scheduling of these communication demands is essential to guarantee efficient execution of the different applications. In the second part of the thesis, we study the approximability of the co-flow scheduling problem that has recently been introduced to capture these application-level demands. Finally, we also study the question, "In what order should one process jobs?" Often, precedence constraints specify a partial order over the set of jobs and the objective is to find suitable schedules that satisfy the partial order. However, in the presence of hard deadline constraints, it may be impossible to find a schedule that satisfies all precedence constraints. In this thesis we formalize different variants of job scheduling with soft precedence constraints and conduct the first systematic study of these problems.
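For flavor, the uncapacitated special case of the covering problems mentioned above is classic set cover, whose greedy O(log n)-approximation is sketched below (an illustration of the style of guarantee sought, not the dissertation's algorithm):

```python
def greedy_set_cover(universe, sets):
    """sets: dict mapping a machine name to the set of demands it can serve;
    greedily picks machines until the universe is covered (O(log n)-approx)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # pick the machine covering the most still-uncovered demands
        best = max(sets, key=lambda s: len(sets[s] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe not coverable")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

machines = {"m1": {1, 2, 3}, "m2": {3, 4}, "m3": {4, 5, 6}, "m4": {1, 6}}
print(greedy_set_cover({1, 2, 3, 4, 5, 6}, machines))  # ['m1', 'm3']
```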

Relevance: 10.00%

Abstract:

International audience

Relevance: 10.00%

Abstract:

Humans and robots have complementary strengths in performing assembly operations. Humans are very good at perception tasks in unstructured environments. They are able to recognize and locate a part from a box of miscellaneous parts. They are also very good at complex manipulation in tight spaces. Humans' sensory characteristics, motor abilities, knowledge and skills give them the ability to react to unexpected situations and resolve problems quickly. In contrast, robots are very good at pick and place operations and highly repeatable in placement tasks. Robots can perform tasks at high speeds and still maintain precision in their operations. Robots can also operate for long periods of time. Robots are also very good at applying high forces and torques. Typically, robots are used in mass production. Small batch and custom production operations predominantly use manual labor. The high labor cost is making it difficult for small and medium manufacturers to remain cost competitive in high wage markets. These manufacturers are mainly involved in small batch and custom production. They need to find a way to reduce the labor cost in assembly operations. Purely robotic cells will not be able to provide them with the necessary flexibility. Creating hybrid cells where humans and robots can collaborate in close physical proximity is a potential solution. The underlying idea behind such cells is to decompose assembly operations into tasks such that humans and robots can collaborate by performing sub-tasks that are suitable for them. Realizing hybrid cells that enable effective human and robot collaboration is challenging. This dissertation addresses the following three computational issues involved in developing and utilizing hybrid assembly cells: - We should be able to automatically generate plans to operate hybrid assembly cells to ensure efficient cell operation. This requires generating feasible assembly sequences and instructions for robots and human operators, respectively. Automated planning poses the following two challenges. First, generating operation plans for complex assemblies is challenging. The complexity can come from the combinatorial explosion caused by the size of the assembly or from the complex paths needed to perform the assembly. Second, generating feasible plans requires accounting for robot and human motion constraints. The first objective of the dissertation is to develop the underlying computational foundations for automatically generating plans for the operation of hybrid cells. It addresses both assembly complexity and motion constraints issues. - The collaboration between humans and robots in the assembly cell will only be practical if human safety can be ensured during the assembly tasks that require collaboration between humans and robots. The second objective of the dissertation is to evaluate different options for real-time monitoring of the state of the human operator with respect to the robot and develop strategies for taking appropriate measures to ensure human safety when the planned move by the robot may compromise the safety of the human operator. In order to be competitive in the market, the developed solution will have to include considerations about cost without significantly compromising quality. - In the envisioned hybrid cell, we will be relying on human operators to bring the part into the cell.
If the human operator makes an error in selecting the part or fails to place it correctly, the robot will be unable to correctly perform the task assigned to it. If the error goes undetected, it can lead to a defective product and inefficiencies in the cell operation. The reason for human error can be either confusion due to poor-quality instructions or the human operator not paying adequate attention to the instructions. In order to ensure smooth and error-free operation of the cell, we will need to monitor the state of the assembly operations in the cell. The third objective of the dissertation is to identify and track parts in the cell and automatically generate instructions for taking corrective actions if a human operator deviates from the selected plan. Potential corrective actions may involve re-planning if it is possible to continue the assembly from the current state. Corrective actions may also involve issuing a warning and generating instructions to undo the current task.
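As a minimal illustration of the precedence reasoning behind assembly sequencing (hypothetical part names and constraints, not the dissertation's planner): given "a must precede b" constraints, a topological sort yields one feasible assembly order, and the combinatorial explosion mentioned above stems from the number of such orders in large assemblies.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# each part maps to the set of parts that must be installed before it
precedence = {
    "base":    set(),
    "bracket": {"base"},
    "bolt":    {"bracket"},
    "cover":   {"bolt", "bracket"},
}
print(list(TopologicalSorter(precedence).static_order()))
# -> ['base', 'bracket', 'bolt', 'cover']
```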

Relevance: 10.00%

Abstract:

A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically, following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet completed, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work that used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, new rule strings have been obtained. Sets of rule strings are generated in this way, some of which will replace previous strings based on fitness. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation, 2: allocate nurse to low-cost shifts). At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to the selection pressure and reinforcement learning, we observe two solution pathways: because pure low-cost or random allocation produces low-quality solutions, either rule 1 is used for the first 2-3 nurses and rule 2 on the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after 2-3 applications of rule 1', or vice versa. It should be noted that for our problem, and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover, and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are:

1. Set t = 0, and generate an initial population P(0) at random;
2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t);
3. Compute the conditional probabilities of each node according to this set of promising solutions;
4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) is generated in this way;
5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t + 1;
6. If the termination conditions are not met (we use 2000 generations), go to step 2.
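A minimal Python sketch of the estimate-and-sample core of steps 3-4 (a simplified univariate version of the learning-by-counting described above; the smoothing and the toy data are my assumptions, not the paper's):

```python
import random

RULES = range(4)   # 0 Random, 1 Cheapest Cost, 2 Best Cover, 3 Balance
N_NURSES = 10

def learn_probs(promising):
    """Step 3: count rule usage per nurse position in the promising strings."""
    probs = []
    for pos in range(N_NURSES):
        counts = [1] * len(RULES)              # Laplace smoothing
        for s in promising:
            counts[s[pos]] += 1
        total = sum(counts)
        probs.append([c / total for c in counts])
    return probs

def sample_string(probs):
    """Step 4: roulette-wheel selection of a rule for each nurse."""
    return [random.choices(RULES, weights=p)[0] for p in probs]

promising = [[random.choice(RULES) for _ in range(N_NURSES)] for _ in range(20)]
print(sample_string(learn_probs(promising)))   # one new rule string
```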
Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see whether there is a good construction sequence for individual data instances, given a fixed nurse scheduling order. If so, the good patterns could be recognized and then extracted as new domain knowledge. Thus, by using this extracted knowledge, we can assign specific rules to the corresponding nurses beforehand, and only schedule the remaining nurses with all available rules, making it possible to reduce the solution space.

Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01.

References: [1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126.

Relevance: 10.00%

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and it is difficult to make small changes to a solution. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used to map the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step, following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet completed, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions.
In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength; rules are then selected by using the roulette-wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References: 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003), 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1), pp 1-18.
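For concreteness, the LCS-style strength bookkeeping described above might be sketched as follows (the reward value, rule count, and update schedule are my assumptions, not the paper's):

```python
import random

def select_rule(strengths):
    # roulette-wheel selection over current rule strengths
    return random.choices(range(len(strengths)), weights=strengths)[0]

def reinforce(strengths, used_rules, reward=0.1):
    # strengthen only the rules used in the previous solution,
    # leaving the strengths of unused rules unchanged
    for r in set(used_rules):
        strengths[r] += reward

strengths = [1.0, 1.0, 1.0, 1.0]   # constant initial strength per rule
used = [select_rule(strengths) for _ in range(10)]
reinforce(strengths, used)
print(strengths)
```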

Relevance: 10.00%

Abstract:

Consider two graphs G and H. Let H^k[G] be the lexicographic product of H^k and G, where H^k is the lexicographic product of the graph H by itself k times. In this paper, we determine the spectrum of H^k[G] and H^k when G and H are regular, and the Laplacian spectrum of H^k[G] and H^k for G and H arbitrary. Particular emphasis is given to the least eigenvalue of the adjacency matrix in the case of lexicographic powers of regular graphs, and to the algebraic connectivity and the largest Laplacian eigenvalues in the case of lexicographic powers of arbitrary graphs. This approach allows the determination of the spectrum (in the case of regular graphs) and Laplacian spectrum (for arbitrary graphs) of huge graphs. As an example, the spectrum of the lexicographic power of the Petersen graph with a googol (that is, 10^100) of vertices is determined. The paper finishes with the extension of some well-known spectral and combinatorial invariant properties of graphs to their lexicographic powers.
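For background, the classical product rule that such computations iterate (my statement of a standard result, not necessarily the paper's exact formulation): if F is any graph on n vertices with adjacency eigenvalues λ_1, ..., λ_n, and G is an r-regular graph on m vertices with adjacency eigenvalues r = μ_1 ≥ μ_2 ≥ ... ≥ μ_m, then A(F[G]) = A(F) ⊗ J_m + I_n ⊗ A(G), and

```latex
\[
  \operatorname{spec}\bigl(F[G]\bigr)
  = \{\, m\lambda_i + r \;:\; 1 \le i \le n \,\}
  \;\cup\;
  \{\, \mu_j \ \text{with multiplicity } n \;:\; 2 \le j \le m \,\}.
\]
```

Iterating this with F = H^(k-1) and inner factor H (both regular) yields the spectrum of H^k, and one further application yields H^k[G]; this is what makes graphs with a googol of vertices tractable without ever forming the graph.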

Relevance: 10.00%

Abstract:

The object of this paper is to argue once again for the combinatorial account of possibility defended in earlier work (Armstrong, 1989, 1997). But there I failed fully to realise the dialectical advantages that accrue once one begins by assuming the hypothesis of logical atomism, the hypothesis that postulates simple particulars and simple universals (properties and relations) at the bottom of the world. Logical atomism is, I incline to think, no better than ‘speculative cosmology’ as opposed to ‘analytic ontology’, to use Donald Williams’ terminology (Williams, 1966, p.74). It is, however, not an implausible hypothesis given the current state of quantum physics. More important for our purposes here, the strictly combinatorial theory that flows rather naturally from the atomist metaphysics shows some promise of continuing to hold (perhaps with a little mutatis mutandis) in a world that is not an atomist world.

Relevance: 10.00%

Abstract:

Porous polymer particles are used in an extraordinarily wide range of advanced and everyday applications, from combinatorial chemistry, solid-phase organic synthesis and polymer-supported reagents, to environmental analyses and the purification of drinking water. The installation and exploitation of functional chemical handles on the particles is often a prerequisite for their successful exploitation, irrespective of the application and the porous nature of the particles. New methodology for the chemical modification of macroreticular polymers is the primary focus of the work presented in this thesis. Porous polymer microspheres decorated with a diverse range of functional groups were synthesised by the post-polymerisation chemical modification of beaded polymers via olefin cross-metathesis. The polymer microspheres were prepared by the precipitation polymerisation of divinylbenzene in porogenic (pore-forming) solvents; the olefin cross-metathesis (CM) functionalisation reactions exploited the pendent (polymer-bound) vinyl groups that were not consumed by polymerisation. Olefin CM reactions involving the pendent vinyl groups were performed in dichloromethane using second-generation Grubbs catalyst (Grubbs II), and a wide range of coupling partners were used. The results obtained indicate that high-quality, porous polymer microspheres synthesised by precipitation polymerisation in near-θ solvents can be functionalised by olefin CM under very mild conditions to install a diverse range of chemical functionalities into a common polydivinylbenzene precursor. Gel-type polymer microspheres were prepared by the precipitation copolymerisation of divinylbenzene and allyl methacrylate in neat acetonitrile. The unreacted pendent vinyl groups that were not consumed by polymerisation were subjected to internal and external olefin metathesis-based hypercrosslinking reactions. Internal hypercrosslinking was carried out by using ring-closing metathesis (RCM) reactions in toluene using Grubbs II catalyst. Under these conditions, hypercrosslinked (HXL) polymers with specific surface areas around 500 m^2 g^-1 were synthesised. External hypercrosslinking was attempted by using CM/RCM in the presence of a multivinyl coupling partner in toluene using second-generation Hoveyda-Grubbs catalyst. The results obtained indicate that no HXL polymers were obtained. However, during the development of this methodology, a new type of polymerisation was discovered with tetraallylorthosilicate as the monomer.

Relevance: 10.00%

Abstract:

Short sea shipping has several advantages over other means of transportation, recognized by EU members. Maritime transportation can be treated as a combination of two well-known problems: the container stowage problem and the route planning problem. The integration of these two problems results in a new problem, the CSSRP (container stowage and ship routing problem), which is also a hard combinatorial optimization problem. The aim of this work is to solve the CSSRP using a mixed integer programming model. It is shown that, regardless of the complexity of this problem, optimal solutions can be achieved in a reduced computational time. To test the mathematical model, some problems based on real data were generated and a sensitivity analysis was performed.
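As a toy fragment of what such a mixed integer programming model contains (hypothetical data, and the PuLP modelling library as an assumed tool; this is not the paper's CSSRP formulation): each container must be assigned exactly one slot, each slot holds at most one container, and total stowage cost is minimized.

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

containers = ["c1", "c2", "c3"]
slots = ["s1", "s2", "s3", "s4"]
cost = {("c1", "s1"): 2, ("c1", "s2"): 4, ("c1", "s3"): 3, ("c1", "s4"): 1,
        ("c2", "s1"): 3, ("c2", "s2"): 1, ("c2", "s3"): 2, ("c2", "s4"): 5,
        ("c3", "s1"): 4, ("c3", "s2"): 2, ("c3", "s3"): 1, ("c3", "s4"): 3}

prob = LpProblem("toy_stowage", LpMinimize)
x = LpVariable.dicts("x", list(cost), cat=LpBinary)   # x[c,s] = 1: c goes in s
prob += lpSum(cost[k] * x[k] for k in cost)           # total stowage cost
for c in containers:                                  # exactly one slot per container
    prob += lpSum(x[(c, s)] for s in slots) == 1
for s in slots:                                       # at most one container per slot
    prob += lpSum(x[(c, s)] for c in containers) <= 1
prob.solve()
print([k for k in cost if x[k].value() == 1])         # chosen assignments
```

A full CSSRP model would add, on top of such assignment constraints, the routing variables and the constraints coupling stowage positions to the sequence of port calls.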

Relevance: 10.00%

Abstract:

The main objective of this study is to understand and analyse how children in day care and preschool solve mathematical problems, and what can constrain their problem solving. In particular, I sought to analyse the mathematical activity that children develop when confronted with mathematical problems and the challenges they face. Methodologically, the study follows a qualitative research approach within an interpretative paradigm. Moreover, it is an action-research project guided by the question "how can the activity of solving mathematical problems in early childhood education settings be optimised?". In this context, I proposed to four day-care children and 21 preschool children a set of tasks selected to present, potentially, some degree of challenge to them. The main data collection methods were participant observation, document analysis, and a questionnaire administered to the cooperating educators. The study shows that it is possible to engage day-care and preschool children in mathematical problem-solving activity, and that this activity is fostered when the context of the problems is close to their everyday classroom life. While solving the proposed tasks, several mathematical notions were mobilised and worked on. In day care, all children showed knowledge of the topological notions "inside" and "outside", and some successfully used the process of classification according to one criterion, relying on enactive representations. In preschool, all children managed to count the letters of their name in synchrony and to state the number of letters, which indicates knowledge of the notion of cardinality, and to represent this quantity using both numerals and iconic representations. They were also able to interpret a table in order to build a bar graph and to construct a pictogram, which reveals knowledge at the level of statistical literacy. Finally, some children succeeded in discovering problem-solving strategies that allowed them to exhaustively list all the possible solutions and to count these possibilities in an organised way. In the course of this activity, attempts at generalisation emerged, although not always correct, with a marked use of enactive representations, namely the dramatisation of situations. Regarding the challenges encountered, in day care the correct use of the classification process stood out. In preschool, the children had difficulty distinguishing the pictogram legend from the data, solving a problem involving the combinatorial meaning of multiplication, and finding generalisation strategies. The study also suggests that it is essential for the educator to propose diverse and challenging tasks which, always starting from children's curiosity and interest, allow them to work with important mathematical ideas and to represent appropriately the knowledge they deal with.

Relevance: 10.00%

Abstract:

International audience

Relevance: 10.00%

Abstract:

International audience