909 results for Differential Inclusions with Constraints
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and much research has been done to solve them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs search an encoded solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms by using ideas derived from the Bayesian Optimization Algorithm (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions.
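To make the 'counting' idea concrete, here is a minimal sketch (not the authors' implementation) of the BOA-style loop described above, under simplifying assumptions: the network is a fixed chain in which the rule used at step i depends only on the rule used at step i-1, all variables are fully observed, and the numbers of steps and candidate rules are placeholders.

```python
import random
from collections import defaultdict

NUM_STEPS = 10   # moves needed to construct one schedule (assumed)
NUM_RULES = 4    # candidate construction rules at each move (assumed)

def learn_model(promising_solutions):
    """Learning as 'counting': estimate P(rule at step i | rule at step i-1)
    from the frequencies observed in a set of promising rule strings."""
    counts = defaultdict(lambda: defaultdict(int))
    for rule_string in promising_solutions:
        prev = None
        for step, rule in enumerate(rule_string):
            counts[(step, prev)][rule] += 1
            prev = rule
    return counts

def sample_rule_string(counts):
    """Generate a new rule string node by node from the learned model."""
    rules, prev = [], None
    for step in range(NUM_STEPS):
        dist = counts.get((step, prev))
        if dist:  # sample a rule in proportion to its observed frequency
            choice = random.choices(list(dist), weights=list(dist.values()))[0]
        else:     # unseen context: fall back to a uniform choice
            choice = random.randrange(NUM_RULES)
        rules.append(choice)
        prev = choice
    return rules
```

Each sampled rule string would then be decoded into a schedule and kept or discarded by fitness selection, after which the counts are re-estimated, as the abstract outlines.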
In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps, sketched in code below. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the Roulette Wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.
References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
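A minimal sketch of the three LCS steps named above; the stage count, rule count, reward size and rule encoding are illustrative assumptions, not values from the abstract.

```python
import random

NUM_STAGES, NUM_RULES = 10, 4        # assumed problem dimensions
INITIAL_STRENGTH, REWARD = 1.0, 0.1  # assumed strength parameters

# Step 1: every rule at every stage starts with the same constant strength.
strength = [[INITIAL_STRENGTH] * NUM_RULES for _ in range(NUM_STAGES)]

def roulette_select(weights):
    """Pick an index with probability proportional to its weight."""
    r = random.uniform(0, sum(weights))
    cum = 0.0
    for i, w in enumerate(weights):
        cum += w
        if r <= cum:
            return i
    return len(weights) - 1

def build_rule_string():
    """Step 2: select one rule per stage by roulette wheel."""
    return [roulette_select(strength[stage]) for stage in range(NUM_STAGES)]

def reinforce(rule_string):
    """Step 3: strengthen the rules used in the previous solution,
    leaving the strengths of unused rules unchanged."""
    for stage, rule in enumerate(rule_string):
        strength[stage][rule] += REWARD
```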
Abstract:
We consider the a posteriori error analysis and hp-adaptation strategies for hp-version interior penalty discontinuous Galerkin methods for second-order partial differential equations with nonnegative characteristic form on anisotropically refined computational meshes with anisotropically enriched elemental polynomial degrees. In particular, we exploit duality-based hp-error estimates for linear target functionals of the solution, and design and implement the corresponding adaptive algorithms to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local isotropic and anisotropic mesh refinement and isotropic and anisotropic polynomial degree enrichment. The superiority of the proposed algorithm over standard hp-isotropic mesh refinement algorithms and an h-anisotropic/p-isotropic adaptive procedure is illustrated by a series of numerical experiments.
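Schematically, duality-based estimates of this kind take the standard goal-oriented (dual-weighted-residual) form; the notation below is generic rather than the paper's own: u_h is the discrete solution, z the solution of a dual problem associated with the target functional J, and B the DG bilinear form.

```latex
% Generic dual-weighted-residual error representation (assumed form):
% the error in the functional splits into elementwise indicators that
% drive the anisotropic hp-refinement decisions.
\[
  J(u) - J(u_h) \;=\; B(u - u_h,\, z)
  \;=\; \sum_{\kappa \in \mathcal{T}_h} \eta_\kappa ,
  \qquad
  |J(u) - J(u_h)| \;\le\; \sum_{\kappa \in \mathcal{T}_h} |\eta_\kappa| .
\]
```

Elements with large |η_κ| are then flagged for isotropic or anisotropic h- or p-enrichment until the estimated error meets the prescribed tolerance.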
Abstract:
The diagnosis of pelvic inflammatory disease is clinical, based on a combination of symptoms and signs that include pelvic pain and fever. Referral to the radiologist occurs in the acute phase of the disease, when differential diagnoses (gynecological, gastrointestinal or urinary) must be excluded, or in patients who had a previous acute episode, sometimes asymptomatic, and who present to their attending physician with complications such as chronic pelvic pain, ectopic pregnancy and infertility. In this context, it is essential that the radiologist recognize the radiological manifestations of the different stages of pelvic inflammatory disease, with particular emphasis on tubo-ovarian abscess, whose radiological features raise a differential diagnosis with ovarian carcinoma.
Abstract:
Some research has investigated low- and higher-level visual processing in neurotypical individuals and in individuals with autism spectrum disorder (ASD). However, the developmental interaction between these levels of visual processing is still not well understood. This thesis therefore has two main objectives. The first objective (Study 1) is to assess the developmental interaction between low- and intermediate-level visual analysis across different developmental periods (school age, adolescence and adulthood). The second objective (Study 2) is to assess the functional relationship between low- and intermediate-level visual processing in adolescents and adults with ASD. Both objectives were evaluated using the same stimuli and procedures. Specifically, sensitivity to complex circular shapes (Radial Frequency patterns, RFPs), defined by luminance or by texture, was measured with a two-alternative forced-choice procedure. The results of the first study showed that the local information of RFPs, which feeds intermediate-level visual processes, affects sensitivity differently across distinct developmental periods. Specifically, when the contour is defined by luminance, children's performance is weaker than that of adolescents and adults for RFPs requiring global perception. When the RFPs are defined by texture, children's sensitivity is weaker than that of adolescents and adults for both the local and the global conditions. Consequently, the type of local information defining the local elements of the global shape influences the period at which visual sensitivity reaches an adult-like developmental level. Weak visual integration between low- and intermediate-level mechanisms may explain the reduced RFP sensitivity in children; this may be attributed to immature feedback and horizontal connections as well as to the underdevelopment of certain cerebral areas of the visual system. The results of the second study demonstrated that visual sensitivity in autism is influenced by the manipulation of local information. Specifically, with luminance-defined stimuli, sensitivity is affected only in the conditions requiring local processing in individuals with ASD. With texture-defined stimuli, however, sensitivity is reduced for both global and local visual processing. These results suggest that shape perception in autism is related to the efficiency with which the local elements (luminance versus texture) are processed. The lateral and feedforward/feedback connections of primary visual areas may be subject to an imbalance between excitatory and inhibitory signals, influencing the efficiency with which luminance and texture information is processed in autism. These results support the hypothesis that alterations in low-level (local) visual perception underlie the higher-level atypicalities observed in individuals with ASD.
Abstract:
One of the application areas of optimization is Biomedical Engineering, since optimization plays a role in the study of prostheses and implants, in tomographic reconstruction and in experimental mechanics, among other applications. The main objective of this project is the creation of a new program for scheduling medical examinations in order to minimize waiting times for those examinations. A brief overview is given of optimization theory, of linear and nonlinear optimization, and of genetic algorithms, which were used in this work. A case study is also presented, formulated as a nonlinear optimization problem with constraints. This study showed that the scheduling of medical examinations can never be 100% optimized, owing to the number of variables involved, some of which cannot be predicted in advance.
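As a hedged illustration of the kind of formulation mentioned above (the project's actual model, variables and penalty weights are not specified in the abstract), a GA fitness for exam scheduling might fold constraint violations into the objective as penalties:

```python
# A sketch, not the project's model: minimise total waiting time, with
# constraint violations penalised. All names and values are illustrative.
PENALTY = 1e6  # large weight so infeasible schedules lose to feasible ones

def fitness(schedule, requests, capacity_per_slot):
    """schedule[i] = time slot assigned to request i;
    requests[i] = slot in which request i arrived."""
    waiting = sum(s - r for s, r in zip(schedule, requests) if s >= r)
    # inequality constraint: no slot may exceed the equipment capacity
    overload = sum(max(0, schedule.count(slot) - capacity_per_slot)
                   for slot in set(schedule))
    # hard constraint: an exam cannot precede its own request
    too_early = sum(1 for s, r in zip(schedule, requests) if s < r)
    return waiting + PENALTY * (overload + too_early)

# e.g. three requests arriving in slots 0, 1, 1 with capacity 1 per slot:
print(fitness([0, 1, 2], [0, 1, 1], 1))   # feasible: total waiting = 1
```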
Abstract:
Malignant neoplasms of the pancreas encompass several histological types with imaging characteristics and behaviors that allow them to be distinguished from one another in a good percentage of cases. However, their differentiation is not always possible without recourse to anatomopathological techniques, and the differential diagnosis with benign lesions, with particular reference to inflammatory masses, remains a major challenge for imaging. Adenocarcinoma accounts for about 95% of pancreatic neoplasms and is one of the major causes of cancer death in developed countries. The imaging techniques applied to the diagnosis and staging of the main pancreatic tumors are very diverse, ranging from harmless means, such as ultrasound, to invasive methods using angiographic techniques or endoscopic access to body cavities. The usefulness of each of these techniques depends essentially on the type of tumor under study, making a correct and complete knowledge of the possibilities and limitations of each one fundamental for the rational application of imaging methods to the pancreas.
Abstract:
Title of dissertation: Magnetic and Acoustic Investigations of Turbulent Spherical Couette Flow. Matthew M. Adams, Doctor of Philosophy, 2016. Dissertation directed by Professor Daniel Lathrop, Department of Physics. This dissertation describes experiments in spherical Couette devices, using both gas and liquid sodium. The experimental geometry is motivated by the Earth's outer core, the seat of the geodynamo, and consists of an outer spherical shell and an inner sphere, both of which can be rotated independently to drive a shear flow in the fluid lying between them. In the experiments with liquid sodium, we apply DC axial magnetic fields, with a dominant dipole or quadrupole component, to the system. We measure the magnetic field induced by the flow of liquid sodium using an external array of Hall effect magnetic field probes, as well as two probes inserted into the fluid volume. This gives information about the velocity patterns possibly present, and we extend previous work categorizing flow states, noting further information that can be extracted from the induced field measurements. The limitations imposed by the lack of direct velocity measurements prompted us to develop the technique of using acoustic modes to measure zonal flows. Using gas as the working fluid in our 60 cm diameter spherical Couette experiment, we identified acoustic modes of the container and obtained excellent agreement with theoretical predictions. For the case of uniform rotation of the system, we compared the acoustic mode frequency splittings with theoretical predictions for solid body flow, again finding excellent agreement. This gave us confidence in extending this work to the case of differential rotation, with a turbulent flow state. Using the measured splittings for this case, our colleagues performed an inversion to infer the pattern of zonal velocities within the flow, the first such inversion in a rotating laboratory experiment. This technique holds promise for use in liquid sodium experiments, for which zonal flow measurements have historically been challenging.
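As a hedged aside on the splitting comparison mentioned above (the generic first-order result for rigidly rotating fluids, not a formula quoted from the dissertation): an acoustic mode with azimuthal wavenumber m, degenerate at frequency f_0 in the non-rotating container, is split by uniform rotation at angular rate Ω approximately as

```latex
% First-order rotational splitting of an acoustic mode (generic, assumed
% form): C is a small mode-dependent correction; for pure advection by
% solid-body rotation, C -> 0.
\[
  f_m \;\approx\; f_0 \;+\; \frac{m\,\Omega}{2\pi}\,\bigl(1 - C\bigr) .
\]
```

Measuring the splittings f_m − f_0 for many modes is what allows the zonal flow profile to be inverted in the differentially rotating case.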
Abstract:
This dissertation consists of four studies examining two constructs related to time orientation in organizations: polychronicity and multitasking. The first study investigates the internal structure of polychronicity and its external correlates in a sample of undergraduate students (N = 732). Results converge to support a one-factor model and find measures of polychronicity to be significantly related to extraversion, agreeableness, and openness to experience. The second study quantitatively reviews the existing research examining the relationship between polychronicity and the Big Five factors of personality. Results reveal a significant relationship of polychronicity with extraversion and openness to experience across studies. Studies three and four examine the usefulness of multitasking ability in the prediction of work-related criteria using two organizational samples (N = 175 and 119, respectively). Multitasking ability demonstrated predictive validity; however, the incremental validity over that of traditional predictors (i.e., cognitive ability and the Big Five factors of personality) was minimal. The relationships between multitasking ability, polychronicity, and other individual differences were also investigated. Polychronicity and multitasking ability proved to be distinct constructs demonstrating differential relationships with cognitive ability, personality, and performance. Results provided support for multitasking performance as a mediator in the relationship between multitasking ability and overall job performance. Additionally, polychronicity moderated the relationship between multitasking ability and both ratings of multitasking performance and overall job performance in Study four. Clarification of the factor structure of polychronicity and its correlates will facilitate future research in the time orientation literature. Results from two organizational samples point to work-related measures of multitasking ability as a worthwhile tool for predicting the performance of job applicants.
Abstract:
Modern System-on-a-Chip (SoC) designs have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can yield significant performance improvement for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is used to accelerate the elements that incur performance overheads. The concepts mentioned in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off between these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow. Hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique. The system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design; the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
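To make the link between hotspot profiling and achievable speedup concrete, a short illustrative calculation using Amdahl's law (my example numbers, not the thesis's measurements):

```python
# Amdahl's law: only the profiled hotspot's share of runtime benefits
# from the hardware accelerator, which bounds the whole-system speedup.
# The hotspot fraction and accelerator speedup below are assumptions.

def overall_speedup(hotspot_fraction: float, accel_speedup: float) -> float:
    """Whole-application speedup when a fraction of runtime is accelerated."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_speedup)

# A hotspot consuming 70% of runtime, accelerated 10x in the FPGA fabric:
print(f"{overall_speedup(0.70, 10.0):.2f}x")   # ~2.70x overall
```

A result of this order is comparable in magnitude to the 2.8X Bus-IP figure reported above, which is why accurate hotspot identification matters.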
Abstract:
Mid-ocean ridge basalt (MORB) samples from the East Pacific Rise (EPR 12°50'N) were analyzed for U-series isotopes and for the compositions of plagioclase-hosted melt inclusions. The Ra-226 and Th-230 excesses are negatively correlated; the Ra-226 excess is positively correlated with Mg# and Sm/Nd and negatively correlated with La/Sm and Fe-8; the Th-230 excess is positively correlated with Fe-8 and La/Sm and negatively correlated with Mg# and Sm/Nd. Interpretation of these correlations is critical for understanding the magmatic process. Two models (the dynamic model and the "two-porosity" model) have been proposed to interpret these correlations; however, some crucial parameters used in these models are not well constrained. We propose instead a model that explains the U-series isotopic compositions through the control exerted by melt density variation. For melting of either peridotite or the "marble-cake" mantle, the FeOt content, Th-230 excess and La/Sm ratio increase, and Sm/Nd decreases, with increasing pressure. A deep melt will evolve to a higher density and lower Mg# than a shallow melt, and thus corresponds to a long residence time, which lowers the Ra-226 excess significantly. This model is supported by the existence of low Ra-226 excesses and high Th-230 excesses in MORBs having high Fe-8 contents and high density. The positive correlation between Ra-226 excess and magma liquidus temperature implies that the shallow melt is cooled less than the deep melt owing to its low density and short residence time. The correlations among Fe-8, Ti-8 and Ca-8/Al-8 in plagioclase-hosted melt inclusions further demonstrate that MORBs are formed from melts with negatively correlated melting depths and degrees. The negative correlation of Ra-226 excess with the chemical diversity index (standard deviation of Fe-8, Ti-8 and Ca-8/Al-8) of the melt inclusions is in accordance with the influence of a density-controlled magma residence time. We conclude that magma density variation exerts significant control on residence time and U-series isotopic compositions.
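The residence-time argument above can be made explicit with the standard U-series decay relation (a textbook result, not an equation quoted from the paper): during crustal storage the Ra-226 excess decays toward secular equilibrium with the ~1600-yr half-life of Ra-226, while the Th-230 excess (75-kyr half-life) is essentially unchanged on these timescales.

```latex
% Standard U-series decay of the Ra-226 excess during magma storage:
% subscripts 0 and t denote initial and post-storage activity ratios,
% and tau is the magma residence time.
\[
  \left(\frac{^{226}\mathrm{Ra}}{^{230}\mathrm{Th}}\right)_{\!t} - 1
  \;=\;
  \left[\left(\frac{^{226}\mathrm{Ra}}{^{230}\mathrm{Th}}\right)_{\!0} - 1\right]
  e^{-\lambda_{226}\,\tau},
  \qquad
  \lambda_{226} = \frac{\ln 2}{1600\ \mathrm{yr}} .
\]
```

A dense, slowly ascending deep melt thus arrives with a measurably smaller Ra-226 excess than a buoyant shallow melt, which is the signature exploited by the proposed model.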
Abstract:
Dynamic economic load dispatch (DELD) is one of the most important steps in power system operation. Various optimisation algorithms for solving the problem have been developed; however, due to the non-convex characteristics and large dimensionality of the problem, it is necessary to explore new methods to further improve the dispatch results and minimise the costs. This article proposes a hybrid differential evolution (DE) algorithm, namely clonal selection-based differential evolution (CSDE), to solve the problem. CSDE is an artificial intelligence technique that can be applied to complex optimisation problems which are, for example, nonlinear, large-scale, non-convex and discontinuous. This hybrid algorithm uses the clonal selection algorithm (CSA) as the local search technique to update the best individual in the population, which enhances the diversity of the solutions and prevents premature convergence in DE. Furthermore, we investigate four mutation operations which are used in CSA as the hyper-mutation operations. Finally, an efficient solution repair method is designed for DELD to satisfy the complicated equality and inequality constraints of the power system and guarantee the feasibility of the solutions. Two benchmark power systems are used to evaluate the performance of the proposed method. The experimental results show that the proposed CSDE/best/1 approach significantly outperforms nine other variants of CSDE and DE, as well as most other published methods, in terms of solution quality and convergence characteristics.
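A minimal sketch of the main ingredients named above: DE/best/1 mutation with binomial crossover, a clonal-selection local search applied to the best individual, and a simple repair toward the power-balance equality constraint. The cost function, bounds and control parameters are placeholders; the article's actual repair method and its four hyper-mutation operators are more elaborate than what is shown here.

```python
import random

F, CR = 0.5, 0.9                      # DE control parameters (assumed)
LOW, HIGH, DEMAND = 10.0, 100.0, 300.0  # unit limits and load (assumed)

def cost(x):                          # placeholder fuel-cost function
    return sum(0.01 * g * g + 2.0 * g for g in x)

def repair(x):
    """Scale outputs toward the demand, then clip to unit limits
    (a simplified stand-in for the article's repair method)."""
    total = sum(x)
    if total <= 0:
        return [DEMAND / len(x)] * len(x)
    x = [g * DEMAND / total for g in x]
    return [min(max(g, LOW), HIGH) for g in x]

def de_best_1(pop, best):
    """One DE/best/1 generation with binomial crossover and repair."""
    new_pop = []
    for target in pop:
        r1, r2 = random.sample(pop, 2)
        trial = [best[j] + F * (r1[j] - r2[j])
                 if random.random() < CR else target[j]
                 for j in range(len(target))]
        trial = repair(trial)
        new_pop.append(trial if cost(trial) < cost(target) else target)
    return new_pop

def clonal_local_search(best, clones=5, sigma=1.0):
    """Clonal selection step: hyper-mutate clones of the best individual
    and keep the fittest (Gaussian perturbation is the variant assumed)."""
    candidates = [repair([g + random.gauss(0, sigma) for g in best])
                  for _ in range(clones)] + [best]
    return min(candidates, key=cost)
```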
Abstract:
We consider exchange economies with a continuum of agents and differential information about finitely many states of nature. It was proved in Einy, Moreno and Shitovitz (2001) that if we allow for free disposal in the market clearing (feasibility) constraints, then an irreducible economy has a competitive (or Walrasian expectations) equilibrium, and moreover, the set of competitive equilibrium allocations coincides with the private core. However, when feasibility is defined with free disposal, competitive equilibrium allocations may not be incentive compatible and contracts may not be enforceable (see e.g. Glycopantis, Muir and Yannelis (2002)). This is the main motivation for considering equilibrium solutions with exact feasibility. We first prove that the results in Einy et al. (2001) are still valid without free disposal. Then we define an incentive compatibility property motivated by the issue of contract execution, and we prove that every Pareto optimal exact feasible allocation is incentive compatible, implying that contracts of competitive or core allocations are enforceable.
Abstract:
The Precambrian crystalline basement of southeast Brazil is affected by many Phanerozoic reactivations of shear zones that developed at the end of the Neoproterozoic, during the Brasiliano orogeny. To associate these reactivations with specific tectonic events, a multidisciplinary study was carried out involving geology, paleostress and structural analysis of faults, combined with apatite fission-track methods, along the northeastern border of the Parana basin in southeast Brazil. The results show that the study area consists of three main tectonic domains, which record different episodes of uplift and reactivation of faults. These faults were brittle in character and resulted in multiple generations of fault products such as pseudotachylytes and ultracataclasites, foliated cataclasites and fault gouges. Based on geological evidence and fission-track data, an uplift of basement rocks and the related tectonic subsidence, with consequent deposition in the Parana basin, were modeled. The reactivations of the basement record successive uplift events during the Phanerozoic, dated via corrected fission-track ages at 387 +/- 50 Ma (Ordovician), 193 +/- 19 Ma (Triassic), 142 +/- 18 Ma (Jurassic), 126 +/- 11 Ma (Early Cretaceous), 89 +/- 10 Ma (Late Cretaceous) and 69 +/- 10 Ma (Late Cretaceous). These results indicate differential uplift of the tectonic domains of the basement units, probably related to Parana basin subsidence. Six major sedimentary units (supersequences), deposited with their bounding unconformities, seem to have a close relationship with the orogenic events during the evolution of southwestern Gondwana.
Abstract:
A well developed theoretical framework is available in which paleofluid properties, such as chemical composition and density, can be reconstructed from fluid inclusions in minerals that have undergone no ductile deformation. The present study extends this framework to encompass fluid inclusions hosted by quartz that has undergone weak ductile deformation following fluid entrapment. Recent experiments have shown that such deformation causes inclusions to become dismembered into clusters of irregularly shaped relict inclusions surrounded by planar arrays of tiny, new-formed (neonate) inclusions. Comparison of the experimental samples with a naturally sheared quartz vein from Grimsel Pass, Aar Massif, Central Alps, Switzerland, reveals striking similarities. This strong concordance justifies applying the experimentally derived rules of fluid inclusion behaviour to nature. Thus, planar arrays of dismembered inclusions defining cleavage planes in quartz may be taken as diagnostic of small amounts of intracrystalline strain. Deformed inclusions preserve their pre-deformation concentration ratios of gases to electrolytes, but their H2O contents typically have changed. Morphologically intact inclusions, in contrast, preserve the pre-deformation composition and density of their originally trapped fluid. The orientation of the maximum principal compressive stress (σ1) at the time of shear deformation can be derived from the pole to the cleavage plane within which the dismembered inclusions are aligned. Finally, the density of neonate inclusions is commensurate with the pressure value of σ1 at the temperature and time of deformation. This last rule offers a means to estimate magnitudes of shear stresses from fluid inclusion studies. Application of this new paleopiezometer approach to the Grimsel vein yields a differential stress (σ1 − σ3) of ~300 MPa at 390 ± 30 °C during late Miocene NNW-SSE orogenic shortening and regional uplift of the Aar Massif. This differential stress resulted in strain-hardening of the quartz at very low total strain (<5%) while nearby shear zones were accommodating significant displacements. Further implementation of these experimentally derived rules should provide new insight into processes of fluid-rock interaction in the ductile regime within the Earth's crust.
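Restated schematically (the notation is mine, not the paper's): with the deformation temperature T_def constrained independently, the measured density of the neonate inclusions fixes an isochore whose pressure at T_def is read as σ1; an independent estimate of σ3 (presumably available, e.g., from morphologically intact inclusions that record the pre-deformation fluid pressure) then yields the differential stress.

```latex
% Schematic paleopiezometer rule (assumed notation): P_iso is the
% isochoric pressure of the neonate-inclusion fluid at the deformation
% temperature.
\[
  \sigma_1 \;\approx\; P_{\mathrm{iso}}\!\left(\rho_{\mathrm{neonate}},\, T_{\mathrm{def}}\right),
  \qquad
  \Delta\sigma \;=\; \sigma_1 - \sigma_3 ,
\]
```

For the Grimsel vein this procedure gives the ~300 MPa differential stress at 390 ± 30 °C quoted above.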