919 results for Many-to-many-assignment problem
Abstract:
2010 Mathematics Subject Classification: 94A17.
Abstract:
Purpose: Considering the UK's limited capacity for waste disposal (particularly for hazardous/radiological waste), there is growing focus on waste avoidance and minimisation to lower the volumes of waste being sent to disposal. The hazardous nature of some waste can complicate its management and reduction. To address this problem there was a need for a decision-making methodology to support managers in the nuclear industry as they identify ways to reduce the production of avoidable hazardous waste. The methodology we developed is called Waste And Sourcematter Analysis (WASAN), a methodology that begins the thought process at the pre-waste-creation stage (i.e. Avoid). Design/methodology/approach: The methodology analyses the source of waste, the production of waste inside the facility, the knock-on effects from up/downstream facilities on waste production, and the down-selection of waste minimisation actions/options. WASAN has been applied to case studies with licensees, and this paper reports on one such case study - the management of plastic bags in the Enriched Uranium Residues Recovery Plant (EURRP) at Springfields (UK), where it was used to analyse the generation of radioactive plastic bag waste. Findings: Plastic bags are used in EURRP as a strategy to contain hazard. Double bagging of materials led to the proliferation of these bags as a waste. The paper reports on the philosophy behind WASAN, the application of the methodology to this problem, the results, and views from managers in EURRP. Originality/value: This paper presents WASAN as a novel methodology for analysing the minimisation of avoidable hazardous waste. This addresses an issue that is important to many industries, e.g. where legislation enforces waste minimisation, where waste disposal costs encourage waste avoidance, or where plant design can reduce waste. The paper forms part of the HSE Nuclear Installations Inspectorate's desire to work towards greater openness and transparency in its work and the development of its thinking. © Crown Copyright 2011.
Abstract:
The extreme sensitivity of the mass of the Higgs boson to quantum corrections from high-mass states makes it 'unnaturally' light in the standard model. This 'hierarchy problem' can be solved by symmetries, which predict new particles related, by the symmetry, to standard model fields. The Large Hadron Collider (LHC) can potentially discover these new particles, thereby finding the solution to the hierarchy problem. However, the dynamics of the Higgs boson is also sensitive to this new physics. We show that in many scenarios the Higgs can be a complementary and powerful probe of the hierarchy problem at the LHC and future colliders. If the top-quark partners carry the color charge of the strong nuclear force, the production of Higgs pairs is affected. This effect is tightly correlated with single-Higgs production, implying that only modest enhancements in di-Higgs production occur when the top partners are heavy. However, if the top partners are light, we show that di-Higgs production is a useful complementary probe to single-Higgs production. We verify this result in the context of a simplified supersymmetric model. If the top partners do not carry color charge, their direct production is greatly reduced. Nevertheless, we show that such scenarios can be revealed through Higgs dynamics. We find that many color-neutral frameworks leave observable traces in Higgs couplings, which, in some cases, may be the only way to probe these theories at the LHC. Some realizations of the color-neutral framework also lead to exotic decays of the Higgs with displaced vertices. We show that these decays are so striking that the projected sensitivity of these searches at hadron colliders is comparable to that of searches for colored top partners. Taken together, these three case studies show the efficacy of the Higgs as a probe of naturalness.
Abstract:
Dynamically reconfigurable hardware is a promising technology that combines in the same device both the high performance and the flexibility that many recent applications demand. However, one of its main drawbacks is the reconfiguration overhead, which introduces significant delays in task execution, usually on the order of hundreds of milliseconds, as well as high energy consumption. One of the most powerful ways to tackle this problem is configuration reuse, since reusing a task does not involve any reconfiguration overhead. In this paper we propose a configuration replacement policy for reconfigurable systems that maximizes task reuse in highly dynamic environments. We have integrated this policy in an external task-graph execution manager that applies task prefetch by loading and executing the tasks as soon as possible (ASAP). However, we have also modified this ASAP technique in order to make the replacements more flexible, by taking into account the mobility of the tasks and delaying some of the reconfigurations. In addition, this replacement policy is a hybrid design-time/run-time approach, which performs the bulk of the computations at design time in order to save run-time computations. Our results illustrate that the proposed strategy outperforms other state-of-the-art replacement policies in terms of reuse rates and achieves near-optimal reconfiguration overhead reductions. In addition, by performing the bulk of the computations at design time, we reduce the execution time of the replacement technique by a factor of 10 with respect to an equivalent purely run-time one.
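To make the reuse idea above concrete, here is a minimal sketch (not the paper's actual policy; the slot structure, the design-time next-use estimates, and all names are assumptions) of a victim-selection rule that keeps configurations likely to be reused resident, which is the behaviour the abstract describes:

```python
# Illustrative sketch only: a reuse-aware victim-selection rule for a device
# with a fixed number of configuration slots.  The Slot fields, the
# design-time "next_use" estimates and the tie-breaking are assumptions; the
# paper's actual hybrid design-time/run-time policy is not given in the abstract.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Slot:
    task: Optional[str]   # configuration currently loaded, or None if empty
    next_use: float       # design-time estimate of when this task is needed again
    busy_until: float     # time at which the slot finishes its current execution

def pick_victim(slots: list[Slot], now: float) -> int:
    """Choose the slot whose configuration will be overwritten by the next prefetch.

    Preference order:
      1. an empty, idle slot (nothing useful is evicted);
      2. among idle slots, the configuration whose next estimated use is the
         furthest away, so tasks likely to be reused soon stay resident;
      3. if every slot is busy, the one that becomes free earliest.
    """
    idle = [i for i, s in enumerate(slots) if s.busy_until <= now]
    if idle:
        empty = [i for i in idle if slots[i].task is None]
        if empty:
            return empty[0]
        return max(idle, key=lambda i: slots[i].next_use)
    return min(range(len(slots)), key=lambda i: slots[i].busy_until)
```

A fuller version would also use the task-mobility information mentioned in the abstract, postponing a reconfiguration when the displaced task can still meet its schedule.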
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
OBJECTIVE: To assess, in two groups of patients scheduled for cataract surgery at a university hospital, one with monocular vision (group 1) and one with binocular vision (group 2), their opinions regarding the ocular problem, the quality of their vision, and cataract surgery. METHODS: A cross-sectional, comparative study of consecutive patients was carried out using a structured questionnaire, developed from an exploratory study and administered by interview; visual acuity and the cause of visual loss were also recorded. RESULTS: The sample consisted of 96 subjects in group 1 (50.0% men; 50.0% women, aged 41 to 91 years, mean 69.3 ± 10.4 years) and 110 in group 2 (40.9% men; 59.1% women, aged 40 to 89 years, mean 68.2 ± 10.2 years). Most subjects in both groups had a low level of education. There was no statistically significant difference between the groups with respect to sex (p=0.191), age (p=0.702), or education (p=0.245). In group 1, 95.8% of subjects were not working, compared with 83.6% in group 2 (p=0.005), and 30.4% of group 1 reported being unable to work because of their visual impairment. Visual acuity in the eye scheduled for surgery was below 0.05 in 40.6% (group 1), and in 33.6% (group 2) it was between 0.25 and 0.05. Nearly all subjects in both groups reported difficulty in performing activities of daily living and rated their visual acuity as insufficient; 71.9% of the interviewees in group 1 and 71.6% in group 2 said they knew the cause of their poor vision; of these, 87.1% in group 1 and 83.3% in group 2 attributed their low visual acuity to cataract. CONCLUSION: Subjects in both groups reached cataract surgery with visual acuity lower than ideally indicated; patients with monocular vision had significantly lower visual acuity than those with binocular vision; most interviewees in both groups reported difficulties in performing everyday activities as a consequence of low vision; many subjects in both groups did not know the cause of their visual difficulty or attributed it to a cause other than cataract.
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test whether two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and we use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum, over the space of trees, of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using a Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high-quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson related processes.
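The reduction of the supremum to a max-flow problem is not spelled out in the abstract, but the solver it names is standard; below is a generic Ford-Fulkerson sketch (breadth-first augmenting paths, i.e. the Edmonds-Karp variant) in Python on an arbitrary capacity graph. The specific graph the authors construct from the two samples of context trees is not reproduced here.

```python
# Generic Ford-Fulkerson max-flow on a capacity graph given as {u: {v: capacity}}.
# Meant only to illustrate the solver named in the abstract.
from collections import deque

def max_flow(capacity: dict, source, sink) -> float:
    # residual capacities, initialised from the input graph plus reverse edges
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    flow = 0.0
    while True:
        # breadth-first search for an augmenting path with spare capacity
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow          # no augmenting path left: the flow is maximal

        # find the bottleneck along the path, then push that much flow
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

With breadth-first augmenting paths the running time is polynomial in the numbers of vertices and edges, which is what makes the otherwise exponential-looking supremum tractable.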
Abstract:
The first problem of the Seleucid mathematical cuneiform tablet BM 34 568 calculates the diagonal of a rectangle from its sides without resorting to the Pythagorean rule. For this reason, it has been a source of discussion among specialists ever since its first publication, but so far no consensus in relation to its mathematical meaning has been attained. This paper presents two new interpretations of the scribe's procedure, based on the assumption that he was able to reduce the problem to a standard Mesopotamian question about reciprocal numbers. These new interpretations are then linked to interpretations of the Old Babylonian tablet Plimpton 322 and to the presence of Pythagorean triples in the contexts of Old Babylonian and Hellenistic mathematics. (C) 2007 Elsevier Inc. All rights reserved.
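For readers unfamiliar with the "reciprocal numbers" framing, a standard identity (given here only as background, not as a claim about the scribe's actual steps) shows how the rectangle's width w, length l and diagonal d give rise to a reciprocal pair; this is also the algebraic bridge usually drawn to Plimpton 322.

```latex
% Background identity, not the tablet's procedure: from d^2 = l^2 + w^2,
\[
  \frac{d+l}{w}\cdot\frac{d-l}{w} \;=\; \frac{d^{2}-l^{2}}{w^{2}} \;=\; 1 ,
\]
% so (d+l)/w and (d-l)/w form a reciprocal pair, and their sum and
% difference recover the diagonal and the length:
\[
  \frac{d+l}{w}+\frac{d-l}{w}=\frac{2d}{w},
  \qquad
  \frac{d+l}{w}-\frac{d-l}{w}=\frac{2l}{w}.
\]
```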
Abstract:
Real-time viscosity measurement remains a necessity for highly automated industry. To resolve this problem, many studies have been carried out using an ultrasonic shear-wave reflectance method. This method is based on the determination of the magnitude and phase of the complex reflection coefficient at the solid-liquid interface. Although the magnitude is a stable quantity and its measurement is relatively simple and precise, phase measurement is a difficult task because of its strong temperature dependence. A simplified method that uses only the magnitude of the reflection coefficient, and that is valid in the Newtonian regime, has been proposed by some authors, but the obtained viscosity values do not match conventional viscometry measurements. In this work, a mode-conversion measurement cell was used to measure glycerin viscosity as a function of temperature (15 to 25 degrees C) and corn syrup-water mixtures as a function of concentration (70 to 100 wt% of corn syrup). Tests were carried out at 1 MHz. A novel signal processing technique that calculates the reflection coefficient magnitude over a frequency band, instead of at a single frequency, was studied. The effects of the bandwidth on magnitude and viscosity were analyzed and the results were compared with the values predicted by the Newtonian liquid model. The frequency-band technique improved the magnitude results. The obtained viscosity values came close to those measured by the rotational viscometer, with percentage errors of up to 14%, whereas errors of up to 96% were found for the single-frequency method.
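For context, the Newtonian liquid model referred to above links the reflection coefficient magnitude to viscosity. The sketch below implements that textbook relation and its small-impedance inversion; the numerical values for the buffer-rod shear impedance and the liquid properties are assumptions for illustration, not data from the paper.

```python
# Sketch of the Newtonian-liquid model behind magnitude-only measurements:
# it predicts |R| at the solid-liquid interface from the liquid's viscosity
# and inverts that relation when the liquid's shear impedance is much
# smaller than the solid's.  All numbers below are illustrative assumptions.
import math

def reflection_magnitude(eta, rho_liquid, freq_hz, z_solid):
    """|R| for a shear wave hitting a Newtonian liquid of viscosity eta (Pa.s)."""
    omega = 2 * math.pi * freq_hz
    a = math.sqrt(omega * rho_liquid * eta / 2)   # Re(Z_liquid) = Im(Z_liquid)
    z_liquid = complex(a, a)
    return abs((z_liquid - z_solid) / (z_liquid + z_solid))

def viscosity_from_magnitude(r_mag, rho_liquid, freq_hz, z_solid):
    """Approximate inversion, valid when |Z_liquid| << Z_solid."""
    omega = 2 * math.pi * freq_hz
    return (2 * z_solid**2 / (omega * rho_liquid)) * ((1 - r_mag) / (1 + r_mag))**2

# Round-trip check with assumed values: a glycerin-like liquid at 1 MHz.
rho, eta, f = 1260.0, 1.0, 1.0e6    # kg/m^3, Pa.s, Hz (assumed)
z_rod = 8.0e6                       # solid shear impedance, kg/(m^2 s) (assumed)
r = reflection_magnitude(eta, rho, f, z_rod)
print(r, viscosity_from_magnitude(r, rho, f, z_rod))
```

The round-trip check recovers the assumed viscosity to within the accuracy of the small-impedance approximation, which is the regime in which the magnitude-only method is used.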
Abstract:
It has been suggested that phased atomic decay in a squeezed vacuum could be detected in the fluorescence spectrum emitted from a driven two-level atom in a cavity. Recently, the existence of other very distinctive features in the fluorescence spectra, arising from the nonclassical features of the squeezed vacuum, has been reported. In this paper, we investigate the possibility of experimental observation of these spectra. The main obstacle for the experimentalist is ensuring an effective squeezed-vacuum-atom coupling. To overcome this problem we propose the use of a Fabry-Perot microcavity. The analysis involves a consideration of the three-dimensional nature of the electromagnetic field, and the possibility of a mismatch between the squeezed and cavity modes. The problem of squeezing bandwidths is also addressed. We show that under experimentally realistic circumstances many of the spectral anomalies predicted in free space also occur in this environment. In addition, we report large population inversions in the dressed states of the two-level atom. [S1050-2947(98)02301-4].
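For orientation, the phased (quadrature-dependent) decay mentioned above is usually summarized by the standard broadband squeezed-vacuum damping rates for an atom of natural linewidth gamma, squeezed-vacuum photon number N and squeezing parameter M; the block below states them in one common phase convention, and which dipole quadrature carries the +|M| sign depends on that convention.

```latex
% Standard broadband-squeezed-vacuum Bloch damping rates (one common convention):
\[
  \gamma_x = \gamma\!\left(N + \tfrac{1}{2} - |M|\right), \qquad
  \gamma_y = \gamma\!\left(N + \tfrac{1}{2} + |M|\right), \qquad
  \gamma_z = \gamma\,(2N + 1),
\]
% with |M| \le \sqrt{N(N+1)} for a physical squeezed vacuum, so one dipole
% quadrature decays more slowly than it would in the ordinary vacuum.
```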
Abstract:
This is the first in a series of three articles which aim to derive the matrix elements of the U(2n) generators in a multishell spin-orbit basis. This is a basis appropriate to many-electron systems which have a natural partitioning of the orbital space and where spin-dependent terms are also included in the Hamiltonian. The method is based on a new spin-dependent unitary group approach to the many-electron correlation problem due to Gould and Paldus [M. D. Gould and J. Paldus, J. Chem. Phys. 92, 7394 (1990)]. In this approach, the matrix elements of the U(2n) generators in the U(n) x U(2)-adapted electronic Gelfand basis are determined by the matrix elements of a single U(n) adjoint tensor operator called the del-operator, denoted by Δ^i_j (1 ≤ i, j ≤ n). Δ, or del, is a polynomial of degree two in the U(n) matrix E = [E^i_j]. The approach of Gould and Paldus is based on the transformation properties of the U(2n) generators as an adjoint tensor operator of U(n) x U(2) and on application of the Wigner-Eckart theorem. Hence, to generalize this approach, we need to obtain formulas for the complete set of adjoint coupling coefficients for the two-shell composite Gelfand-Paldus basis. The nonzero-shift coefficients are uniquely determined and may be evaluated by the methods of Gould et al. [see the above reference]. In this article, we define zero-shift adjoint coupling coefficients for the two-shell composite Gelfand-Paldus basis which are appropriate to the many-electron problem. By definition, these are proportional to the corresponding two-shell del-operator matrix elements, and it is shown that the Racah factorization lemma applies. Formulas for these coefficients are then obtained by application of the Racah factorization lemma. The zero-shift adjoint reduced Wigner coefficients required for this procedure are evaluated first. All these coefficients are needed later for the multishell case, which leads directly to the two-shell del-operator matrix elements. Finally, we discuss an application to charge and spin densities in a two-shell molecular system. (C) 1998 John Wiley & Sons.
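As a reminder of the structure being exploited, the block below states the Wigner-Eckart theorem in its familiar angular-momentum (SU(2)) form, for orientation only; the paper applies the analogous factorization to adjoint tensor operators of U(n) x U(2), with the adjoint coupling coefficients of the abstract playing the role of the Clebsch-Gordan factor.

```latex
% Familiar SU(2) form of the Wigner-Eckart theorem (orientation only):
\[
  \langle \alpha'\, j'\, m' \,|\, T^{k}_{q} \,|\, \alpha\, j\, m \rangle
  \;=\;
  \langle j\, m;\, k\, q \,|\, j'\, m' \rangle \,
  \langle \alpha'\, j' \,\|\, T^{k} \,\|\, \alpha\, j \rangle ,
\]
% i.e. all m-dependence sits in the coupling (Clebsch-Gordan) coefficient,
% while the reduced matrix element carries the dynamics.
```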
Abstract:
This is the second in a series of articles whose ultimate goal is the evaluation of the matrix elements (MEs) of the U(2n) generators in a multishell spin-orbit basis. This extends the existing unitary group approach to spin-dependent configuration interaction (CI) and many-body perturbation theory calculations on molecules to systems where there is a natural partitioning of the electronic orbital space. As a necessary preliminary to obtaining the U(2n) generator MEs in a multishell spin-orbit basis, we must obtain a complete set of adjoint coupling coefficients for the two-shell composite Gelfand-Paldus basis. The zero-shift coefficients were obtained in the first article of the series. In this article, we evaluate the nonzero-shift adjoint coupling coefficients for the two-shell composite Gelfand-Paldus basis. We then demonstrate that the one-shell versions of these coefficients may be obtained by taking the Gelfand-Tsetlin limit of the two-shell formulas. These coefficients, together with the zero-shift types, then enable us to write down formulas for the U(2n) generator matrix elements in a two-shell spin-orbit basis. Ultimately, the results of the series may be used to determine the many-electron density matrices for a partitioned system. (C) 1998 John Wiley & Sons, Inc.
Abstract:
This is the third and final article in a series directed toward the evaluation of the U(2n) generator matrix elements (MEs) in a multishell spin-orbit basis. Such a basis is required for many-electron systems possessing a partitioned orbital space and where spin dependence is important. The approach taken is based on the transformation properties of the U(2n) generators as an adjoint tensor operator of U(n) x U(2) and application of the Wigner-Eckart theorem. A complete set of adjoint coupling coefficients for the two-shell composite Gelfand-Paldus basis (which is appropriate to the many-electron problem) was obtained in the first and second articles of this series. In the first article we defined zero-shift coupling coefficients. These are proportional to the corresponding two-shell del-operator matrix elements. See P. J. Burton and M. D. Gould, J. Chem. Phys., 104, 5112 (1996), for a discussion of the del-operator and its properties. In the second article of the series, the nonzero-shift coupling coefficients were derived. Having obtained all the necessary coefficients, we now apply the formalism developed above to obtain the U(2n) generator MEs in a multishell spin-orbit basis. The methods used are based on the work of Gould et al. (see the above reference). (C) 1998 John Wiley & Sons, Inc.
Abstract:
At the end of World War II, Soviet occupation forces removed countless art objects from German soil. Some of them were returned during the 1950s, but most either disappeared for good or were stored away secretly in the cellars of Soviet museums. The Cold War then covered the issue with silence. After the collapse of the Soviet Union, museums in St Petersburg and Moscow started to exhibit some of the relocated art for the first time in half a century. The unusual quality of the paintings - mostly impressionist masterpieces - not only attracted the attention of the international art community, but also triggered a diplomatic row between Russia and Germany. Both governments advanced moral and legal claims to ownership. To make things even more complicated, many of the paintings once belonged to private collectors, some of whom were Jews. Their descendants also entered the dispute. The basic premise of this article is that the political and ethical dimensions of relocated art can be understood most adequately by eschewing a single authorial standpoint. Various positions, sometimes incommensurable ones, are thus explored in an attempt to outline possibilities for an ethics of representation and a dialogical solution to the international problem that relocated art has become.
Abstract:
Common sense tells us that the future is an essential element in any strategy. In addition, there is a good deal of literature on scenario planning, which is an important tool for considering the future in terms of strategy. However, in many organizations there is serious resistance to the development of scenarios, and they are not broadly implemented by companies. Yet even organizations that do not rely heavily on the development of scenarios do, in fact, construct visions to guide their strategies. What happens, then, when this vision is not consistent with the future? To address this problem, the present article proposes a method for checking the content and consistency of an organization's vision of the future, no matter how it was conceived. The proposed method is grounded on theoretical concepts from the field of future studies, which are described in this article. This study was motivated by the search for new ways of improving and using scenario techniques as a method for making strategic decisions. The method was then tested on a company in the field of information technology in order to check its operational feasibility. The test showed that the proposed method is, in fact, operationally feasible and was capable of analyzing the vision of the company being studied, indicating both its shortcomings and points of inconsistency. (C) 2007 Elsevier Ltd. All rights reserved.