868 results for Lagrangian bounds in optimization problems


Relevance: 100.00%

Abstract:

* This paper is partially supported by the National Science Fund of the Bulgarian Ministry of Education and Science under contract № I–1401/2004 "Interactive Algorithms and Software Systems Supporting Multicriteria Decision Making."

Relevance: 100.00%

Abstract:

ATM network optimization problems, formulated as combinatorial optimization problems, are considered. Several approximate algorithms for solving such problems are developed, and the results of an experimental comparison on a set of problems with random input data are presented.

Relevance: 100.00%

Abstract:

* The research was supported by INTAS 00-397 and 00-626 Projects.

Relevance: 100.00%

Abstract:

2010 Mathematics Subject Classification: 97D40, 97M10, 97M40, 97N60, 97N80, 97R80

Relevance: 100.00%

Abstract:

MSC 2010: 46F30, 46F10

Relevance: 100.00%

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May, 2016

Relevance: 100.00%

Abstract:

In this paper, we focus on the design of bivariate EDAs for discrete optimization problems and propose a new approach named HSMIEC. Because current EDAs spend much time in the statistical learning process when the relationships among the variables are complicated, we employ the Selfish Gene theory (SG) in this approach and introduce a Mutual Information and Entropy based Cluster (MIEC) model to optimize the probability distribution of the virtual population. The model uses a hybrid sampling method that considers both clustering accuracy and clustering diversity, and an incremental learning and resampling scheme is used to optimize the parameters of the variable correlations. On several benchmark problems, our experimental results demonstrate that HSMIEC often performs better than other EDAs such as BMDA, COMIT, MIMIC and ECGA. © 2009 Elsevier B.V. All rights reserved.
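
As a rough illustration of the kind of statistical learning step a bivariate EDA performs, the sketch below estimates pairwise mutual information between binary variables from the current population; a cluster model built on mutual information and entropy, such as MIEC, would start from statistics of this kind. It is a minimal Python sketch with invented names, not an implementation from the paper.

    import numpy as np

    def pairwise_mutual_information(pop):
        """Estimate I(X_i; X_j) for every pair of binary variables from a
        population array of shape (n_individuals, n_vars)."""
        n, d = pop.shape
        p1 = pop.mean(axis=0)                     # P(X_i = 1)
        mi = np.zeros((d, d))
        for i in range(d):
            for j in range(i + 1, d):
                total = 0.0
                for a in (0, 1):
                    for b in (0, 1):
                        p_ab = np.mean((pop[:, i] == a) & (pop[:, j] == b))
                        p_a = p1[i] if a else 1.0 - p1[i]
                        p_b = p1[j] if b else 1.0 - p1[j]
                        if p_ab > 0 and p_a > 0 and p_b > 0:
                            total += p_ab * np.log(p_ab / (p_a * p_b))
                mi[i, j] = mi[j, i] = total
        return mi

    # Example: 200 random 10-bit individuals standing in for a selected population.
    rng = np.random.default_rng(0)
    population = rng.integers(0, 2, size=(200, 10))
    print(pairwise_mutual_information(population).round(3))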

Relevance: 100.00%

Abstract:

Over the past few decades, we have been enjoying tremendous benefits thanks to the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complicated processor architectures. However, the exponentially increased transistor density has directly led to exponentially increased power consumption and dramatically elevated system temperatures, which not only adversely impact the system's cost, performance and reliability, but also increase leakage and thus overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow down the continued evolution of computer technology. Effective power/thermal-aware design techniques are urgently needed at all design abstraction levels, from the circuit level and the logic level to the architectural level and the system level.

In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization.

The novelty of this work is that we integrate cutting-edge research on power and thermal behavior at the circuit and architectural levels into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models. The theoretical study in this work serves as a solid foundation to guide the development of power/thermal-aware scheduling algorithms in practical computing systems.
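
As a loose illustration of the kind of system-level model referred to above, the Python sketch below couples a lumped thermal model with temperature-dependent leakage and iterates to a consistent operating point. All constants and function names are invented for the example and are not taken from the dissertation.

    import math

    def leakage_power(temp_c, vdd, i0=0.5, k=0.02):
        """Toy leakage model: leakage grows exponentially with temperature
        and scales with the supply voltage (illustrative constants only)."""
        return i0 * vdd * math.exp(k * temp_c)

    def steady_state_temperature(p_dynamic, vdd, t_ambient=25.0, r_thermal=0.8,
                                 tol=1e-6, max_iter=1000):
        """Fixed-point iteration on T = T_amb + R_th * (P_dyn + P_leak(T)),
        capturing the leakage/temperature interdependency."""
        temp = t_ambient
        p_total = p_dynamic
        for _ in range(max_iter):
            p_total = p_dynamic + leakage_power(temp, vdd)
            new_temp = t_ambient + r_thermal * p_total
            if abs(new_temp - temp) < tol:
                break
            temp = new_temp
        return temp, p_total

    temp, power = steady_state_temperature(p_dynamic=30.0, vdd=1.0)
    print(f"steady-state temperature ~ {temp:.1f} C, total power ~ {power:.1f} W")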

Relevance: 100.00%

Abstract:

Recent technological developments in the field of experimental quantum annealing have made prototypical annealing optimizers with hundreds of qubits commercially available. The experimental demonstration of a quantum speedup for optimization problems has since become a coveted, albeit elusive, goal. Recent studies have shown that the so-far inconclusive results regarding a quantum enhancement may have been partly due to the benchmark problems used being unsuitable. In particular, these problems had an inherently too simple structure, allowing both traditional resources and quantum annealers to solve them with no special effort. The need has therefore arisen for harder benchmarks that would hopefully possess the discriminative power to separate classical from quantum scaling of performance with problem size. We introduce here a practical technique for the engineering of extremely hard spin-glass Ising-type problem instances that does not require "cherry picking" from large ensembles of randomly generated instances. We accomplish this by treating the generation of hard optimization problems itself as an optimization problem, for which we offer a heuristic algorithm that solves it. We demonstrate the genuine thermal hardness of our generated instances by examining them thermodynamically and analyzing their energy landscapes, as well as by testing the performance of various state-of-the-art algorithms on them. We argue that a proper characterization of the generated instances offers a practical, efficient way to properly benchmark experimental quantum annealers, as well as any other optimization algorithm.
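
A minimal sketch of the idea of treating instance generation itself as an optimization problem is given below in Python: the couplings of an Ising instance are perturbed by local search so as to increase a hardness proxy, here how often a cheap greedy solver fails to reach the best energy it has seen. The proxy, the names and the parameters are all illustrative; this is not the heuristic algorithm of the paper.

    import random

    def ising_energy(spins, couplings):
        """Energy of +/-1 spins under pairwise couplings J[(i, j)]."""
        return sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

    def hardness_score(couplings, n_spins, trials=50, steps=200):
        """Fraction of greedy single-spin-flip runs that get stuck above the
        best energy seen over all trials (higher = 'harder' for this solver)."""
        results = []
        for _ in range(trials):
            s = [random.choice((-1, 1)) for _ in range(n_spins)]
            for _ in range(steps):
                i = random.randrange(n_spins)
                before = ising_energy(s, couplings)
                s[i] = -s[i]
                if ising_energy(s, couplings) > before:
                    s[i] = -s[i]          # reject uphill moves (pure descent)
            results.append(ising_energy(s, couplings))
        best = min(results)
        return sum(r > best for r in results) / trials

    def harden(couplings, n_spins, rounds=20):
        """Local search over the instance itself: keep coupling perturbations
        that do not decrease the hardness proxy."""
        score = hardness_score(couplings, n_spins)
        for _ in range(rounds):
            edge = random.choice(list(couplings))
            old = couplings[edge]
            couplings[edge] = old + random.choice((-1, 1))
            new_score = hardness_score(couplings, n_spins)
            if new_score >= score:
                score = new_score
            else:
                couplings[edge] = old
        return couplings, score

    # Example: random +/-1 couplings on a ring of 12 spins.
    n = 12
    J = {(i, (i + 1) % n): random.choice((-1, 1)) for i in range(n)}
    J, s = harden(J, n)
    print("hardness proxy after local search:", s)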

Relevance: 100.00%

Abstract:

The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.

At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs is different from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high-density I/O ports and interconnects, (5) reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.

The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.

In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.
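
Written out schematically (with illustrative weights; the exact cost function of the dissertation is not reproduced here), such a composite cost has the form $C = w_t\,T_{\text{test}} + w_d\,(N_{\text{TSV}} + N_{\text{bump}})$, where $T_{\text{test}}$ is the test time of the schedule, $N_{\text{TSV}}$ and $N_{\text{bump}}$ count the additional TSVs and micro-bumps introduced for test access, and the weights $w_t$ and $w_d$ set the trade-off between test time and DfT overhead.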

To address the scenario of high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback shift register (LFSR), a multiple-input signature register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
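
For readers unfamiliar with these blocks, the Python sketch below shows what an LFSR and a MISR compute: the first generates pseudo-random test patterns, the second compacts the observed responses into a signature. The polynomial, the widths and the fault-free "wire" example are chosen arbitrarily for illustration and do not describe the proposed BIST architecture.

    def lfsr_patterns(seed, taps, width, count):
        """Fibonacci LFSR: yields `count` pseudo-random `width`-bit patterns;
        `taps` are the bit positions XORed to form the feedback bit."""
        state = seed & ((1 << width) - 1)
        for _ in range(count):
            yield state
            feedback = 0
            for t in taps:
                feedback ^= (state >> t) & 1
            state = ((state << 1) | feedback) & ((1 << width) - 1)

    def misr_signature(responses, taps, width):
        """Multiple-input signature register: folds each response word into
        the running signature using the same feedback structure."""
        sig = 0
        for r in responses:
            feedback = 0
            for t in taps:
                feedback ^= (sig >> t) & 1
            sig = (((sig << 1) | feedback) ^ r) & ((1 << width) - 1)
        return sig

    # Example: 8-bit LFSR patterns; responses equal the stimuli (fault-free wires).
    patterns = list(lfsr_patterns(seed=0xA5, taps=(7, 5, 4, 3), width=8, count=16))
    print(hex(misr_signature(patterns, taps=(7, 5, 4, 3), width=8)))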

In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced. The first solution minimizes the number of input test pins, and the second solution minimizes the number of output test pins. In addition, two subgroup configuration methods are proposed to generate subgroups inside each test group.

Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in the 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed and the shared boundary length between blocks is then calculated. Based on the position relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
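
One simple way to picture the constraint that neighboring blocks must not receive the same stagger value is a greedy, graph-coloring-style assignment, as in the illustrative Python sketch below. This is not the mathematical model or heuristic proposed in the dissertation; block names and the number of stagger values are invented.

    def assign_staggers(blocks, neighbors, n_staggers):
        """Give each block the smallest stagger value not already used by any
        of its neighbors; blocks with more neighbors are handled first."""
        order = sorted(blocks, key=lambda b: len(neighbors.get(b, ())), reverse=True)
        assignment = {}
        for b in order:
            used = {assignment[n] for n in neighbors.get(b, ()) if n in assignment}
            for stagger in range(n_staggers):
                if stagger not in used:
                    assignment[b] = stagger
                    break
            else:
                raise ValueError(f"not enough stagger values for block {b}")
        return assignment

    # Example floorplan: B0 shares power rails with every other block,
    # the remaining blocks form a chain.
    neighbors = {"B0": ["B1", "B2", "B3"], "B1": ["B0", "B2"],
                 "B2": ["B0", "B1", "B3"], "B3": ["B0", "B2"]}
    print(assign_staggers(list(neighbors), neighbors, n_staggers=3))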

In summary, the dissertation targets important design and optimization problems related to the testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experimental results, and a set of test and design-for-test methods to make testing effective and feasible from a cost perspective.

Relevance: 100.00%

Abstract:

In this dissertation, we study the behavior of exciton-polariton quasiparticles in semiconductor microcavities under sourceless and lossless conditions.

First, we simplify the original model by removing the photon dispersion term, effectively turning the PDE system into an ODE system, and we investigate the behavior of the resulting system, including the equilibrium points and the wave functions of the excitons and the photons.

Second, we add the dispersion term for the excitons to the original model and prove that the band of discontinuous solitons now becomes dark solitons.

Third, we employ the Strang splitting method on our system of PDEs and prove first-order and second-order error bounds in the $H^1$ norm and the $L_2$ norm, respectively.
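
For reference, Strang splitting for an evolution equation $\partial_t u = (A + B)\,u$ advances one time step $\Delta t$ as $u^{n+1} = e^{\frac{\Delta t}{2} A}\, e^{\Delta t\, B}\, e^{\frac{\Delta t}{2} A}\, u^{n}$: a half step with one part of the operator, a full step with the other, and another half step with the first. The symmetric arrangement is what gives second-order accuracy in time; the particular splitting of the exciton-photon system used in the dissertation is not reproduced here.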

Using this numerical result, we analyze the stability of the steady-state bright soliton solution. This solution revolves around the $x$-axis as time progresses, and the perturbed soliton also rotates around the $x$-axis, tracking the exact solution closely in amplitude but lagging behind it. Our numerical results show orbital stability but no $L_2$ stability.

Relevance: 100.00%

Abstract:

The following paper is about the possible psychological effects of social circus and our experiences with teaching circus methods in child psychiatry. The paper first tries to place social circus in a wider theoretical frame and searches for its place among psychological methods and therapies. We look at the wider and the more specific psychological constructs that can be affected by social circus, especially the factors that are damaged in children with psychological or psychiatric problems. We examine how the different parts of circus can help with different problems. A further aim is to research the effects of a continuous social circus group and to find its place among psychotherapies.

Relevance: 100.00%

Abstract:

The use of the Design by Analysis (DBA) route is a modern trend in international pressure vessel and piping codes in mechanical engineering. However, to apply DBA to structures under variable mechanical and thermal loads, it is necessary to ensure that the plastic collapse modes, alternate plasticity and incremental collapse (with instantaneous plastic collapse as a particular case), are precluded. The tool available to achieve this is shakedown theory. Unfortunately, practical numerical applications of shakedown theory result in very large nonlinear optimization problems with nonlinear constraints. Precise, robust and efficient algorithms and finite elements to solve this problem in finite dimension are a more recent achievement. However, to solve real problems at an industrial level, it is also necessary to consider more realistic material properties and to carry out 3D analyses. Limited kinematic hardening is a typical property of the usual steels and should be considered in realistic applications. In this paper, a new finite element with internal thermodynamical variables to model kinematic hardening materials is developed and tested. This element is a mixed ten-node tetrahedron, and through an appropriate change of variables it is possible to embed it in a shakedown analysis software developed by Zouain and co-workers for elastic ideally plastic materials, and then use it to perform 3D shakedown analysis in cases with limited kinematic hardening materials.
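
For orientation, the optimization problem involved can be sketched in a standard static (Melan-type) form for the elastic ideally plastic case, with generic notation that is not the paper's: the shakedown factor is $\alpha_{SD} = \max\{\alpha : f(\alpha\,\sigma^{E}(x,t) + \bar\rho(x)) \le 0 \ \forall x, \forall t,\ \bar\rho \ \text{self-equilibrated}\}$, where $\sigma^{E}$ is the elastic stress response to the variable loads, $\bar\rho$ is a time-independent residual stress field and $f$ is the yield function. Discretizing $\bar\rho$ with finite elements and imposing the yield constraint at the vertices of the load domain produces the very large nonlinear, nonlinearly constrained programs mentioned above; limited kinematic hardening enters through additional internal-variable (backstress) terms in the constraints.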

Relevance: 100.00%

Abstract:

In the design or safety assessment of mechanical structures, the use of the Design by Analysis (DBA) route is a modern trend. However, to make it possible to apply DBA to structures under variable loads, two basic failure modes considered by the ASME or European standards must be precluded: alternate plasticity and incremental collapse (with instantaneous plastic collapse as a particular case). Shakedown theory is a tool that allows us to ensure that these kinds of failure are avoided. In practical applications, however, very large nonlinear optimization problems are generated. Because of this, only in recent years has it become possible to obtain algorithms sufficiently accurate, robust and efficient for dealing with this class of problems. In this paper, one of these shakedown algorithms, developed for elastic ideally plastic structures, is enhanced to include limited kinematic hardening, a more realistic material behavior. This is done in the continuous model by using internal thermodynamic variables. A corresponding discrete model is obtained using an axisymmetric mixed finite element with an internal variable. A thick-walled sphere under variable thermal and pressure loads is used as an example to show the importance of considering limited kinematic hardening in shakedown calculations.

Relevance: 100.00%

Abstract:

A gulf has tended to develop between the adoption and usage of information technology by different generations, at the heart of which are different ways of experiencing and relating to the world around us. This research idea is currently being developed following data collection, and feedback is sought on ways forward to enable impact. The research focuses on information technology in the form of multimedia. Multimedia here means 'media' and 'content' that use a combination of different content forms, or electronically integrated communication engaging all or most of the senses (e.g. graphic art, sound, animation and full-motion video presented by way of a computer or other electronic means), mainly through presentational technologies. Although multimedia is not new, some organizations, particularly those in the non-profit sector, do not always have the technical or financial resources to support such systems and consequently may struggle to adopt it and support its usage amongst different generations. However, non-profit organizations are being forced to pay more attention to the way they communicate with markets and the public due to the professionalism of communication everywhere in society.

The case study used for this research is a church circuit comprising 15 churches in the Midlands region of the United Kingdom, selected because of the diverse age groups catered for within this type of non-profit organization. Participants in the study also had a range of skills, experiences and backgrounds, which adds to the diversity of the population studied. Data gathered focused on attitudes and opinions about the adoption and use of multimedia amongst different age groups. 395 questionnaires were distributed, comprising 11 opinion questions and 4 demographic questions. 83% of the questionnaires were returned, representing 35% of the total circuit membership. Three people from each of the following age categories were also interviewed: 1920-1946 (Matures); 1947-1964 (Baby Boomers); 1965-1982 (Generation X); 1983-2004 (Net Generation).

Results of the questionnaire and comments from the interviews were found not to tally with the widespread assumption that the younger generation is attracted by the use of multimedia more than the older generation. The highest proportion of those who said that they gain more from a service enhanced by multimedia came from the Baby Boomers. Comments from interviews suggested that 'we need to embrace multimedia if we are to attract and retain the younger generation' and that 'multimedia often helps children to remain focused and clarifies the objective of the service'. However, because the younger generations' world tends to be dominated by computer technology, the questionnaire showed that they are more likely to have higher standards when it comes to the use of multimedia, for example identifying equipment failures and annoying use of sound more often than the older age groups. In comparison, the Matures age group reported the highest percentage of difficulty with the size of letters, the colour of letters and background, and sound that was not loud enough, which is to be expected.

Since every organization is unique, any type of multimedia adopted and used should be specific to its needs, its stakeholders and the physical building, in order to enhance that uniqueness. Giving thought to whether the type of multimedia is the best method for communicating the message to the particular audience, alongside how technical and financial resources are best used, can assist in accommodating the different age groups that need to be catered for.