931 results for many-objective problems
Abstract:
Due to advantages such as flexibility, scalability, and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of a chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and creates hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach for designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level.
The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach in which the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
Abstract:
With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis in meteorology, means the process of interpolating meteorological observations from unevenly distributed locations to a network of regularly spaced grid points. Necessitated by the requirement of numerical weather prediction models to solve the governing finite-difference equations on such a grid lattice, objective analysis is a three-dimensional (or, mostly, two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network, with its separated data-sparse and data-dense areas, four-dimensional analysis has in fact been used intensively for many years. Weather services have thus based their analysis not only on synoptic data at the time of the analysis and on climatology, but also on the fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified also for the conventional observations: we have fairly good coverage of surface observations 8 times a day, and several upper-air stations make radiosonde and radiowind observations 4 times a day. If we use a 3-hour step in the analysis-forecasting cycle instead of the 12 hours applied most often, we may without any difficulty treat all observations as synoptic.
No observation would then be more than 90 minutes off time, and even during strong transient motion the observations would fall within a horizontal mesh of 500 km × 500 km.
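As a minimal illustration of the interpolation step that objective analysis performs, the sketch below maps scattered observations to a single grid point using inverse-distance weights. Operational schemes (successive corrections, optimal interpolation) are considerably more elaborate; the function name and weighting choice are illustrative assumptions, not taken from the text above.

```python
def idw_analysis(obs, grid_point, power=2):
    """Interpolate scattered observations (x, y, value) to one grid
    point with inverse-distance weights -- a minimal stand-in for the
    interpolation schemes used in operational objective analysis."""
    gx, gy = grid_point
    num = den = 0.0
    for x, y, v in obs:
        d2 = (x - gx) ** 2 + (y - gy) ** 2
        if d2 == 0.0:
            return v  # grid point coincides with an observation
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den
```

A grid point midway between two equal-distance observations simply receives their mean, which is the sanity check one would expect of any such weighting.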
Abstract:
In the late seventies, Megiddo proposed a way to use an algorithm for the problem of minimizing a linear function a(0) + a(1)x(1) + ... + a(n)x(n) subject to certain constraints to solve the problem of minimizing a rational function of the form (a(0) + a(1)x(1) + ... + a(n)x(n))/(b(0) + b(1)x(1) + ... + b(n)x(n)) subject to the same set of constraints, assuming that the denominator is always positive. Using a rather strong assumption, Hashizume et al. extended Megiddo's result to include approximation algorithms. Their assumption essentially asks for the existence of good approximation algorithms for optimization problems with possibly negative coefficients in the (linear) objective function, which is rather unusual for most combinatorial problems. In this paper, we present an alternative extension of Megiddo's result for approximations that avoids this issue and applies to a large class of optimization problems. Specifically, we show that, if there is an alpha-approximation for the problem of minimizing a nonnegative linear function subject to constraints satisfying a certain increasing property, then there is an alpha-approximation (respectively, a 1/alpha-approximation) for the problem of minimizing (respectively, maximizing) a nonnegative rational function subject to the same constraints. Our framework applies to covering problems and network design problems, among others.
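The parametric idea behind such reductions can be illustrated with the closely related Dinkelbach iteration: for a parameter lam, solve the linear subproblem min (a·x) - lam·(b·x); the ratio is optimal exactly when that minimum is zero. The sketch below works over an explicitly enumerated finite feasible set, which stands in for the linear-optimization oracle; it is not Megiddo's actual construction, and all names are mine.

```python
from fractions import Fraction

def minimize_ratio(points, a, b, a0=0, b0=0):
    """Minimize (a0 + a.x)/(b0 + b.x) over a finite feasible set via
    Dinkelbach-style iteration, assuming the denominator is positive
    on every feasible point (as in the setting described above)."""
    def lin(x, c, c0):
        return c0 + sum(ci * xi for ci, xi in zip(c, x))

    x = points[0]
    while True:
        lam = Fraction(lin(x, a, a0), lin(x, b, b0))
        # Linear subproblem: here solved by brute-force enumeration.
        best = min(points, key=lambda p: lin(p, a, a0) - lam * lin(p, b, b0))
        if lin(best, a, a0) - lam * lin(best, b, b0) >= 0:
            return x, lam  # no point beats the current ratio
        x = best
```

Exact rational arithmetic keeps the stopping test free of floating-point noise, which matters because the termination condition is an exact comparison with zero.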
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The distribution-network paradigm is currently changing, requiring improved methodologies and tools for network analysis and planning. A relevant issue is analysing the impact of Distributed Generation (DG) penetration in passive networks under different operating scenarios. By studying optimal DG siting and sizing, the planner can identify the network's behaviour in the presence of DG. Many approaches to the optimal DG allocation problem have successfully used multi-objective optimization techniques. This paper therefore contributes to a fundamental stage of multi-objective optimization: finding the set of Pareto-optimal solutions. We propose the application of a Multi-objective Tabu Search and verify that it performs better than the NSGA-II method. © 2009 IEEE.
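The core operation in building the Pareto-optimal set that this abstract refers to is the dominance test between objective vectors. A minimal sketch, assuming all objectives are minimized (the function names are illustrative, not from the paper):

```python
def dominates(u, v):
    """u dominates v (minimization): u is no worse in every
    objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(solutions):
    """Keep only the non-dominated objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

Both NSGA-II and a multi-objective Tabu Search rely on a test of this form to rank or archive candidate solutions; they differ in how new candidates are generated.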
Abstract:
Includes bibliography
Abstract:
Today, health problems are likely to have a complex and multifactorial etiology, whereby psychosocial factors interact with behaviour and bodily responses. Women generally report more health problems than men. The present thesis concerns the development of women's health from a subjective and objective perspective, as related to psychosocial living conditions and physiological stress responses. Both cross-sectional and longitudinal studies were carried out on a representative sample of women. Data analysis was based on a holistic person-oriented approach as well as a variable approach. In Study I, the women's self-reported symptoms and diseases as well as self-rated general health status were compared to physician-rated health problems and ratings of the general health of the women, based on medical examinations. The findings showed that physicians rated twice as many women as having poor health compared to the ratings of the women themselves. Moreover, the symptom "a sense of powerlessness" had the highest predictive power for self-rated general health. Study II investigated individual and structural stability in symptom profiles between adolescence and middle age as related to pubertal timing. There was individual stability in symptom reporting for nearly thirty years, although the effect of pubertal timing on symptom reporting did not extend into middle age. Study III explored the longitudinal and current influence of socioeconomic and psychosocial factors on women's self-reported health. Contemporary factors such as job strain, low income, financial worries, and double exposure in terms of high job strain and heavy domestic responsibilities increased the risk of poor self-reported health in middle-aged women. In Study IV, the association between self-reported symptoms and physiological stress responses was investigated. Results revealed that higher levels of medically unexplained symptoms were related to higher levels of cortisol, cholesterol, and heart rate.
The empirical findings are discussed in relation to existing models of stress and health, such as the demand-control model, the allostatic load model, the biopsychosocial model, and the multiple role hypothesis. It was concluded that women’s health problems could be reduced if their overall life circumstances were improved. The practical implications of this might include a redesign of the labour market giving women more influence and control over their lives, both at and away from work.
Abstract:
The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings such as rounding errors or underflow, therefore they can deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight controlling or nuclear plant management due to the potential catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is in some way the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most of the cases very close to the known exact objective values. We prove the usability of the method we developed by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver.
Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in the running time, especially on the large instances.
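The underlying certification idea, re-checking a floating-point result in exact rational arithmetic only where correctness matters, can be sketched as follows. This is a minimal illustration under assumptions of my own (dense constraint rows in A x <= b form, rational rounding of the candidate point), not the authors' algorithm:

```python
from fractions import Fraction

def certify_feasible(A, b, x_float):
    """Certify feasibility of A x <= b by re-checking a floating-point
    candidate in exact rational arithmetic. A True answer is a proof of
    feasibility; False means 'could not certify', NOT 'infeasible'."""
    # Round the float candidate to nearby exact rationals.
    x = [Fraction(xi).limit_denominator(10**6) for xi in x_float]
    for row, bi in zip(A, b):
        lhs = sum(Fraction(aij) * xj for aij, xj in zip(row, x))
        if lhs > Fraction(bi):  # exact comparison, no rounding error
            return False
    return True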
Abstract:
In many areas of industrial manufacturing, for example in the automotive industry, digital mock-ups are used to support the development of complex machines with computer systems as effectively as possible. Motion-planning algorithms play an important role here in guaranteeing that these digital prototypes can be assembled without collisions. Over recent decades, sampling-based methods have proven particularly successful. They generate a large number of random placements for the object to be installed or removed and use a collision-detection mechanism to check each placement for validity. Collision detection therefore plays an essential role in the design of efficient motion-planning algorithms. One difficulty for this class of planners is posed by so-called "narrow passages", which arise wherever the freedom of movement of the objects being planned is strongly restricted. In such regions it can be hard to find a sufficient number of collision-free samples, and more sophisticated techniques may then be necessary to achieve good performance.
This thesis consists of two parts. In the first part we investigate parallel collision-detection algorithms. Since we target their use in sampling-based motion planners, we consider a setting in which the same two objects are tested for collision in a large number of different placements. We implement and compare several methods that use bounding-volume hierarchies (BVHs) and hierarchical grids as acceleration structures. All of the described methods were parallelized across multiple CPU cores.
In addition, we compare several CUDA kernels for performing BVH-based collision tests on the GPU. Besides different distributions of the work across the parallel GPU threads, we examine the effect of different memory-access patterns on the performance of the resulting algorithms. We further present a number of approximate collision tests based on the described methods; when lower test accuracy is tolerable, a further performance improvement can be achieved.
In the second part of the thesis we describe a parallel, sampling-based motion planner of our own design for highly complex problems with multiple "narrow passages". The method works in two phases. The basic idea is to deliberately tolerate small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner used in phase I is based on so-called Expansive Space Trees. In addition, we equipped the planner with a push-out operation that allows small collisions to be resolved, thereby increasing efficiency in regions with restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests; this further lowers the accuracy of the first planning phase but also yields an additional performance gain. The motion paths resulting from phase I may therefore not be completely collision-free.
To repair these paths, we designed a novel planning algorithm that, restricted locally to a small neighbourhood around the existing path, plans a new, collision-free motion path.
We tested the described algorithm on a class of new, difficult metal puzzles, some of which contain multiple "narrow passages". To the best of our knowledge, no collection of comparably complex benchmarks is publicly available, nor did we find a description of comparably complex benchmarks in the motion-planning literature.
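The sample-and-test loop at the heart of sampling-based planning can be sketched as follows. The collision test is passed in as a black box standing in for the BVH or grid back ends discussed above; all names and the uniform sampling strategy are illustrative assumptions, not the thesis' implementation:

```python
import random

def sample_free_configs(is_colliding, n_samples, bounds, rng=None):
    """Draw random configurations within per-dimension bounds and
    keep those that the collision test accepts -- the basic loop a
    sampling-based planner builds its roadmap or tree from."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    free = []
    for _ in range(n_samples):
        q = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        if not is_colliding(q):
            free.append(q)
    return free
```

The "narrow passage" problem is visible directly in this loop: when the free region is a thin sliver of the bounding box, almost every uniform sample fails the collision test, which motivates the biased sampling and push-out operations described above.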
Abstract:
Rumiana Stoilova (Bulgaria). Social Policy Facing the Problems of Youth Employment. Ms. Stoilova is a researcher in the Institute of Sociology in Sofia and worked on this project from October 1996 to September 1998. This project involved collecting both statistical and empirical data on the state of youth employment in Bulgaria, which was then compared with similar data from other European countries. One significant aspect was the parallel investigation of employment and unemployment, which took as a premise the continuity of professional experience, where unemployment is just a temporary condition caused by external and internal factors. These need to be studied and changed on a systematic basis so as to create a more favourable market situation and to improve individuals' resources for improving their market opportunities. A second important aspect of the project was an analysis of the various entities active on the labour market, including government and private institutions, associations of unemployed persons, of employers, or of trade unions, all with their specific legal powers and interests, and of the problems in communication between these. The major trends in youth unemployment during the period studied include a high proportion of registered unemployed who are not eligible for social assistance, a lengthening of the average period of unemployment, an increase in the percentage of people who are unemployed for the first time, and an increasing percentage of those who are not eligible for assistance, particularly among newly registered young people. At the same time the percentage of those for whom work has been found is rising, and during the last three years an increasing number of the unemployed have started some independent economic activity. Regional differences are also considerable and in the case of the Haskovo region represent a danger of losing the youngest generation, with resulting negative demographic effects.
One major weakness of the existing institutional structure is the large scale of the black labour market, with clear negative implications for the young people drawn into it. The role of non-governmental organisations in providing support and information for the unemployed is growing, and the government has recently introduced special preferences for organisations offering jobs to unemployed persons. Social policy in the labour market has however been largely restricted to passive measures, mostly because of the risk that poverty poses to people continuously excluded from the labour market. Among the active measures taken, well over half are concerned with providing jobs for the unemployed, and there are very limited programmes for providing or improving qualifications. The nature of youth employment in Bulgaria can be seen in the influence of sustained structures (generation) and institutions (family and school). Ms. Stoilova studied the situation of the modern generation through a series of profiles, mostly those of continuously unemployed and self-employed persons, but also distinguishing between students and the unemployed, and between high school and university students. The different categories of young people were studied in separate mini-studies, and the survey was carried out in five towns in order to gather objective and subjective information on the state of the labour market in the different regions. She conducted interviews with several hundred young people covering questions of family background, career plans, attitudes to the labour situation and government measures to deal with it, and such questions as independence, mobility, attitude to work, etc. The interviews with young people unemployed for a long period of time show the risk involved in starting work and its link with the dynamics of economic development.
Their approval of structural reforms, of the financial restrictions connected with the introduction of a currency board, and of the inevitability of unemployment was largely declarative. The findings indicate that the continuously unemployed need practical knowledge and skills to "translate" the macroeconomic realities into concrete alternatives of individual work and initiative. The unemployed experience their exclusion from the labour market not only as a professional problem but also as an existential threat of poverty, forced mobility, and dependence on their parents' generation. The exclusion from the market of goods and services means more than just exercising restraint in their consumption, as it places restrictions on their personal development. Ms. Stoilova suggests that more efficient ways of providing financial aid and mobilisation are needed to counteract the social disintegration and marginalisation of the continuously unemployed. In measuring the speed of reform, university students took both employment opportunities and the implementation of the meritocratic principle in employment into account. When offered a hypothetical choice between a well-paid job and work in one's own profession, 62% would opt for the well-paid job, and would prefer working for a company that offered career opportunities rather than employment in a family firm or their own company. While most see the information gained during their studies as useful and interesting, relatively few see their education as competitive on a wider level, and many were pessimistic about employment opportunities based on their qualifications. Very similar attitudes were found among high school students, with differences being due rather to family and personal situations. The unemployed, on the other hand, placed greater emphasis on the possibilities of gaining or improving qualifications on a job and on the opportunities it would offer for personal contacts.
High school students tend to attribute more significance to opportunities for personal accomplishment. A significant difference was that five times fewer high school students were willing to work for state-owned companies, and many fewer expected to find permanent employment or to find a job in the area where they lived. Within the family situation, actual support for children seems to be higher than the feelings of confidence expressed in interviews. The attitudes of the families towards past experience seem to be linked with their ability to cope with the difficulties of the present, with those families which show an optimistic and active attitude towards the future having a greater respect for parents' experience and tolerance in communication between parents and children.
Abstract:
In this paper I review the ways in which the glassy state is obtained both in nature and in materials science and highlight a "new twist"--the recent recognition of polymorphism within the glassy state. The formation of glass by continuous cooling (viscous slowdown) is then examined, the strong/fragile liquids classification is reviewed, and a new twist--the possibility that the slowdown is a result of an avoided critical point--is noted. The three canonical characteristics of relaxing liquids are correlated through the fragility. As a further new twist, the conversion of strong liquids to fragile liquids by pressure-induced coordination number increases is demonstrated. It is then shown that, for comparable systems, it is possible to have the same conversion accomplished via a first-order transition within the liquid state during quenching. This occurs in the systems in which "polyamorphism" (polymorphism in the glassy state) is observed, and the whole phenomenology is accounted for by Poole's bond-modified van der Waals model. The sudden loss of some liquid degrees of freedom through such weak first-order transitions is then related to the polyamorphic transition between native and denatured hydrated proteins, since the latter are also glass-forming systems--water-plasticized, hydrogen bond-cross-linked chain polymers (and single molecule glass formers). The circle is closed with a final new twist by noting that a short time scale phenomenon much studied by protein physicists--namely, the onset of a sharp change in d
Abstract:
Caption title, Dec. 1934-
Abstract:
Contents: v. 3. The glorious teachings of our holy religion
Abstract:
Includes tables and diagrams.
Abstract:
Mode of access: Internet.