885 results for Design problems
Abstract:
One of the major problems in the mass production of sugpo is how to obtain a constant supply of fry. Since ultimately it is the private sector which should produce the sugpo fry to fill the needs of the industry, the Barangay Hatchery Project under the Prawn Program of the Aquaculture Department of SEAFDEC has scaled down the hatchery technology from large tanks to a level which can be adopted by the private sector, especially in the villages, with a minimum of financial and technical inputs. This guide to small-scale hatchery operations is expected to generate more enthusiasm among fish farmers interested in venturing into sugpo culture.
Abstract:
A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.
The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure - an optimization problem.
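As a rough illustration of how preference functions and a combination rule might score a candidate design, the following Python sketch maps two performance parameters to satisfaction levels and combines them into one measure. The linear preference shapes, the geometric-mean combination rule, and the parameter names (cost, deflection) are assumptions made for this example, not the formulation used in the paper.

```python
def preference(value, best, worst):
    """Map a performance parameter to a [0, 1] satisfaction level.

    Returns 1.0 at or better than `best`, 0.0 at or worse than `worst`,
    and a linear ramp in between (a "soft" form of the design criterion).
    """
    if best < worst:  # smaller is better (e.g. cost, deflection)
        t = (worst - value) / (worst - best)
    else:             # larger is better
        t = (value - worst) / (best - worst)
    return max(0.0, min(1.0, t))

def overall_measure(prefs):
    """Combine individual criterion measures into one overall score.

    A geometric mean is used here as an example combination rule:
    any criterion that is fully violated drives the score to zero.
    """
    product = 1.0
    for p in prefs:
        product *= p
    return product ** (1.0 / len(prefs))

# Hypothetical performance parameters for one candidate truss design.
cost = 1.8e4        # construction cost
deflection = 11.0   # peak deflection under design load (mm)

score = overall_measure([
    preference(cost, best=1.0e4, worst=3.0e4),
    preference(deflection, best=5.0, worst=20.0),
])
print(f"overall evaluation measure: {score:.3f}")
```

In a search loop, a genetic algorithm would simply seek the design parameters that maximize this overall measure.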
Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration power needed to search high-dimensional design spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.
Abstract:
A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them are feasible in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The approaches developed so far, such as the outer product rule, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times; no guarantee of working on all inputs; and a requirement for full connectivity.
Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look at three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
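For intuition about what a Winner-Take-All circuit computes, the sketch below simulates a classic mutual-inhibition (MAXNET-style) network in Python. The update rule, the inhibition coefficient, and the example inputs are illustrative assumptions; this is not one of the specific feedback circuits or the Hopfield electronic model analyzed in the thesis.

```python
import numpy as np

def winner_take_all(inputs, epsilon=0.2, max_iters=100):
    """Classic mutual-inhibition (MAXNET-style) Winner-Take-All iteration.

    Each unit repeatedly subtracts a fraction of every other unit's
    activation; with epsilon < 1/(N-1), only the unit with the largest
    initial activation remains nonzero.
    """
    x = np.asarray(inputs, dtype=float)
    for _ in range(max_iters):
        inhibition = epsilon * (x.sum() - x)        # input from all other units
        x = np.maximum(0.0, x - inhibition)         # threshold at zero
        if np.count_nonzero(x) <= 1:                # a single winner survives
            break
    return x

activations = winner_take_all([0.30, 0.55, 0.50, 0.10])
print("final activations:", np.round(activations, 3))
print("winner index:", int(np.argmax(activations)))
```

The same selection behaviour, applied per resource (for example per output port of a switch), is what makes such circuits useful for routing and arbitration problems.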
Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.
Abstract:
This work concerns itself with the possibility of solutions, both cooperative and market based, to pollution abatement problems. In particular, we are interested in pollutant emissions in Southern California and possible solutions to the abatement problems enumerated in the 1990 Clean Air Act. A tradable pollution permit program has been implemented to reduce emissions, creating property rights associated with various pollutants.
Before we discuss the performance of market-based solutions to LA's pollution woes, we consider the existence of cooperative solutions. In Chapter 2, we examine pollutant emissions as a transboundary public bad. We show that for a class of environments in which pollution moves in a bi-directional, acyclic manner, there exists a sustainable coalition structure and associated levels of emissions. We do so via a new core concept, one more appropriate to modeling cooperative emissions agreements (and potential defection from them) than the standard definitions.
However, this leaves the question of implementing pollution abatement programs unanswered. While the existence of a cost-effective permit market equilibrium has long been understood, the implementation of such programs has been difficult. The design of Los Angeles' REgional CLean Air Incentives Market (RECLAIM) alleviated some of the implementation problems, and in part exacerbated them. For example, it created two overlapping cycles of permits and two zones of permits for different geographic regions. While these design features create a market that allows some measure of regulatory control, they establish a very difficult trading environment with the potential for inefficiency arising from the transactions costs enumerated above and the illiquidity induced by the myriad assets and relatively few participants in this market.
It was with these concerns in mind that the ACE market (Automated Credit Exchange) was designed. The ACE market utilizes an iterated combined-value call market (CV Market). Before discussing the performance of the RECLAIM program in general and the ACE mechanism in particular, we test experimentally whether a portfolio trading mechanism can overcome market illiquidity. Chapter 3 experimentally demonstrates the ability of a portfolio trading mechanism to overcome portfolio rebalancing problems, thereby inducing sufficient liquidity for markets to fully equilibrate.
With experimental evidence in hand, we consider the CV Market's performance in the real world. We find that as the allocation of permits falls to the level of historical emissions, prices rise. As of April of this year, prices are roughly equal to the cost of the Best Available Control Technology (BACT). This took longer than expected, due both to tendencies to misreport emissions under the old regime and to abatement technology advances encouraged by the program. We also find that the ACE market provides liquidity where needed to encourage long-term planning on behalf of polluting facilities.
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and perhaps not surprisingly makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable to solve, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller such as the placement of actuators, sensors, and the communication links between them can no longer be taken as given -- indeed the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
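To make the low-rank versus full-rank intuition behind the final contribution concrete, here is a minimal numpy sketch of singular value thresholding, the proximal step at the heart of nuclear norm minimization, applied to a toy matrix built from a low-rank "global" component plus a small full-rank "local" perturbation. The toy data, dimensions, and threshold are assumptions for illustration; this is not the identification algorithm proposed in the thesis.

```python
import numpy as np

def singular_value_threshold(M, tau):
    """Proximal operator of the nuclear norm: soft-threshold singular values.

    Solves  argmin_X  0.5 * ||X - M||_F^2 + tau * ||X||_*  in closed form.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thresholded = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_thresholded) @ Vt

rng = np.random.default_rng(0)

# Toy "global" component: large but low-rank (rank 2).
low_rank = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
# Toy "local" component: small-magnitude, full-rank perturbation.
local = 0.05 * rng.standard_normal((30, 30))

observed = low_rank + local
estimate = singular_value_threshold(observed, tau=1.0)

print("rank of estimate:", np.linalg.matrix_rank(estimate, tol=1e-6))
print("relative error vs low-rank part:",
      np.linalg.norm(estimate - low_rank) / np.linalg.norm(low_rank))
```

The thresholding recovers the low-rank structure while suppressing the full-rank perturbation, which loosely mirrors the separation of global from local dynamics described above.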
Abstract:
Many British rivers hold stocks of salmon (Salmo salar L.) and sea trout (Salmo trutta L.), and during most of the year some of the adult fish migrate upstream to the headwaters where, with the advent of winter, they will eventually spawn. For a variety of reasons, including the generation of power for milling, improving navigation and measuring water flow, man has put obstacles in the way of migratory fish which have added to those already provided by nature in the shape of rapids and waterfalls. While both salmon and sea trout, particularly the former, are capable of spectacular leaps, the movement of fish over man-made and natural obstacles can be helped, or even made possible, by the judicious use of fish passes. These are designed to give the fish an easier route over or round an obstacle by allowing it to overcome the water head difference in a series of stages ('pool and traverse' fish pass) or by reducing the water velocity in a sloping channel (Denil fish pass). Salmon and sea trout make their spawning runs at different flow conditions, salmon preferring much higher water flows than sea trout. Hence the design of fish passes requires an understanding of the swimming ability of fish (speed and endurance) and the effect of water temperature on this ability. The unique features of each site must also be appreciated to enable the pass to be positioned so that its entrance is readily located. As well as salmon and sea trout, rivers often have stocks of coarse fish and eels. Coarse fish migrations are generally local in character and, although some obstructions such as weirs may allow downstream passage only, they do not cause a significant problem. Eels, like salmon and sea trout, travel both up and down river during the course of their life histories. However, the climbing power of elvers is legendary and it is not normally necessary to offer them help, while adult silver eels migrate at times of high water flow when downstream movement is comparatively easy; for these reasons neither coarse fish nor eels are considered further. The provision of fish passes is, in many instances, mandatory under the Salmon and Freshwater Fisheries Act 1975. This report is intended for those involved in the planning, siting, construction and operation of fish passes and is written to clarify the hydraulic problems for the biologist and the biological problems for the engineer. It is also intended to explain the criteria by which the design of an individual pass is assessed for Ministerial Approval.
Abstract:
This paper shows how computational techniques have been used to develop axi-symmetric, straight, sonic-line, minimum-length micro nozzles suitable for laser micro-machining applications. Gas jets are used during laser micro-machining to shield the interaction zone between the laser and the workpiece material, and they determine the machining efficiency of such applications. The paper discusses the nature of laser-material interactions and the importance of using computational fluid dynamics to model pressure distributions in the short nozzles used to deliver gas to the laser-material interaction zone. Experimental results are presented that highlight unique problems associated with laser micro-machining using gas jets.
Abstract:
Almost all material selection problems require that a compromise be sought between some metric of performance and cost. Trade-off methods using utility functions allow optimal solutions to be found for two objectives, but for three the problem is harder. This paper develops and demonstrates a method for dealing with three objectives.
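As a hedged illustration of the two-objective case that the paper extends to three, the sketch below scores candidate materials with a penalty function that combines mass and cost through an exchange constant. The material data, the exchange-constant values, and the linear penalty form are invented for this example and are not taken from the paper.

```python
# Minimal two-objective trade-off sketch: choose the material that minimizes
# a penalty Z = cost + alpha * mass, where alpha (an "exchange constant")
# expresses how much extra cost is acceptable per kilogram saved.
# All numbers below are illustrative, not real material data.

candidates = {
    #  name           (mass_kg, cost_usd)
    "alloy A":        (10.0,    400.0),
    "composite B":    (6.0,     900.0),
    "polymer C":      (14.0,    150.0),
}

def penalty(mass_kg, cost_usd, alpha_usd_per_kg):
    """Scalarize the two objectives into a single penalty to be minimized."""
    return cost_usd + alpha_usd_per_kg * mass_kg

for alpha in (10.0, 100.0, 250.0):   # weak, moderate, strong value on mass saving
    best = min(candidates, key=lambda name: penalty(*candidates[name], alpha))
    print(f"alpha = {alpha:>6.1f} USD/kg  ->  best choice: {best}")
```

Sweeping the exchange constant traces out the trade-off between the two objectives; with a third objective, a single constant no longer suffices, which is the difficulty the paper addresses.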
Abstract:
Complaints about the existing road signage in Rio de Janeiro are common. The city is still Brazil's main gateway and visitors' preferred destination. In the near future, Rio will host major international sporting events, and there is concern about how the city will be able to provide orientation for the tourists who will travel through it. This study seeks to identify the actual reasons behind the constant complaints about the installed signage and to derive from them guidelines that can be applied to the design of signage systems that effectively solve users' wayfinding problems. To that end, the study first maps the historical context in which traffic signage has evolved; it then examines what the development of signage projects in general entails, before focusing on traffic signage issues. Two surveys were carried out. The first consisted of individual structured interviews with taxi drivers, who are intensive users of the roads and of the signage; the sample comprised 19 taxi drivers who work around Praça Santos Dumont, an important traffic junction located in the Zona Sul of the city. The second involved five professional designers with prior experience in developing signage system projects. With this group, the technique used was the Think Aloud Protocol: each participant was accompanied and documented while driving and navigating by the signage along a route from the aforementioned Praça Santos Dumont to the Estádio do Maracanã, in the Zona Norte of the city, a typical destination during sporting events. The results of the two surveys were analyzed, and from them guidelines were drawn that are presented in the Conclusions; these guidelines aim at system effectiveness through clear messages, legible text, timely placement and formal consistency, so that users can recognize and understand the signage.
Abstract:
The causal relationship between design methods and the usability of communication and information products was the subject of this study, which sought to identify the state of the art regarding a design process that yields greater usability on the web. Based on that identification, the study assessed which improvements could be adopted in the interface development processes used by a specific team at the Fundação Oswaldo Cruz (CTIC - Fiocruz). A design method was understood to need to stay current with knowledge from fields such as Ergonomics, Human-Computer Interaction and Interaction Design. To that end, the study adopted the hypothesis that the design process should combine three aspects: a) significant user involvement throughout the process; b) the use of successive iterations to shape the product; and c) a minimal combination of techniques tied to the specific goals of each phase of a User-Centered Design approach. To contribute to the development of methods and techniques that improve usability, the study describes the characteristics of the methods reported in the literature and practiced by professionals outside the Fundação Oswaldo Cruz (Fiocruz). From that information, the study moved to its second specific objective: identifying improvements in the methods and techniques applicable to the case of CTIC Fiocruz. Combining the literature review with field research produced information about the types of method flow, the types of user involvement, the number and severity of usability problems observed by the professionals, and the validity of a general method baseline for different products. A first round of interviews was conducted to better understand the context of the hypothesis and the relationship between its variables. A second round identified the characteristics of the design process used at CTIC. Based on this information, two techniques were applied with external professionals: an online questionnaire was used to gather very specific, mostly quantitative information, and an online card sorting exercise presented a project case in which the professionals indicated which techniques they would use under two different scenarios, one more favorable and one restrictive. The analysis showed that most professionals believe usability problems are a consequence of the absence of certain approaches and techniques; for that reason, these professionals combine iterative flows with significant user involvement in the process. Improvements were suggested for the method used at CTIC, synthesized as a User-Centered Design process that starts from the traditional, performance-based concept of usability. As soon as possible, this process should be refined to include the concept of user experience, which also considers emotional and hedonomic aspects of interaction.
Multiobjective pressurized water reactor reload core design by nondominated genetic algorithm search
Abstract:
The design of pressurized water reactor reload cores is not only a formidable optimization problem but also, in many instances, a multiobjective problem. A genetic algorithm (GA) designed to perform true multiobjective optimization on such problems is described. Genetic algorithms simulate natural evolution. They differ from most optimization techniques by searching from one group of solutions to another, rather than from one solution to another. New solutions are generated by breeding from existing solutions. By selecting better (in a multiobjective sense) solutions as parents more often, the population can be evolved to reveal the trade-off surface between the competing objectives. An example illustrating the effectiveness of this novel method is presented and analyzed. It is found that in solving a reload design problem the algorithm evaluates a similar number of loading patterns to other state-of-the-art methods, but in the process reveals much more information about the nature of the problem being solved. The actual computational cost incurred depends on the core simulator used; the GA itself is code independent.
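To illustrate what "better in a multiobjective sense" means when selecting parents, the following Python sketch ranks candidate loading patterns by Pareto dominance for two objectives. The objective values and the choice of objectives (a power-peaking factor and negated cycle length, both minimized) are invented for illustration and do not come from the paper's reload design problem.

```python
# Hedged sketch: Pareto (nondominated) ranking of candidate solutions for two
# objectives that are both minimized, e.g. a power-peaking factor and the
# negative of cycle length. All values are invented for illustration.

candidates = {
    "pattern 1": (1.45, -520.0),
    "pattern 2": (1.38, -505.0),
    "pattern 3": (1.52, -530.0),
    "pattern 4": (1.40, -500.0),
}

def dominates(a, b):
    """True if objective vector `a` is at least as good as `b` everywhere
    and strictly better somewhere (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

nondominated = [
    name for name, objs in candidates.items()
    if not any(dominates(other, objs)
               for other_name, other in candidates.items()
               if other_name != name)
]
print("nondominated loading patterns:", nondominated)
```

Patterns that survive this test form the current approximation of the trade-off surface; a nondominated GA preferentially breeds from them.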
Abstract:
Many aerospace companies are currently making the transition to providing fully-integrated product-service offerings in which their products are designed from the outset with life-cycle considerations in mind. Based on a case study at Rolls-Royce, Civil Aerospace, this paper demonstrates how an interactive approach to process simulation can be used to support the redesign of existing design processes in order to incorporate life-cycle engineering (LCE) considerations. The case study provides insights into the problems of redesigning the conceptual stages of a complex, concurrent engineering design process and the practical value of process simulation as a tool to support the specification of process changes in the context of engineering design. The paper also illustrates how development of a simulation model can provide significant benefit to companies through the understanding of process behaviour that is gained through validating the behaviour of the model using different design and iteration scenarios. Keywords: jet engine design; life-cycle engineering; LCE; process change; design process simulation; applied signposting model; ASM.
Abstract:
Design work involves uncertainty that arises from, and influences, the progressive development of solutions. This paper analyses the influences of evolving uncertainty levels on the design process. We focus on uncertainties associated with choosing the values of design parameters, and do not consider in detail the issues that arise when parameters must first be identified. Aspects of uncertainty and its evolution are discussed, and a new task-based model is introduced to describe process behaviour in terms of changing uncertainty levels. The model is applied to study two process configuration problems based on aircraft wing design: one using an analytical solution and one using Monte-Carlo simulation. The applications show that modelling uncertainty levels during design can help assess management policies, such as how many concepts should be considered during design and to what level of accuracy.
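As a loose illustration of the kind of Monte-Carlo analysis mentioned above, the sketch below simulates how a parameter's uncertainty level might shrink over successive design tasks and estimates how many tasks are needed to reach a target accuracy. The geometric-reduction model, the reduction factors, and the threshold are assumptions made for this example, not the task-based model developed in the paper.

```python
import random

def tasks_to_target(initial_uncertainty=1.0, target=0.05,
                    mean_reduction=0.6, spread=0.2, max_tasks=50):
    """Simulate one design process: each task multiplies the uncertainty
    level by a random factor; count tasks until the target is reached."""
    uncertainty = initial_uncertainty
    for task in range(1, max_tasks + 1):
        factor = random.uniform(mean_reduction - spread, mean_reduction + spread)
        uncertainty *= factor
        if uncertainty <= target:
            return task
    return max_tasks

def monte_carlo(runs=10_000, seed=1):
    """Average the task count over many simulated design processes."""
    random.seed(seed)
    counts = [tasks_to_target() for _ in range(runs)]
    return sum(counts) / runs

print(f"expected number of tasks to reach target accuracy: {monte_carlo():.2f}")
```

Repeating such runs under different policies (for example, more concepts carried in parallel or a stricter accuracy target) is one way to compare management options quantitatively.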
Abstract:
Purpose: Advocates and critics of target-setting in the workplace seem unable to reach beyond their own well-entrenched battle lines. While the advocates of goal-directed behaviour point to what they see as demonstrable advantages, the critics of target-setting highlight equally demonstrable disadvantages. Indeed, the academic literature on this topic is currently mired in controversy, with neither side seemingly capable of envisaging a better way forward. This paper seeks to break the current deadlock and move thinking forward in this important aspect of performance measurement and management by outlining a new, more fruitful approach, based on both theory and practical experience.
Design/methodology/approach: The topic was approached in three phases: assembling and reading key academic and other literature on the subject of target-setting and goal-directed behaviour, with a view to understanding, in depth, the arguments advanced by the advocates and critics of target-setting; comparing these published arguments with the authors' own experiential findings, in order to bring the essence of disagreement into much sharper focus; and then bringing academic and practical experience to bear to identify the essential elements of a new, more fruitful approach offering all the benefits of goal-directed behaviour with none of the typical disadvantages of target-setting.
Findings: The research led to three key findings: the advocates of goal-directed behaviour and critics of target-setting each make valid points, as seen from their own current perspectives; the likelihood of these two communities, left to themselves, ever reaching a new synthesis seems vanishingly small (with leading thinkers in the goal-directed behaviour community already acknowledging this); and the three authors discovered that their unusual combination of academic study and practical experience enabled them to see things differently. Hence, they would like to share their new thinking more widely.
Research limitations/implications: The authors fully accept that their paper is informed by extensive practical experience and, as yet, there have been no opportunities to test their findings, conclusions and recommendations through rigorous academic research. However, they hope that the paper will move thinking forward in this arena, thereby informing future academic research.
Practical implications: The authors hope that the practical implications of the paper will be significant, as it outlines a novel way for organisations to capture the benefits of goal-directed behaviour with none of the disadvantages typically associated with target-setting.
Social implications: Given that increased efficiency and effectiveness in the management of organisations would be good for society, the authors think the paper has interesting social implications.
Originality/value: Leading thinkers in the field of goal-directed behaviour, such as Locke and Latham, and leading critics of target-setting, such as Ordóñez et al., continue to argue with one another - much like, at the turn of the nineteenth century, proponents of the "wave theory of light" and proponents of the "particle theory of light" were similarly at loggerheads. Just as that furious scientific debate was ultimately resolved by Taylor's experiment, showing that light could behave both as a particle and a wave at the same time, the authors believe that the paper demonstrates that goal-directed behaviour and target-setting can successfully co-exist.
Abstract:
Life is full of difficult choices. Everyone has their own way of dealing with these, some effective, some not. The problem is particularly acute in engineering design because of the vast amount of information designers have to process. This paper deals with a subset of this set of problems: the subset of selecting materials and processes, and their links to the design of products. Even these, though, present many of the generic problems of choice, and the challenges in creating tools to assist the designer in making them. The key elements are those of classification, of indexing, of reaching decisions using incomplete data in many different formats, and of devising effective strategies for selection. This final element - that of selection strategies - poses particular challenges. Product design, as an example, is an intricate blend of the technical and (for want of a better word) the aesthetic. To meet these needs, a tool that allows selection by analysis, by analogy, by association and simply by 'browsing' is necessary. An example of such a tool, its successes and remaining challenges, will be described.