920 results for Distributed Control Problems


Relevance:

30.00%

Publisher:

Abstract:

We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method for solving a family of ill-posed linear inverse problems. When the observations of the unknown quantity of interest and the observation operators are known, these inverse problems concern the recovery of the unknown from its observations. Although the observation operators we consider are linear, they are inevitably ill-posed in various ways. We recall in this context the classical Tikhonov regularization method with a stabilizing function that targets the specific ill-posedness of the observation operators and preserves desired features of the unknown. Having studied the mechanism of Tikhonov regularization, we propose a multi-scale generalization of it, the so-called Hierarchical Reconstruction (HR) method. The HR method can be traced back to the Hierarchical Decomposition method in image processing. It successively extracts information from the previous hierarchical residual into the current hierarchical term at a finer hierarchical scale. As the sum of all the hierarchical terms, the hierarchical sum provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. Compared to Tikhonov regularization on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer control of the approximation distance between the hierarchical sum and the unknown, thanks to its ladder of finitely many hierarchical scales. We report numerical experiments supporting these claimed advantages of the HR method over Tikhonov regularization.
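A minimal numerical sketch of the idea described above (not the authors' implementation): each hierarchical term is a Tikhonov solve applied to the residual left by the previous terms, with the regularization weight shrinking along a ladder of scales. The plain ℓ² stabilizer, the ladder ratio `q`, and the level count are illustrative assumptions.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Closed-form Tikhonov solution of min ||Au - b||^2 + lam ||u||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def hierarchical_reconstruction(A, b, lam0=1.0, q=0.25, levels=6):
    """Hierarchical sum: each term is a Tikhonov solve on the residual
    left by the previous terms, at a finer scale (smaller lam)."""
    u_sum = np.zeros(A.shape[1])
    lam = lam0
    for _ in range(levels):
        residual = b - A @ u_sum              # hierarchical residual
        u_sum = u_sum + tikhonov(A, residual, lam)
        lam *= q                              # descend the ladder of scales
    return u_sum
```

Because each level corrects the residual of the coarser levels, the hierarchical sum can track the unknown more closely than a single Tikhonov solve with a fixed regularization weight.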

Relevance:

30.00%

Publisher:

Abstract:

There is a growing need to address psychological health and safety in the workplace. Ergonomics tends to be widely recognized for its physical applications, such as “office” and “manual materials handling” ergonomics; however, the other domains of specialization in ergonomics (cognitive and organizational) appear to be less well known. This study evaluates the level of understanding that professionals who practice ergonomics have of the relation between ergonomics and the control of psychosocial hazards in the workplace. A survey distributed to ergonomics practitioners asked about their awareness of the relation between ergonomics and workplace psychosocial hazard control. Ergonomists and human factors specialists demonstrated a greater awareness of this relationship than other allied occupational groups that also practice ergonomics; however, they indicated that there may be difficulties in applying these areas of knowledge in the “real world”. Participants who demonstrated a high level of awareness of the relation between ergonomics and psychosocial hazard control showed stronger organizational commitment than participants with low awareness. Ergonomics practitioners who reported having employer support for professional development also demonstrated a higher degree of awareness of this relation, as did the professionals who had been practicing in the field the longest. This research provides insight for professional associations for ergonomists, employers of ergonomists, and human resource professionals into how ergonomics practitioners perceive the ergonomics field, the profession, and their employing organization.

Relevance:

30.00%

Publisher:

Abstract:

One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use concepts from control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which solve the problems of memory storage and huge matrix inversion faced by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate against the 30 171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program sends and receives information between the model and the DA procedure using files.
The advantage of this method is that the changes needed in the model code are minimal: only a few lines which handle input and output. Apart from being simple to couple, the approach can be employed even if the two codes are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to COHERENS, a multi-purpose hydrodynamic model, to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009. The effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to TSM data sparsity in both time and space, they could not be well matched. The use of multiple automatic stations with real-time data would be important to avoid the time sparsity problem and, with DA, would help in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF and the ensemble size limit for performance point to the emerging area of Reduced Order Modelling (ROM). To save computational resources, ROM avoids running the full-blown model. When ROM is combined with the non-intrusive DA approach, the result may be a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
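A toy sketch of such file-based coupling (the file name, the linear model, and the averaging analysis step are all illustrative stand-ins, not VEnKF or the thesis code): the driver writes the forecast to disk and the DA procedure reads it back, so neither code needs to call the other directly.

```python
import os
import tempfile
import numpy as np

def model_step(state):
    """Stand-in for one forecast step of the numerical model."""
    return 0.9 * state + 1.0

def da_update(forecast, obs):
    """Stand-in for the analysis step (toy: average with observation)."""
    return 0.5 * (forecast + obs)

def run_cycle(state, obs, workdir):
    """One assimilation cycle coupled only through files: the model
    writes its forecast to disk and the DA procedure reads it back."""
    path = os.path.join(workdir, "forecast.txt")
    np.savetxt(path, model_step(state))   # model -> file
    forecast = np.loadtxt(path)           # file -> DA procedure
    return da_update(forecast, obs)

with tempfile.TemporaryDirectory() as d:
    analysis = run_cycle(np.array([1.0, 2.0]), np.array([2.0, 2.0]), d)
```

In a real deployment the two sides would be separate executables, possibly in different languages; the overhead noted above comes from re-initializing both at every cycle.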

Relevance:

30.00%

Publisher:

Abstract:

In this paper we consider the a posteriori and a priori error analysis of discontinuous Galerkin interior penalty methods for second-order partial differential equations with nonnegative characteristic form on anisotropically refined computational meshes. In particular, we discuss the question of error estimation for linear target functionals, such as the outflow flux and the local average of the solution. Based on our a posteriori error bound we design and implement the corresponding adaptive algorithm to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local isotropic and anisotropic mesh refinement. The theoretical results are illustrated by a series of numerical experiments.
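A minimal one-dimensional sketch of the goal-oriented adaptive loop described above (the target functional J(u) = ∫ f dx, the quadrature-difference indicator, and the marking fraction are illustrative stand-ins for the paper's dual-weighted a posteriori bounds): estimate a per-cell contribution to the functional error, refine the worst cells, and stop once the estimated error is below the tolerance.

```python
import numpy as np

def local_indicator(f, a, b):
    """Cheap surrogate for a per-cell error indicator: the difference
    between two quadrature rules (midpoint vs. trapezoid) on the cell."""
    mid = (b - a) * f(0.5 * (a + b))
    trap = 0.5 * (b - a) * (f(a) + f(b))
    return abs(mid - trap)

def adapt(f, a, b, tol, max_iter=30):
    """Refine the cells with the largest indicators (Doerfler-style
    marking) until the estimated error in J(u) = integral of f
    falls below tol."""
    cells = [(a, b)]
    for _ in range(max_iter):
        etas = [local_indicator(f, l, r) for l, r in cells]
        if sum(etas) < tol:
            break
        thresh = 0.5 * max(etas)
        refined = []
        for (l, r), eta in zip(cells, etas):
            if eta >= thresh:                 # bisect the marked cells
                m = 0.5 * (l + r)
                refined += [(l, m), (m, r)]
            else:
                refined.append((l, r))
        cells = refined
    J = sum((r - l) * f(0.5 * (l + r)) for l, r in cells)
    return J, len(cells)

J, n = adapt(np.exp, 0.0, 1.0, 1e-6)
```

The paper's algorithm additionally chooses between isotropic and anisotropic refinement per element; this sketch only shows the estimate-mark-refine control loop.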

Relevance:

30.00%

Publisher:

Abstract:

This paper combines the idea of a hierarchical distributed genetic algorithm with different inter-agent partnering strategies. Cascading clusters of sub-populations are built from the bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence, higher-level sub-populations search a larger search space at a lower resolution, whilst lower-level sub-populations search a smaller search space at a higher resolution. The effects of different partner selection schemes amongst the agents on solution quality are examined for two multiple-choice optimisation problems. It is shown that partnering strategies that exploit problem-specific knowledge are superior and can counter inappropriate (sub-)fitness measurements.
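A toy cooperative-coevolution sketch of partnering (the bit-string problem, the multiplicative coupling, and the two partnering schemes are illustrative, not the paper's multiple-choice problems): each sub-population optimises one half of the solution, and fitness is only defined jointly with a partner drawn from the other sub-population.

```python
import random

def joint_fitness(left, right):
    """Toy decomposable problem: each agent optimises one half of the
    solution; the halves interact multiplicatively, so an individual's
    fitness depends on the partner it is evaluated with."""
    return sum(left) * sum(right)

def evolve(partnering, generations=60, n=20, length=8, seed=1):
    rng = random.Random(seed)
    pops = [[[rng.randint(0, 1) for _ in range(length)] for _ in range(n)]
            for _ in range(2)]
    best = [max(p, key=sum) for p in pops]
    for _ in range(generations):
        for side in (0, 1):
            # partner selection scheme: greedy vs. random partnering
            partner = (best[1 - side] if partnering == "best"
                       else rng.choice(pops[1 - side]))
            scored = sorted(pops[side],
                            key=lambda ind: joint_fitness(ind, partner),
                            reverse=True)
            parents = scored[: n // 2]             # truncation selection
            children = []
            for p in parents:
                child = p[:]
                child[rng.randrange(length)] ^= 1  # point mutation
                children.append(child)
            pops[side] = parents + children
            best[side] = scored[0]
    return joint_fitness(best[0], best[1])
```

The choice of `partner` is the knob the paper studies: a partner that reflects problem-specific knowledge gives each sub-population a more reliable (sub-)fitness signal than an arbitrary one.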

Relevance:

30.00%

Publisher:

Abstract:

The Internet has grown in size at a rapid rate since BGP records began, and continues to do so. This has raised concerns about the scalability of the current BGP routing system, as the routing state at each router in a shortest-path routing protocol grows supra-linearly as the network grows. The concerns are that the memory capacity of routers will not be able to keep up with demand, and that the growth of the Internet will become ever more cramped as more and more of the world seeks the benefits of being connected. Compact routing schemes, in which the routing state grows only sub-linearly relative to the growth of the network, could solve this problem and ensure that router memory would not be a bottleneck to Internet growth. These schemes trade shortest-path routing for scalable memory state by allowing some paths a certain amount of bounded “stretch”. The most promising such scheme is Cowen Routing, which can provide scalable, compact routing state for Internet routing while still providing shortest-path routing to nearly all other nodes, with only slightly stretched paths to a very small subset of the network. Currently, there is no fully distributed form of Cowen Routing that would be practical for the Internet. This dissertation describes a fully distributed and compact protocol for Cowen Routing, using the k-core graph decomposition. Previous compact routing work showed that the k-core graph decomposition is useful for Cowen Routing on the Internet, but no distributed form existed. This dissertation gives a distributed k-core algorithm optimised to be efficient on dynamic graphs, along with proofs of its correctness. The performance and efficiency of this distributed k-core algorithm are evaluated on large Internet AS graphs, with excellent results. The dissertation then goes on to describe a fully distributed and compact Cowen Routing protocol.
The protocol comprises: a landmark selection process for Cowen Routing using the k-core algorithm, with mechanisms to ensure compact state at all times, including at bootstrap; a local cluster routing process, with mechanisms for policy application and control of cluster sizes, again ensuring that state remains compact at all times; and a landmark routing process with a prioritisation mechanism for announcements that ensures compact state at all times.
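As a sketch of the underlying decomposition (a centralized stand-in, not the dissertation's distributed algorithm): the classic peeling procedure assigns each vertex its core number, and in a Cowen-style scheme the highest-core vertices are natural landmark candidates.

```python
from collections import defaultdict

def core_numbers(edges):
    """Peeling: repeatedly remove a minimum-degree vertex; the core
    number of v is the largest minimum degree seen before v is removed."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    remaining = set(adj)
    core, k = {}, 0
    while remaining:
        v = min(remaining, key=deg.get)   # cheapest vertex to peel next
        k = max(k, deg[v])
        core[v] = k
        remaining.remove(v)
        for w in adj[v]:
            if w in remaining:
                deg[w] -= 1
    return core

# landmark candidates for a Cowen-style scheme: the max-core vertices
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
cores = core_numbers(edges)
landmarks = [v for v, c in cores.items() if c == max(cores.values())]
```

The distributed version in the dissertation computes the same core numbers by message passing and keeps them maintained as the graph changes; this sequential form only illustrates what is being computed.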

Relevance:

30.00%

Publisher:

Abstract:

The Matosinhos Refinery is one of Galp Energia's industrial complexes. Its industrial wastewater treatment plant (ETARI), internally designated Unit 7000, comprises four treatment stages: pre-treatment, physico-chemical treatment, biological treatment, and post-treatment. Given their interconnection, the optimization of each stage is essential. The goals of this work were to identify problems and/or improvement opportunities in the pre-treatment, physico-chemical treatment, and post-treatment, and above all to optimize the biological treatment of the ETARI. In the pre-treatment, the separation of oils and sludge was found to be ineffective because the two phases form emulsions. The addition of demulsifying agents was suggested as a solution but proved economically unviable. As an alternative, techniques for treating the generated emulsion were suggested, such as solvent extraction, centrifugation, ultrasound, and microwaves. In the physico-chemical treatment, it was found that the air-in-water saturation unit was controlled based on the operators' visual inspection, which can lead to operating conditions far from the optimum for this treatment. An optimization study of this unit was therefore suggested, to determine the optimal air/solids ratio for this effluent. In addition, coagulant consumption increased by about -- % in the last year, so a feasibility study of electrocoagulation as a replacement for the existing coagulation system was suggested. In the post-treatment, the filter-washing process was identified as the step with optimization potential. A preliminary study concluded that continuously washing one filter per shift improved filter performance. 
It was also found that introducing compressed air into the wash water promotes greater removal of debris from the sand bed; however, this practice appears to affect filter performance negatively. In the biological treatment, problems were identified with the hydraulic retention time of biological treatment II, which showed high variability. Although identified, this problem proved difficult to solve. It was also found that dissolved oxygen was not monitored, so the installation of a dissolved-oxygen probe in a low-turbulence zone of the aeration tank was suggested. Oxygen was found to be distributed homogeneously throughout the aeration tank; an attempt was made to identify the factors influencing this parameter, but given the high variability of the effluent and of the treatment conditions, this was not possible. It was also found that phosphate dosing for biological treatment II was inefficient, since on -- % of days low phosphate levels were observed in the mixed liquor (< - mg/L). It was therefore proposed to replace the current gravity dosing system with a dosing pump. In addition, consumption of this nutrient increased significantly in the last year (about --%), a situation found to be related to an increase in the microbial population over this period. The frequent appearance of sludge at the surface of the secondary clarifiers could be related to sudden increases in conductivity, so storing the effluent in the storm basins in these situations was suggested. Nitrogen removal was found to be practically ineffective, since the conversion of ammoniacal nitrogen into nitrates was very low. 
Bio-augmentation, or converting the activated-sludge system into a two-stage system, was therefore suggested. Finally, the temperature of the effluent entering the ETARI was found to be quite high for biological treatment (approximately --º C), so the installation of a temperature probe in the aeration tank was suggested in order to control the mixed-liquor temperature more effectively. Still regarding the biological treatment, a set of tools was developed aimed at its optimized operation. Several improvement suggestions were presented: using the sludge volume index as an indicator of sludge quality instead of the sludge percentage; a set of flowcharts to guide field operators in troubleshooting; an “operating window” intended as a support guide for operation; and frequent monitoring of the sludge age and of the food-to-microorganism ratio.
Relevance:

30.00%

Publisher:

Abstract:

This article aims to show some of the social changes that took place in Medellín at the end of the nineteenth century and the beginning of the twentieth, a period in which the city underwent a process of urban growth it had not experienced before, prompting the progressive adaptation of new bodies and institutions to meet economic and social demands that were also increasing. Organizing and sanitizing the urban space, in line with the ideas of the city and the citizen then being implemented in Europe, were among the goals set for planning the city while also trying to mitigate the impact of demographic growth, which was producing social problems. Charity houses, asylums, orphanages, hospitals, prisons, patronatos, theaters, and universities, among others, were some of the alternatives devised to prevent these problems, as well as to moralize, entertain, and punish the behaviors that the elites deemed inappropriate.

Relevance:

30.00%

Publisher:

Abstract:

Background: The increased prevalence of foot and ankle pathologies in Rheumatic and Musculoskeletal Diseases (RMDs) is well documented [1]; however, the provision of foot and ankle (F&A) healthcare services for people with RMDs in Europe has not been evaluated. Objectives: To assess the current healthcare systems providing F&A healthcare services for people with RMDs in Europe. Methods: A survey was undertaken to evaluate the current provision of F&A healthcare services for people with RMDs across Europe. A questionnaire was distributed to all 22 country presidents representing HP associations within EULAR. The questionnaire was developed and piloted (in 7 countries) by the EULAR F&A Study Group, and structured to capture the provision and type of F&A services for people with RMDs. Where the HP presidents felt unable to answer specific questions, they were encouraged to consult a colleague who might be better placed to provide the answers. Results: Sixteen questionnaires were completed (Norway, Ireland, Sweden, Hungary, Netherlands, UK, Denmark, Portugal, Italy, Switzerland, Austria, France, Czech Republic, Spain, Belgium, Malta). Of the 16, 13 respondents indicated provision of F&A healthcare services in their country, but only three countries had services specialising in RMD-related F&A problems (Netherlands, UK, Malta). The professions providing the care for patients with RMD-related F&A problems differed depending on the pathology and the country (Table 1). Podiatrists provided care for F&A pain and deformity problems in 11 countries, but provided F&A ulcer care in only 8 countries. Conclusions: Only 3 countries have F&A healthcare services specialised to the needs of people with RMDs. The professions providing the care varied between countries, and also depended on the F&A pathology. Interestingly, F&A healthcare services were also provided by professions that do not solely specialise in F&A care.
Further research is needed to assess the variation of F&A healthcare services between and within European countries, and the impact of various F&A healthcare service designs on care. References: [1] Woodburn, J. & Helliwell, P. Foot problems in rheumatology. Rheumatology 36, 932–934 (1997).

Relevance:

30.00%

Publisher:

Abstract:

International audience

Relevance:

30.00%

Publisher:

Abstract:

Supply chains are ubiquitous in commercial delivery systems. The exchange of goods and services, from different supply points to distinct destinations scattered across a given geographical area, requires the management of stocks and vehicle fleets in order to minimize costs while maintaining good quality of service. Even if the operating conditions remain constant over a given time horizon, managing a supply chain is a very complex task. Its complexity increases exponentially with both the number of network nodes and the dynamic operational changes. Moreover, the management system must be adaptive in order to cope easily with disturbances such as machinery and vehicle breakdowns or changes in demand. This work proposes the use of a model predictive control paradigm to tackle these issues. The simulation results obtained suggest that this strategy allows easy task rescheduling in case of disturbances or anticipated changes in operating conditions. © Springer International Publishing Switzerland 2017
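A minimal receding-horizon sketch of the model predictive control idea (a toy single-node inventory, not the paper's model): at each step the controller enumerates order sequences over a short demand forecast, penalizes deviation from a target stock level plus an ordering cost, applies only the first order, and re-plans at the next step. All costs, horizons, and demand figures are illustrative.

```python
from itertools import product

def mpc_order(stock, demand_forecast, target, order_options=range(6)):
    """Receding-horizon step: enumerate order sequences over the
    forecast horizon, score predicted stock deviation plus ordering
    cost, and return only the first order of the best sequence."""
    best_seq, best_cost = None, float("inf")
    for seq in product(order_options, repeat=len(demand_forecast)):
        s, cost = stock, 0.0
        for order, demand in zip(seq, demand_forecast):
            s = max(s + order - demand, 0)        # stock cannot go negative
            cost += (s - target) ** 2 + 0.1 * order
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq[0]

# closed loop: re-plan every step as new demand information arrives
stock, target = 4, 5
demands = [3, 4, 2, 5]
history = []
for t, d in enumerate(demands):
    order = mpc_order(stock, demands[t:t + 3], target)
    stock = max(stock + order - d, 0)
    history.append(stock)
```

Because the plan is recomputed at every step, a disturbance (say, an unexpected demand spike or a vehicle breakdown altering the forecast) is absorbed at the next re-planning, which is the rescheduling property the abstract refers to.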

Relevance:

30.00%

Publisher:

Abstract:

Every Argo data file submitted by a DAC for distribution on the GDAC has its format and data consistency checked by the Argo FileChecker. Two types of checks are applied:

1. Format checks: ensure the file formats match the Argo standards precisely.
2. Data consistency checks: additional consistency checks performed on a file after it passes the format checks. These do not duplicate any of the quality control checks performed elsewhere; they can be thought of as “sanity checks” to ensure that the data are consistent with each other. They enforce data standards and ensure that certain data values are reasonable and/or consistent with other information in the files. Examples of the “data standard” checks are the “mandatory parameters” defined for meta-data files and the technical parameter names in technical data files.

Files with format or consistency errors are rejected by the GDAC and are not distributed. Less serious problems generate warnings, and the file is still distributed on the GDAC.

Reference Tables and Data Standards: many of the consistency checks involve comparing the data to the published reference tables and data standards. These tables are documented in the User's Manual. (The FileChecker implements “text versions” of these tables.)
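A toy sketch of the two-stage check described above (the field names, the mandatory list, and the checks themselves are illustrative, not the actual Argo reference tables): errors reject the file, while warnings still allow distribution.

```python
def check_profile(record, mandatory=("PLATFORM_NUMBER", "JULD", "PRES")):
    """Toy two-stage checker: format errors reject the file,
    consistency problems only raise warnings."""
    errors, warnings = [], []
    # 1. format checks: mandatory parameters must be present
    for name in mandatory:
        if name not in record:
            errors.append(f"missing mandatory parameter {name}")
    # 2. consistency ("sanity") checks: values must be reasonable
    if "PRES" in record and any(p < 0 for p in record["PRES"]):
        warnings.append("negative pressure value")
    status = "rejected" if errors else "distributed"
    return status, errors, warnings
```

The real FileChecker drives the same decision from the published reference tables; the split into hard errors (reject) and warnings (distribute anyway) is the point being illustrated.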