962 results for Optimal network configuration


Relevance:

30.00%

Publisher:

Abstract:

In this paper, a novel dual-band, single-circular-polarization antenna feeding network for satellite communications is presented. The antenna feed chain is composed of two elements or subsystems, namely a diplexer and a bi-phase polarizer. In comparison with the classic topology based on an orthomode transducer and a dual-band polarizer, the proposed feed chain presents several advantages, such as compactness, modular design of the different components, broadband operation, and versatility in the interconnection of the subsystems. The design procedure of this new antenna feed configuration is explained, and different examples of antenna feeding networks at 20/30 GHz are presented. Excellent results are obtained in terms of isolation and axial ratio.
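For reference, the axial ratio quoted above is the standard figure of merit for circular polarization purity; its conventional definition (a textbook formula, not specific to this paper) is

\mathrm{AR} = 20 \log_{10}\!\left( \frac{|E_{\text{major}}|}{|E_{\text{minor}}|} \right) \ \text{dB}

where E_major and E_minor are the major- and minor-axis components of the polarization ellipse, with AR = 0 dB corresponding to perfect circular polarization.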

Relevance:

30.00%

Publisher:

Abstract:

Dynamic and Partial Reconfiguration (DPR) allows a system to modify certain parts of itself at run-time. This feature gives rise to the capability of evolution: changing parts of the configuration according to the online evaluation of performance or other parameters. The evolution is achieved through a bio-inspired model in which the features of the system are identified as genes. The objective of the evolution need not be a single one; in this work, power consumption is taken into consideration together with the quality of filtering of a noisy image as the measures of performance. Pareto optimality is applied to the evolutionary process in order to find a representative set of solutions that are optimal with respect to both performance and power consumption. The main contributions of this paper are: implementing an evolvable system on a low-power Spartan-6 FPGA included in a Wireless Sensor Network node and, by making a real measure of power consumption available at run-time, achieving the capability of multi-objective evolution, which yields different optimal configurations; the one selected depends on the relative “weights” given to performance and power consumption.
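To illustrate the Pareto-optimality step described above: given candidate configurations scored on two objectives to be minimized (here, hypothetical filtering-error and power figures, not the authors' data), the non-dominated set can be extracted with a few lines of Python:

def pareto_front(candidates):
    """Return the non-dominated subset of (error, power) pairs.

    Both objectives are minimized; a candidate is dominated if another
    candidate is no worse in both objectives and strictly better in one.
    """
    front = []
    for c in candidates:
        dominated = any(
            o[0] <= c[0] and o[1] <= c[1] and o != c
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

# Hypothetical (error, power-in-mW) scores for evolved configurations:
configs = [(0.12, 85.0), (0.09, 110.0), (0.12, 90.0), (0.20, 60.0)]
print(pareto_front(configs))  # -> [(0.12, 85.0), (0.09, 110.0), (0.20, 60.0)]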

Relevance:

30.00%

Publisher:

Abstract:

Currently, dam operators commonly manage reservoirs during flood events using simulation models, mainly because of their ease of use in real time. Optimization models for reservoir management have been developed that improve on the results of simulation models; however, their real-time application is very difficult or simply unworkable, because the discharge decision depends on knowing the future flood entering the reservoir. For this reason, the goal of this work is to develop a flood-management model for reservoirs that incorporates the advantages of an optimization model while remaining easy for the dam manager to use in real time. To this end, a Bayesian network model was built that represents the processes of the watershed and the reservoir and learns from cases generated synthetically by a lumped hydrological model and an optimization model of reservoir management. In a first stage, a large number of synthetic flood episodes was generated, using the Monte Carlo method to obtain the rainfall and a lumped rainfall-runoff transformation model to obtain the flood hydrographs. The resulting series were then used as input signals to the reservoir management model PLEM, which optimizes a cost objective function using mixed-integer linear programming, generating an equal number of optimal events of discharged flow and of reservoir-level evolution. The simulated episodes were used to train and evaluate two Bayesian network models: one that forecasts the inflow to the reservoir and another that predicts the discharged flow, both over a time horizon of one to five hours, in one-hour intervals. For the hydrological Bayesian network, the chosen inflow is the mean of the forecast probability distribution. For the hydraulic Bayesian network, because this process is markedly nonlinear and the network returns a range of possible discharge values, a methodology was developed to select a single value that eases the dam operator's work. This methodology tests a number of proposed strategies, which combine zonings with alternative ways of selecting a single discharge value in each zone, on a sufficiently large set of synthetic episodes. The results of each strategy were compared with the MEV method, and the strategies that improve on MEV, in terms of maximum discharged flow and maximum reservoir level, were selected; any of them can be used in real time by the dam operator for the studied reservoir (Talave). The proposed methodology could be applied to any isolated reservoir to obtain, for that particular reservoir, strategies that improve on the results of MEV. Finally, as an example, the methodology was applied to a synthetic flood, obtaining the discharged flow and the reservoir level at each time interval, and the MIGEL model was applied to obtain, at each instant, the gate-opening configuration of the outlet works that evacuates the flow.
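The zoning-based single-value selection can be pictured with a minimal sketch like the following (purely illustrative; the zone boundaries, percentile choices and variable names are hypothetical, not taken from the work):

import numpy as np

# Hypothetical reservoir-level zones (m) mapped to the statistic used to
# collapse the Bayesian network's forecast distribution to one discharge.
ZONES = [
    (0.0, 420.0, 50),    # normal operation: take the median forecast
    (420.0, 425.0, 75),  # alert zone: a more conservative (higher) percentile
    (425.0, 1e9, 90),    # emergency zone: near-upper-bound discharge
]

def select_discharge(level_m, discharge_samples):
    """Collapse a forecast discharge distribution to a single value.

    discharge_samples: samples (m3/s) drawn from the Bayesian network's
    predictive distribution for the next interval.
    """
    for low, high, pct in ZONES:
        if low <= level_m < high:
            return np.percentile(discharge_samples, pct)
    raise ValueError("level outside all zones")

# Example: at 423 m, with a spread-out forecast, pick the 75th percentile.
samples = np.random.default_rng(0).normal(200.0, 30.0, size=1000)
print(round(select_discharge(423.0, samples), 1))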

Relevance:

30.00%

Publisher:

Abstract:

Throughout this document, the reader will find a suitable network design and solution for the Ypres Rally Championship that meets all the requirements set by the rally organization. These requirements raised several problems with respect to networking standards, mainly because the area where the boxes are located is quite large; the technologies used to solve those problems are detailed in the project. Different designs have been included, each of them based on distinct characteristics, such as efficiency, performance and, most importantly, since the rally organization is non-profit, the budget. Nevertheless, the use of long-lasting devices, such as Cisco devices, was not dismissed despite their price. Furthermore, a configuration of the routing/switching devices is explained for those who will be asked to implement this solution. The solution is designed to supply Internet access as well as video streaming to all boxes, so that the teams can follow the championship live. The maximum connection of the Internet service provider (ISP) is 160 Mbps, and this bandwidth has to be distributed among the boxes dynamically. Finally, to ensure that the network works correctly, it has to be monitored; this is achieved using network analysis tools, of which Wireshark was chosen for this project.
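The dynamic sharing of the 160 Mbps uplink among the boxes can be illustrated with a simple max-min fair-share computation (an illustrative sketch only; the actual allocation mechanism and the box demands below are hypothetical):

def max_min_share(capacity_mbps, demands_mbps):
    """Max-min fair allocation of a shared uplink.

    Boxes demanding less than the current fair share keep their demand;
    the leftover is split evenly among the still-unsatisfied boxes.
    """
    remaining = capacity_mbps
    alloc = {}
    pending = dict(demands_mbps)
    while pending:
        share = remaining / len(pending)
        satisfied = {b: d for b, d in pending.items() if d <= share}
        if not satisfied:
            # No one fits under the fair share: split evenly and stop.
            alloc.update({b: share for b in pending})
            break
        for b, d in satisfied.items():
            alloc[b] = d
            remaining -= d
            del pending[b]
    return alloc

# Hypothetical demands of four boxes against the 160 Mbps ISP link:
print(max_min_share(160, {"box1": 20, "box2": 40, "box3": 80, "box4": 90}))
# -> box1: 20, box2: 40, box3: 50, box4: 50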

Relevance:

30.00%

Publisher:

Abstract:

Pervasive computing is extending from field-specific environments to everyday use; the Internet of Things (IoT) is the most prominent example of its application and of its intrinsic complexity compared with classical application development. The main characteristic that differentiates pervasive computing from other forms of computing lies in the use of contextual information. Classical applications either do not use contextual information at all or use only a small part of it, integrated in an ad hoc fashion through an application-specific implementation. The reason for this one-off treatment is the difficulty of sharing context across applications. As a matter of fact, the application type determines what counts as contextual information: for instance, for an image editor, the image is the information and its metadata, such as the time of the shot or the camera settings, are the context, whereas for a file system the image together with the camera settings is the information, and the context is the metadata external to the file, such as the modification date or the last-access timestamp. This means that contextual information is hard to share, and a communication middleware that supports context explicitly eases application development for pervasive computing. At the same time, the use of context should not be mandatory; otherwise compatibility with non-context-aware applications would be lost and the communication middleware would be reduced to a context middleware. SilboPS, our implementation of a content-based publish/subscribe system inspired by SIENA [11, 9], solves this problem by extending the paradigm with two new elements: the context and the context function. The context represents the actual contextual information attached to the message to be sent, or that required by the subscriber in order to receive notifications, whereas the context function is evaluated using the publisher's and the subscriber's contexts to decide whether the current message and context are useful for the subscriber. In this way, the context-management logic is decoupled from the context-function logic, increasing the flexibility of communication across different applications. Indeed, since the default context is empty, context-aware and classical applications can use the same SilboPS, resolving the syntactic mismatch between the two categories. In any case, a possible semantic mismatch remains, because it depends on how each application interprets the data and cannot be resolved by an agnostic third party. The IoT environment brings not only context challenges but scalability challenges too. The number of sensors, the volume of data they produce, and the number of applications that could be interested in harvesting such data are growing all the time. Today's answer to this need is cloud computing, but it requires applications not only to be able to scale, but to do so elastically [22]. Unfortunately, there is no distributed-system slicing primitive that supports internal state partitioning [33] together with hot swapping, and current cloud systems such as OpenStack or OpenNebula do not provide elastic monitoring out of the box. This means there is a two-sided problem: 1) how to scale an application elastically, and 2) how to monitor the application in order to know when it should scale in or out. E-SilboPS is the elastic version of SilboPS. It fits naturally as a solution to the monitoring problem thanks to its content-based publish/subscribe nature and, unlike other solutions [5], it scales efficiently so as to meet workload demand without overprovisioning or underprovisioning. Additionally, it is based on a newly designed algorithm that shows how to add elasticity to an application under different state constraints: stateless, isolated stateful with external coordination, and shared stateful with general coordination. Its evaluation shows that remarkable speedups can be achieved, with the network layer as the main limiting factor: the calculated efficiency (see Figure 5.8) shows how each configuration performs with respect to the adjacent ones. This provides insight into the current trend of the whole system, in order to predict whether the next configuration would offset its cost with the resulting gain in notification throughput. Particular attention has been paid to the evaluation of same-cost deployments, in order to find out which one is best for a given workload. Finally, the overhead introduced by the different configurations has been estimated to identify the primary factor limiting throughput. This helps to determine the intrinsic sequential part and base overhead [26] of an optimal versus a suboptimal deployment. Depending on the type of workload, this estimate can be as low as 10% at a local optimum or as high as 60% when a configuration that is overprovisioned for the workload is deployed. This Karp-Flatt metric estimation is important for the management system because it indicates the direction (scale in or out) in which the deployment has to be changed to improve its performance, instead of simply applying a scale-out policy.
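For reference, the Karp-Flatt metric mentioned above estimates the experimentally determined serial fraction e of a parallel computation from the measured speedup ψ on p processors:

e = \frac{1/\psi - 1/p}{1 - 1/p}

An e that stays flat as p grows points to an intrinsic sequential part, whereas an e that grows with p indicates that parallel overhead dominates; this is what lets the management system decide whether scaling out will still pay off.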

Relevance:

30.00%

Publisher:

Abstract:

Planning a goal-directed sequence of behavior is a higher function of the human brain that relies on the integrity of prefrontal cortical areas. In the Tower of London test, a puzzle in which beads sliding on pegs must be moved to match a designated goal configuration, patients with lesioned prefrontal cortex show deficits in planning a goal-directed sequence of moves. We propose a neuronal network model of sequence planning that passes this test and, when lesioned, fails in a way that mimics prefrontal patients’ behavior. Our model comprises a descending planning system with hierarchically organized plan, operation, and gesture levels, and an ascending evaluative system that analyzes the problem and computes internal reward signals that index the correct/erroneous status of the plan. Multiple parallel pathways connecting the evaluative and planning systems amend the plan and adapt it to the current problem. The model illustrates how specialized hierarchically organized neuronal assemblies may collectively emulate central executive or supervisory functions of the human brain.
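To make the planning task concrete (this is just a classical state-space search over a simplified encoding of the puzzle, not the neuronal model proposed in the paper), a breadth-first search recovers minimal move sequences:

from collections import deque

CAP = (3, 2, 1)  # peg capacities in the classical Tower of London

def moves(state):
    """Yield all states reachable by moving one top bead to another peg."""
    for i, src in enumerate(state):
        if not src:
            continue
        for j, dst in enumerate(state):
            if i != j and len(dst) < CAP[j]:
                nxt = [list(p) for p in state]
                bead = nxt[i].pop()
                nxt[j].append(bead)
                yield tuple(tuple(p) for p in nxt)

def plan(start, goal):
    """Shortest move sequence from start to goal via breadth-first search."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

start = (("red", "green"), ("blue",), ())
goal = (("blue",), ("green",), ("red",))
print(len(plan(start, goal)))  # minimal number of moves for this instance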

Relevance:

30.00%

Publisher:

Abstract:

Cartilage matrix protein (CMP) is the prototype of the newly discovered matrilin family, all of which contain von Willebrand factor A domains. Although the function of matrilins remains unclear, we have shown that, in primary chondrocyte cultures, CMP (matrilin-1) forms a filamentous network, which is made up of two types of filaments, a collagen-dependent one and a collagen-independent one. In this study, we demonstrate that the collagen-independent CMP filaments are enriched in pericellular compartments, extending directly from chondrocyte membranes. Their morphology can be distinguished from that of collagen filaments by immunogold electron microscopy, and mimicked by that of self-assembled purified CMP. The assembly of CMP filaments can occur from transfection of a wild-type CMP transgene alone into skin fibroblasts, which do not produce endogenous CMP. Conversely, assembly of endogenous CMP filaments by chondrocytes can be inhibited specifically by dominant negative CMP transgenes. The two A domains within CMP serve essential but different functions during network formation. Deletion of the A2 domain converts the trimeric CMP into a mixture of monomers, dimers, and trimers, whereas deletion of the A1 domain does not affect the trimeric configuration. This suggests that the A2 domain modulates multimerization of CMP. Absence of either A domain from CMP abolishes its ability to form collagen-independent filaments. In particular, Asp22 in A1 and Asp255 in A2 are essential; double point mutation of these residues disrupts CMP network formation. These residues are part of the metal ion–dependent adhesion sites, thus a metal ion–dependent adhesion site–mediated adhesion mechanism may be applicable to matrilin assembly. Taken together, our data suggest that CMP is a bridging molecule that connects matrix components in cartilage to form an integrated matrix network.

Relevance:

30.00%

Publisher:

Abstract:

Single photon emission computed tomography (SPECT) technetium-99m hexamethylpropyleneamine oxime images were analyzed with an optimal interpolative neural network (OINN) algorithm to determine whether the network could discriminate among clinically diagnosed groups of elderly normal, Alzheimer disease (AD), and vascular dementia (VD) subjects. After initial image preprocessing and registration, image features were obtained that were representative of the mean regional tissue uptake. These features were extracted from a given image by averaging the intensities over various regions defined by suitable masks. After training, the network classified independent trials of patients whose clinical diagnoses conformed to published criteria for probable AD or probable/possible VD. For the SPECT data used in the current tests, the OINN agreement was 80% and 86% for probable AD and probable/possible VD, respectively. These results suggest that artificial neural network methods offer potential for diagnosis from brain images, and possibly for other areas of scientific research where complex patterns of data may have scientifically meaningful groupings that are not easily identified by the researcher.
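The feature-extraction step described above (mean uptake over masked regions) reduces to a few lines; a minimal numpy sketch, using made-up region masks rather than the study's actual ones:

import numpy as np

def regional_means(image, masks):
    """Mean intensity of `image` inside each boolean region mask."""
    return np.array([image[m].mean() for m in masks])

# Toy 8x8 "slice" and two hypothetical region masks (left/right halves):
rng = np.random.default_rng(1)
img = rng.random((8, 8))
left = np.zeros((8, 8), bool)
left[:, :4] = True
right = ~left
features = regional_means(img, [left, right])  # input vector for the ANN
print(features)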

Relevance:

30.00%

Publisher:

Abstract:

The optimal integration of heat and work may significantly reduce the energy demand and, consequently, the process cost. This paper introduces a new mathematical model for the simultaneous synthesis of heat exchanger networks (HENs) in which the pressure levels of the process streams can be adjusted to enhance heat integration. A superstructure is proposed for the HEN design with pressure recovery, developed via generalized disjunctive programming (GDP) and a mixed-integer nonlinear programming (MINLP) formulation. The process conditions (stream temperature and pressure) are optimized simultaneously. Furthermore, the approach allows for the coupling of turbines and compressors, and for the selection between turbines and valves, to minimize the total annualized cost, which consists of operational and capital expenses. The model is tested for its applicability in three case studies, including a cryogenic application. The results indicate that the energy integration reduces the quantity of utilities required, thus decreasing the overall cost.
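The total annualized cost objective mentioned above typically takes the generic form (a schematic statement of such objectives, not the paper's exact formulation)

\min \; \mathrm{TAC} = f_{\mathrm{an}} \sum_{u \in U} C^{\mathrm{cap}}_{u} + \sum_{u \in U} C^{\mathrm{op}}_{u}

where U is the set of candidate units in the superstructure (exchangers, compressors, turbines, valves), C^cap_u is the installed capital cost of unit u, C^op_u its operating cost (utilities and compression work), and f_an the annualization factor; the GDP disjunctions decide which units exist in the final design.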

Relevance:

30.00%

Publisher:

Abstract:

This paper reports on the development of an artificial neural network (ANN) method to detect laminar defects, following the pattern-matching approach and utilizing dynamic measurements. Although structural health monitoring (SHM) using ANNs has attracted much attention in the last decade, the problem of how to select the optimal class of ANN models has not been investigated in great depth. The lack of a rigorous ANN design methodology is one of the main reasons for the delay in the successful application of this promising technique to SHM. In this paper, a Bayesian method is applied to the selection of the optimal class of ANN models for a given set of input/target training data. The ANN design method is demonstrated for the detection and characterisation of laminar defects in carbon fibre-reinforced beams, using flexural vibration data for beams with and without non-symmetric delamination damage.
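The Bayesian selection of an ANN model class mentioned above rests on the standard posterior over candidate classes M_j given training data D (shown here schematically):

P(M_j \mid D) \propto P(D \mid M_j)\, P(M_j), \qquad P(D \mid M_j) = \int P(D \mid \mathbf{w}, M_j)\, P(\mathbf{w} \mid M_j)\, d\mathbf{w}

where w are the network weights; the evidence term P(D | M_j) automatically penalizes over-parameterized model classes, which is what makes the selection rigorous rather than trial-and-error.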

Relevance:

30.00%

Publisher:

Abstract:

We have proposed a novel robust inversion-based neurocontroller that searches for the optimal control law by sampling from the estimated Gaussian distribution of the inverse plant model. However, for problems involving the prediction of continuous variables, a Gaussian model approximation provides only a very limited description of the properties of the inverse model. This is usually the case for problems in which the mapping to be learned is multi-valued or involves hysteretic transfer characteristics, as often arises in inverse plant models. In order to obtain a complete description of the inverse model, a more general multicomponent distribution must be modeled. In this paper we test whether our proposed sampling approach can be used with arbitrary conditional probability distributions, modeled here by a mixture density network. Importance sampling provides a structured and principled approach to constraining the complexity of the search space for the ideal control law. The effectiveness of importance sampling from an arbitrary conditional probability distribution is demonstrated using a simple single-input single-output static nonlinear system with hysteretic characteristics in the inverse plant model.
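A mixture density network's conditional output is a Gaussian mixture, so drawing and weighting candidate control actions is straightforward; a minimal sketch (the mixture parameters and cost criterion below are illustrative, not the paper's trained network):

import numpy as np

def sample_mdn(weights, means, stds, n, rng):
    """Draw n samples from a 1-D Gaussian mixture (an MDN's output head)."""
    comps = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[comps], stds[comps])

rng = np.random.default_rng(0)
# Hypothetical two-component conditional p(u | x): a multi-valued inverse model.
w = np.array([0.6, 0.4])
mu = np.array([-1.0, 2.0])
sd = np.array([0.3, 0.5])
candidates = sample_mdn(w, mu, sd, 500, rng)

# Importance weighting toward a (hypothetical) control criterion, e.g.
# preferring controls whose predicted plant output is close to a setpoint:
cost = (candidates - 0.8) ** 2         # stand-in for predicted tracking error
iw = np.exp(-cost)
iw /= iw.sum()                         # normalized importance weights
u_star = candidates[np.argmax(iw)]     # selected candidate control action
print(u_star)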

Relevance:

30.00%

Publisher:

Abstract:

Logistics distribution network design is one of the major decision problems in contemporary supply chain management. The decision involves many quantitative and qualitative factors that may be conflicting in nature. This paper applies an integrated multiple-criteria decision-making approach to design an optimal distribution network. In the approach, the analytic hierarchy process (AHP) is first used to determine the relative importance weightings, or priorities, of alternative warehouses with respect to both deliverer-oriented and customer-oriented criteria. Then, a goal programming (GP) model incorporating the constraints of system, resource, and AHP priority is formulated to select the best set of warehouses without exceeding the limited available resources. Two commercial packages are used: Expert Choice for determining the AHP priorities of the warehouses, and LINDO for solving the GP model. © 2007 IEEE.
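The AHP weighting step works by extracting the principal eigenvector of a pairwise-comparison matrix; a minimal sketch (the comparison values are made up for illustration):

import numpy as np

# Hypothetical pairwise comparisons of three warehouses on one criterion
# (A[i, j] = how strongly warehouse i is preferred over warehouse j).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal eigenvector of A, normalized to sum to 1, gives the priorities.
vals, vecs = np.linalg.eig(A)
v = np.real(vecs[:, np.argmax(np.real(vals))])
priorities = v / v.sum()
print(priorities.round(3))  # relative importance weightings for the GP model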

Relevance:

30.00%

Publisher:

Abstract:

This work reports the development of a mathematical model and a distributed, multivariable computer-control system for a pilot-plant double-effect climbing-film evaporator. A distributed-parameter model of the plant has been developed, and the time-domain model has been transformed into the Laplace domain. The model has been further transformed into an integral domain conforming to an algebraic ring of polynomials, to eliminate the transcendental terms which arise in the Laplace domain owing to the distributed nature of the plant model. This has made possible the application of linear control theories to a set of linear partial differential equations. The models obtained track the experimental results of the plant well. A distributed computer network has been interfaced with the plant to implement digital controllers in a hierarchical structure. A modern multivariable Wiener-Hopf controller has been applied to the plant model. The application revealed a limiting condition: the plant matrix should be positive-definite along the infinite frequency axis. A new multivariable control theory has emerged from this study which avoids the above limitation. The controller has the structure of the modern Wiener-Hopf controller, but with a unique feature enabling the designer to specify the closed-loop poles in advance and to shape the sensitivity matrix as required. In this way, the method directly treats the interaction problems found in chemical processes, with good tracking and regulation performance. The ability of analytical design methods to determine once and for all whether a given set of specifications can be met is one of their chief advantages over conventional trial-and-error design procedures; however, one disadvantage that offsets these enormous advantages to some degree is the relatively complicated algebra that must be employed in working out all but the simplest problems. Mathematical algorithms and computer software have been developed to handle some of the mathematical operations defined over the integral domain, such as matrix fraction description, spectral factorization, the Bezout identity, and the general manipulation of polynomial matrices. Hence, the design problems of Wiener-Hopf-type controllers and other similar algebraic design methods can be easily solved.
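Of the polynomial-matrix operations listed above, the Bezout identity is the central one: for a right-coprime matrix fraction description G(s) = N(s) D^{-1}(s), one seeks polynomial matrices X(s) and Y(s) satisfying (in its schematic form)

X(s)\, N(s) + Y(s)\, D(s) = I

whose solutions underpin the parameterization of stabilizing controllers used in Wiener-Hopf-type designs.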