987 results for Implementation complexity
Abstract:
The implementation of Decision Support Systems (DSS) in urban wastewater treatment plants (WWTPs, Spanish/Catalan acronym EDAR) facilitates the application of more efficient, knowledge-based techniques for process management, ensuring the quality of the effluent while minimising the environmental cost of plant operation. Knowledge-based systems are characterised by their ability to work with poorly structured domains in which much of the relevant information is qualitative and/or uncertain. These are precisely the characteristic traits of biological treatment systems, and consequently of a WWTP. However, the high complexity of DSSs makes their design, development and application in a real plant very costly, so it becomes decisive to define a protocol that facilitates their export to WWTPs of similar technology. The objective of this Thesis is precisely the development of a protocol that enables the systematic export of DSSs and the reuse of previously acquired process knowledge. The work is developed around the case study resulting from the export to the Montornès WWTP of the original DSS prototype implemented at the Granollers WWTP. This DSS integrates two types of knowledge-based systems: rule-based systems (computer programs that emulate human reasoning and its problem-solving capacity using the same sources of information) and case-based reasoning systems (knowledge-based computer programs that aim to solve the abnormal situations currently affecting the plant by recalling the action taken in a similar past situation). The work is structured in several chapters. The first introduces the reader to the world of decision support systems and to the wastewater treatment domain. The objectives are then set out and the materials and methods used are described. Next, the DSS prototype developed for the Granollers WWTP is presented. Once the prototype has been presented, the first protocol, proposed by the author in his earlier Research Project, is described. The results obtained from the practical application of the protocol to generate a new DSS for a different treatment plant, starting from the prototype, are then presented. The practical application of the protocol allows it to evolve towards a better export plan. Finally, it can be concluded that the new protocol reduces the time needed to carry out the export process, even though the number of required steps has increased, which means that the new protocol is more systematic.
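To make the case-based reasoning idea concrete, the sketch below shows a minimal retrieval step in Python: past operating situations are stored as cases, the one most similar to the current plant state is recalled, and its recorded action is reused. The features, weights, and actions are hypothetical illustrations, not the actual case library of the Granollers or Montornès DSS.

```python
# Minimal case-based reasoning retrieval sketch (hypothetical features, weights
# and actions, not the actual Granollers/Montornès DSS case library).

def similarity(case_state, current_state, weights):
    """Weighted similarity between two plant states (1 when identical, -> 0 as they diverge)."""
    score = 0.0
    for feature, weight in weights.items():
        diff = abs(case_state[feature] - current_state[feature])
        score += weight / (1.0 + diff)
    return score

# Hypothetical case library: past abnormal situations and the action that solved them.
case_library = [
    {"state": {"influent_flow": 350, "ammonia": 45, "dissolved_oxygen": 0.8},
     "action": "increase aeration set-point"},
    {"state": {"influent_flow": 620, "ammonia": 20, "dissolved_oxygen": 2.1},
     "action": "activate storm-flow bypass protocol"},
]

weights = {"influent_flow": 0.3, "ammonia": 0.5, "dissolved_oxygen": 0.2}
current = {"influent_flow": 400, "ammonia": 48, "dissolved_oxygen": 0.9}

# Retrieve the most similar past case and reuse its action.
best = max(case_library, key=lambda c: similarity(c["state"], current, weights))
print("Recalled action:", best["action"])
```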
Abstract:
The characteristics of service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher-layer applications. Window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using PC is compared with QoS parameters in buffer-less environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for CAC in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well. Furthermore, this estimation is always conservative, allowing the network performance guarantees to be retained. Several experiments have been carried out and investigated to explain the deviation between the proposed method and the simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments to confine the limits of the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay is a limit that cannot be dismissed for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method used in bandwidth allocation. This method gives enough accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computation cost and a high number of accumulated calculations. To overcome these drawbacks, a new method of evaluation is analysed: the Enhanced Convolution Approach (ECA). In ECA, traffic is grouped in classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each class of traffic is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage requirements, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj); an expression for the CLRj evaluation is also presented. We can conclude that by combining the ECA method with cut-off mechanisms, utilisation of ECA in real-time CAC environments as a single-level scheme is always possible.
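As a rough illustration of the convolution approach described above (a simplified sketch under bufferless on/off assumptions, not the thesis's Enhanced Convolution Approach), each class of identical sources contributes a binomial distribution of offered load, the per-class distributions are convolved into a global rate distribution, and the CLR is estimated from the expected traffic offered in excess of the link capacity. All traffic parameters below are illustrative.

```python
# Simplified convolution-based CLR estimation for bufferless CAC
# (illustrative sketch; not the thesis's Enhanced Convolution Approach).
from math import comb

def class_distribution(n_sources, p_on, peak_rate):
    """P(aggregate rate of one class = k*peak_rate) for k = 0..n_sources (binomial)."""
    return {k * peak_rate: comb(n_sources, k) * p_on**k * (1 - p_on)**(n_sources - k)
            for k in range(n_sources + 1)}

def convolve(dist_a, dist_b):
    """Convolution of two discrete rate distributions."""
    out = {}
    for ra, pa in dist_a.items():
        for rb, pb in dist_b.items():
            out[ra + rb] = out.get(ra + rb, 0.0) + pa * pb
    return out

def estimate_clr(classes, capacity):
    """CLR = expected traffic in excess of capacity / expected offered traffic."""
    total = {0.0: 1.0}
    for n, p_on, peak in classes:
        total = convolve(total, class_distribution(n, p_on, peak))
    offered = sum(r * p for r, p in total.items())
    excess = sum((r - capacity) * p for r, p in total.items() if r > capacity)
    return excess / offered

# Illustrative scenario: two traffic classes sharing a 155 Mbit/s link.
classes = [(40, 0.2, 2.0), (10, 0.5, 10.0)]   # (sources, activity factor, peak rate in Mbit/s)
print("Estimated CLR:", estimate_clr(classes, capacity=155.0))
```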
Abstract:
Competitive Dialogue (CD) is a new contract award procedure of the European Community (EC). It is set out in Article 29 of the 'Public Sector Directive' 2004/18/EC. Over the last decades, projects were becoming more and more complex, and the existing EC procedures were no longer suitable to procure those projects. The call for a new procedure resulted in CD. This paper describes how the Directive has been implemented into the laws of two member states: the UK and the Netherlands. In order to implement the Directive, both lawmakers have set up a new and distinct piece of legislation. In each case, large parts of the Directive’s content have been repeated ‘word for word’; only minor parts have been reworded and/or restructured. In the next part of the paper, the CD procedure is examined in different respects. First, an overview is given on the different EC contract award procedures (open, restricted, negotiated, CD) and awarding methods (lowest price and Most Economically Advantageous Tender, MEAT). Second, the applicability of CD is described: Among other limitations, CD can only be applied to public contracts for works, supplies, and services, and this scope of application is further restricted by the exclusion of certain contract types. One such exclusion concerns services concessions. This means that PPP contracts which are set up as services concessions cannot be awarded by CD. The last two parts of the paper pertain to the main features of the CD procedure – from ‘contract notice’ to ‘contract award’ – and the advantages and disadvantages of the procedure. One advantage is that the dialogue allows the complexity of the project to be disentangled and clarified. Other advantages are the stimulation of innovation and creativity. These advantages are set against the procedure’s disadvantages, which include high transaction costs and a perceived hindrance of innovation (due to an ambiguity between transparency and fair competition). It is concluded that all advantages and disadvantages are related to one of three elements: communication, competition, and/or structure of the procedure. Further research is needed to find out how these elements are related.
Abstract:
We present a stochastic approach for solving the quantum-kinetic equation introduced in Part I. A Monte Carlo method based on backward time evolution of the numerical trajectories is developed. The computational complexity and the stochastic error are investigated numerically. Variance reduction techniques are applied, which demonstrate a clear advantage with respect to the approaches based on symmetry transformation. Parallel implementation is realized on a GRID infrastructure.
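The variance-reduction idea can be illustrated on a toy problem. The sketch below uses antithetic variates on a one-dimensional integral; it is a generic illustration only, not the backward-trajectory estimator developed for the quantum-kinetic equation.

```python
# Toy illustration of Monte Carlo variance reduction with antithetic variates
# (generic sketch; not the backward-trajectory estimator of the paper).
import random
import math

def f(x):
    return math.exp(x)          # estimate I = integral_0^1 e^x dx = e - 1

def plain_mc(n):
    return sum(f(random.random()) for _ in range(n)) / n

def antithetic_mc(n):
    # Pair each sample u with its antithetic counterpart 1 - u; the negative
    # correlation between f(u) and f(1 - u) lowers the estimator variance.
    total = 0.0
    for _ in range(n // 2):
        u = random.random()
        total += 0.5 * (f(u) + f(1.0 - u))
    return total / (n // 2)

random.seed(0)
print("exact      :", math.e - 1)
print("plain MC   :", plain_mc(10_000))
print("antithetic :", antithetic_mc(10_000))
```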
Abstract:
An information processing paradigm in the brain is proposed, instantiated in an artificial neural network using biologically motivated temporal encoding. The network locates, within the external-world stimulus, the target memory defined by a specific pattern of micro-features. The proposed network is robust and efficient. Akin in operation to the swarm-intelligence paradigm of stochastic diffusion search, it finds the best fit to the memory with linear time complexity. Information multiplexing enables neurons to process knowledge as 'tokens' rather than 'types'. The network illustrates the possible emergence of cognitive processing, such as memory retrieval based on partial matching, from low-level interactions. (C) 2007 Elsevier B.V. All rights reserved.
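For readers unfamiliar with stochastic diffusion search, a minimal textbook-style sketch of the swarm paradigm follows: agents test random micro-features of their hypotheses and recruit one another to the best-fit location. It illustrates the search principle only, not the temporal-coding neural network proposed in the paper; the search string and pattern are invented.

```python
# Minimal stochastic diffusion search (SDS) sketch: agents converge on the offset
# where a (possibly noisy) pattern best matches the search space.
import random
from collections import Counter

random.seed(1)
search_space = "xxxhelqoxxxxhelloxxxx"
pattern = "hello"

N_AGENTS, ITERATIONS = 100, 50
n_offsets = len(search_space) - len(pattern) + 1
hypotheses = [random.randrange(n_offsets) for _ in range(N_AGENTS)]
active = [False] * N_AGENTS

for _ in range(ITERATIONS):
    # Test phase: each agent checks one randomly chosen micro-feature of its hypothesis.
    for i, h in enumerate(hypotheses):
        j = random.randrange(len(pattern))
        active[i] = (search_space[h + j] == pattern[j])
    # Diffusion phase: an inactive agent copies an active peer's hypothesis,
    # otherwise it resamples a new hypothesis at random.
    for i in range(N_AGENTS):
        if not active[i]:
            peer = random.randrange(N_AGENTS)
            hypotheses[i] = hypotheses[peer] if active[peer] else random.randrange(n_offsets)

offset, support = Counter(hypotheses).most_common(1)[0]
print(f"best-fit offset {offset} -> '{search_space[offset:offset + len(pattern)]}' "
      f"(supported by {support} of {N_AGENTS} agents)")
```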
Abstract:
Both the (5,3) counter and (2,2,3) counter multiplication techniques are investigated for the efficiency of their operation speed and the viability of the architectures when implemented in a fast bipolar ECL technology. The implementation of the counters in series-gated ECL and threshold logic are contrasted for speed, noise immunity and complexity, and are critically compared with the fastest practical design of a full-adder. A novel circuit technique to overcome the problems of needing high fan-in input weights in threshold circuits through the use of negative weighted inputs is presented. The authors conclude that a (2,2,3) counter based array multiplier implemented in series-gated ECL should enable a significant increase in speed over conventional full adder based array multipliers.
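As background, a (5,3) counter compresses five equally weighted input bits into a 3-bit count of how many are set; the (2,2,3) counter is assumed here to sum two bits in each of two adjacent weight columns into a 3-bit result. The sketch below models only the arithmetic behaviour of the counters, not the series-gated ECL or threshold-logic circuits discussed in the paper.

```python
# Behavioural models of the counters (arithmetic function only, not the circuits).

def counter_5_3(a, b, c, d, e):
    """(5,3) counter: 3-bit count (s2, s1, s0) of five equally weighted input bits."""
    total = a + b + c + d + e                               # 0..5
    return (total >> 2) & 1, (total >> 1) & 1, total & 1

def counter_2_2_3(a1, a0, b1, b0):
    """(2,2,3) counter, assumed interpretation: two bits of weight 2 (a1, b1)
    plus two bits of weight 1 (a0, b0), summed into a 3-bit result (max 6)."""
    total = 2 * (a1 + b1) + (a0 + b0)
    return (total >> 2) & 1, (total >> 1) & 1, total & 1

print(counter_5_3(1, 1, 0, 1, 1))    # four ones set -> (1, 0, 0), i.e. binary 100
print(counter_2_2_3(1, 1, 0, 1))     # 2*1 + 1 + 2*0 + 1 = 4 -> (1, 0, 0)
```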
Abstract:
In most cases, the cost of a control system increases with its complexity. The proportional (P) controller is the simplest and most intuitive structure for the implementation of linear control systems. The difficulty of finding the stability range of feedback systems with P controllers using the Routh-Hurwitz criterion increases with the order of the plant. For high-order plants, the stability range cannot be easily obtained from inspection of the coefficient signs in the first column of the Routh array. A direct method for determining the stability range is presented. The method is easy to understand and to compute, and offers students a better comprehension of this subject. A program in the MATLAB language based on the proposed method, design examples, and class assessments are provided to support the pedagogical aims. The method and the program enable the user to specify a decay rate and also extend to proportional-integral (PI), proportional-derivative (PD), and proportional-integral-derivative (PID) controllers.
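The paper's direct method and MATLAB program are not reproduced here; as a point of reference, the sketch below estimates the stabilising range of a proportional gain by brute force, sweeping K and checking the closed-loop poles of the characteristic polynomial den(s) + K·num(s) against a specified decay rate. The plant and the decay rate are illustrative.

```python
# Brute-force estimate of the stabilising range of a proportional gain K:
# keep the values of K for which every closed-loop pole of den(s) + K*num(s) = 0
# satisfies Re(s) < -decay_rate. Illustrative sketch, not the paper's direct method.
import numpy as np

# Illustrative third-order plant G(s) = 1 / (s^3 + 6 s^2 + 11 s + 6)
num = np.array([1.0])
den = np.array([1.0, 6.0, 11.0, 6.0])

def closed_loop_poles(K):
    padded_num = np.concatenate([np.zeros(len(den) - len(num)), num])
    return np.roots(den + K * padded_num)

def stabilising_gains(k_values, decay_rate=0.0):
    return [K for K in k_values
            if np.all(closed_loop_poles(K).real < -decay_rate)]

gains = stabilising_gains(np.linspace(0.1, 200.0, 2000), decay_rate=0.0)
print(f"stabilising K range (sampled): {min(gains):.2f} .. {max(gains):.2f}")
```

For this plant, the Routh-Hurwitz criterion gives stability for -6 < K < 60, which the sampled sweep reproduces up to its grid resolution.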
Abstract:
The iterative quadratic maximum likelihood (IQML) and the method of direction estimation (MODE) are well-known high-resolution direction-of-arrival (DOA) estimation methods. Their solutions lead to an optimization problem with constraints. The usual linear constraint exhibits poor performance for certain DOA values. This work proposes a new linear constraint applicable to both DOA methods and compares its performance with two others: the unit-norm constraint and the usual linear constraint. It is shown that the proposed alternative performs better than the other constraints. The resulting computational complexity is also investigated.
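For context, IQML- and MODE-type methods repeatedly minimise a quadratic form in a coefficient vector subject to a constraint that rules out the trivial zero solution. The sketch below shows the standard closed-form minimiser for a generic linear constraint w^H b = 1; the specific constraint vectors compared in the paper are not reproduced here, and Q and w are illustrative.

```python
# Closed-form solution of  min_b  b^H Q b  subject to  w^H b = 1
# (the generic linearly constrained quadratic step behind IQML/MODE-type methods;
# the paper's specific constraint vectors are not reproduced here).
import numpy as np

def constrained_minimiser(Q, w):
    """b = Q^{-1} w / (w^H Q^{-1} w) for Hermitian positive-definite Q."""
    q_inv_w = np.linalg.solve(Q, w)
    return q_inv_w / (w.conj() @ q_inv_w)

# Illustrative Hermitian positive-definite Q and constraint vector w.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q = A.conj().T @ A + 4 * np.eye(4)
w = np.array([1.0, 0, 0, 0], dtype=complex)   # e.g. "first coefficient fixed" style constraint

b = constrained_minimiser(Q, w)
print("constraint satisfied:", np.isclose(w.conj() @ b, 1.0))
print("objective value     :", (b.conj() @ Q @ b).real)
```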
Abstract:
The focus of the activities of the Economic Commission for Latin America and the Caribbean/Caribbean Development and Cooperation Committee (ECLAC/CDCC) secretariat during the 2006-2007 biennium continued to be on assistance to member governments of the subregion with policy-making and development strategies, especially on issues relevant to the promotion of the economic, social, and environmental dimensions of development in the Caribbean. The Subregional Headquarters for the Caribbean worked closely with member countries of the CDCC in an effort to ensure the relevance of outputs which would inform policy options. This involved the strengthening of partnerships with both regional and subregional institutions and relevant agencies of the United Nations system working in the Caribbean. A major decision was taken to refocus the operational aspects of the secretariat to ensure that they were relevant to the development goals of its members. This involved the introduction of a thematic approach to the work of the office. One of the changes resulting from this was the restructuring and renaming of the Caribbean Documentation Centre. The Caribbean Knowledge Management Centre (CKMC), as it is now known, has changed its emphasis from organizing and disseminating documents, and is now a more proactive partner in the research undertaken by staff and other users of the service. The CKMC manages the ECLAC website, the public face of the organization. Newsletters and all other documents, including Information and Communications Technology (ICT) profiles of selected countries, prepared by the secretariat, are now available online at the ECLAC/CDCC website www.eclacpos.org . The Caribbean Knowledge Management Portal was launched at a meeting of information specialists in St. Vincent and the Grenadines in 2007. In addition to reaching a wider public, this measure was introduced as a means of reducing the cost of printing or disseminating publications. In spite of the unusually high vacancy rate, at both the international and local levels, during the biennium, the subregional headquarters accomplished 98 per cent of the 119 outputs earmarked for the period. Using vacant positions to carry out the assignments was not an easy task, given the complexity in recruiting qualified and experienced persons for short periods. Nevertheless, consultancy services and short-term replacement staff greatly aided the delivery of these outputs. All the same, 35 work months remained unused during the biennium, leaving 301 work months to complete the outputs. In addition to the unoccupied positions, the work of the subprogramme was severely affected by the rising cost of regional and subregional travel which limited the ability of staff to network and interact with colleagues of member countries. This also hampered the outreach programme carried out mainly through ad hoc expert group meetings. In spite of these shortcomings, the period proved to be successful for the subprogramme as it engaged the attention of member countries in its work either through direct or indirect participation. Staff members completed 36 technical papers plus the reports of the meetings and workshops. A total of 523 persons, representing member countries, participated in the 18 intergovernmental and expert meetings convened by the secretariat in the 24-month period. In its effort to build technical capacity, the subprogramme convened 15 workshops/seminars which offered training for 446 persons.
Abstract:
Abstract Background: The public health system of Brazil is structured as a network of increasing complexity, but the low resolution of emergency care at pre-hospital units and the lack of organization of patient flow overloaded the hospitals, mainly those of higher complexity. Knowledge of this phenomenon led Ribeirão Preto to implement the Medical Regulation Office and the Mobile Emergency Attendance System. The objective of this study was to analyze the impact of these services on the severity profile of non-traumatic afflictions in a university hospital. Methods: The study conducted a retrospective analysis of the medical records of 906 patients older than 13 years of age who entered the Emergency Care Unit of the Hospital of the University of São Paulo School of Medicine at Ribeirão Preto. All presented acute non-traumatic afflictions and were admitted to the Internal Medicine, Surgery or Neurology Departments during two study periods: May 1996 (prior to) and May 2001 (after) the implementation of the Medical Regulation Office and Mobile Emergency Attendance System. Demographics and mortality risk levels calculated by the Acute Physiology and Chronic Health Evaluation II (APACHE II) score were determined. Results: From 1996 to 2001, the mean age increased from 49 ± 0.9 to 52 ± 0.9 (P = 0.021), as did the percentage of co-morbidities, from 66.6 to 77.0 (P = 0.0001), the number of in-hospital complications, from 260 to 284 (P = 0.0001), the mean calculated APACHE II mortality risk, from 12.0 ± 0.5 to 14.8 ± 0.6 (P = 0.0008), and the mortality rate, from 6.1 to 12.2 (P = 0.002). The differences were more significant for patients admitted to the Internal Medicine Department. Conclusion: The implementation of the Medical Regulation Office and Mobile Emergency Attendance System contributed to directing patients with higher severity scores to the Emergency Care Unit, demonstrating the potential of these services for the hierarchical structuring of pre-hospital networks and referrals.
Abstract:
The accuracy and performance of current variational optical flow methods have increased considerably during the last years. The complexity of these techniques is high and enough care has to be taken in the implementation. The aim of this work is to present a comprehensible implementation of recent variational optical flow methods. We start with an energy model that relies on brightness and gradient constancy terms and a flow-based smoothness term. We minimize this energy model and derive an efficient implicit numerical scheme. In the experimental results, we evaluate the accuracy and performance of this implementation with the Middlebury benchmark database. We show that it is a competitive solution with respect to current methods in the literature. In order to increase the performance, we use a simple strategy to parallelize the execution on multi-core processors.
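The energy model above extends the classic Horn–Schunck formulation; as a much simpler baseline, the sketch below runs a few Jacobi iterations of the Horn–Schunck energy (brightness constancy plus quadratic smoothness). It is a reference point only, not the implicit scheme with gradient constancy and flow-based smoothness presented in this work.

```python
# Classic Horn–Schunck optical flow (brightness constancy + quadratic smoothness),
# shown as a simple baseline; the paper's model adds gradient constancy and a
# flow-based smoothness term on top of this kind of scheme.
import numpy as np

def horn_schunck(I1, I2, alpha=15.0, iterations=100):
    I1, I2 = I1.astype(float), I2.astype(float)
    # Image derivatives (simple spatial/temporal differences).
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    neighbour_mean = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(iterations):
        u_bar, v_bar = neighbour_mean(u), neighbour_mean(v)
        # Jacobi update derived from the Euler–Lagrange equations of the energy.
        t = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * t
        v = v_bar - Iy * t
    return u, v

# Tiny synthetic example: a bright square shifted one pixel to the right.
I1 = np.zeros((32, 32)); I1[12:20, 12:20] = 1.0
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)
print("mean horizontal flow inside the square:", u[12:20, 12:20].mean())
```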
Abstract:
Sustainable computer systems require some flexibility to adapt to unpredictable environmental changes. A solution lies in autonomous software agents, which can adapt autonomously to their environments. Though autonomy allows agents to decide which behavior to adopt, a disadvantage is a lack of control, and as a side effect even untrustworthiness: we want to keep some control over such autonomous agents. How can autonomous agents be controlled while respecting their autonomy? A solution is to regulate agents’ behavior by norms. The normative paradigm makes it possible to control autonomous agents while respecting their autonomy, limiting untrustworthiness and augmenting system compliance. It can also facilitate the design of the system, for example, by regulating the coordination among agents. However, an autonomous agent will follow norms or violate them depending on the conditions. Under what conditions is a norm binding upon an agent? While autonomy is regarded as the driving force behind the normative paradigm, cognitive agents provide a basis for modeling the bindingness of norms. In order to cope with the complexity of modeling cognitive agents and normative bindingness, we adopt an intentional stance. Since agents are embedded in a dynamic environment, things may not happen at the same instant. Accordingly, our cognitive model is extended to account for some temporal aspects. Special attention is given to the temporal peculiarities of the legal domain such as, among others, the time in force and the time in efficacy of provisions. Some types of normative modifications are also discussed in the framework. It is noteworthy that our temporal account of legal reasoning is integrated with our commonsense temporal account of cognition. As our intention is to build sustainable reasoning systems running in unpredictable environments, we adopt a declarative representation of knowledge. A declarative representation of norms will make it easier to update their system representation, thus facilitating system maintenance, and to improve system transparency, thus easing system governance. Since agents are bounded and embedded in unpredictable environments, and since conflicts may appear among mental states and norms, agent reasoning has to be defeasible, i.e. new pieces of information can invalidate formerly derivable conclusions. In this dissertation, our model is formalized in a non-monotonic logic, namely a temporal modal defeasible logic, in order to account for the interactions between normative systems and software cognitive agents.
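The notion of defeasibility can be made concrete with a toy example: a rule's conclusion stands only as long as no applicable higher-priority rule supports the opposite conclusion, so adding a new fact can retract a previously derivable conclusion. The sketch below is schematic, with hypothetical rules and priorities, and is not the temporal modal defeasible logic formalised in the dissertation.

```python
# Toy defeasible reasoning sketch: a rule's conclusion stands unless an applicable
# rule of higher priority supports the opposite conclusion. Schematic only;
# the dissertation formalises this in a temporal modal defeasible logic.

# Each rule: (name, premises, conclusion, priority). A conclusion "-x" negates "x".
rules = [
    ("r1", {"ordered_goods"},                 "obliged_to_pay",  1),
    ("r2", {"ordered_goods", "goods_faulty"}, "-obliged_to_pay", 2),
]

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def defeasibly_derived(facts, rules):
    applicable = [r for r in rules if r[1] <= facts]        # premises satisfied
    conclusions = set()
    for name, _, concl, prio in applicable:
        defeaters = [r for r in applicable if r[2] == negate(concl) and r[3] > prio]
        if not defeaters:
            conclusions.add(concl)
    return conclusions

print(defeasibly_derived({"ordered_goods"}, rules))                  # {'obliged_to_pay'}
print(defeasibly_derived({"ordered_goods", "goods_faulty"}, rules))  # {'-obliged_to_pay'}
```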
Abstract:
This thesis deals with the study of optimal control problems for the incompressible magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as nuclear fission reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve control of the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems, in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, both from a theoretical and from a computational point of view. In this thesis we propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions for the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization, together with an appropriate gradient-type algorithm. A finite element object-oriented library has been developed to obtain a parallel and multigrid computational implementation of the optimality system based on a multiphysics approach. Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
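Schematically, the constrained minimisation and the Lagrangian described above take the following generic form, with an abstract state u, boundary control g on the controlled boundary Γ_c, target state u_d, regularisation weight β, and MHD state operator E; the notation is illustrative and not the thesis's exact functional.

```latex
% Schematic structure of the boundary optimal control problem
% (abstract notation; not the exact functional used in the thesis).
\min_{\mathbf{u},\,\mathbf{g}}\;
  \mathcal{J}(\mathbf{u},\mathbf{g})
  = \frac{1}{2}\int_{\Omega}\lvert \mathbf{u}-\mathbf{u}_d\rvert^{2}\,d\Omega
  + \frac{\beta}{2}\int_{\Gamma_c}\lvert \mathbf{g}\rvert^{2}\,d\Gamma
  \quad\text{subject to}\quad
  \mathcal{E}(\mathbf{u},\mathbf{g}) = 0,
\qquad
\mathcal{L}(\mathbf{u},\mathbf{g},\boldsymbol{\lambda})
  = \mathcal{J}(\mathbf{u},\mathbf{g})
  + \bigl\langle \boldsymbol{\lambda},\,\mathcal{E}(\mathbf{u},\mathbf{g})\bigr\rangle .
```

Setting the variations of the Lagrangian with respect to the state, the adjoint variable, and the control to zero yields the optimality system that the thesis discretises with finite elements.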
Abstract:
This thesis presents a CMOS amplifier with high common-mode rejection designed in UMC 130 nm technology. The goal is to achieve a high amplification factor for a wide range of biological signals (with frequencies in the range of 10 Hz-1 kHz) and to reject the common-mode noise signal. A Data Acquisition System is presented here, composed of a Delta-Sigma-like Modulator and an antenna, which is the core of a portable low-complexity radio system; the amplifier is designed to interface the data acquisition system with a sensor that acquires the electrical signal. The Modulator asynchronously acquires and samples human muscle activity by sending a quasi-digital pattern that encodes the acquired signal. There is only a minor loss of information when translating the muscle activity using this pattern, compared to an encoding technique which uses a standard digital signal via Impulse-Radio Ultra-Wide Band (IR-UWB). The biological signals, needed for electromyographic analysis, have an amplitude of 10-100 μV and need to be highly amplified and separated from the overwhelming 50 mV common-mode noise signal. Various tests of the soundness of the concept are presented, as well as proof that the design works even with different sensors, such as radiation measurement for dosimetry studies.
Abstract:
An important problem in computational biology is finding the longest common subsequence (LCS) of two nucleotide sequences. This paper examines the correctness and performance of a recently proposed parallel LCS algorithm that uses successor tables and pruning rules to construct a list of sets from which an LCS can be easily reconstructed. Counterexamples are given for two pruning rules that were given with the original algorithm. Because of these errors, performance measurements originally reported cannot be validated. The work presented here shows that speedup can be reliably achieved by an implementation in Unified Parallel C that runs on an Infiniband cluster. This performance is partly facilitated by exploiting the software cache of the MuPC runtime system. In addition, this implementation achieved speedup without bulk memory copy operations and the associated programming complexity of message passing.
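For reference, the underlying problem can be stated with the classic sequential dynamic-programming recurrence shown below; this baseline is not the successor-table-and-pruning parallel algorithm examined in the paper, nor its Unified Parallel C implementation.

```python
# Classic O(m*n) dynamic-programming LCS, shown as the sequential baseline;
# the paper studies a different parallel algorithm based on successor tables.

def longest_common_subsequence(a: str, b: str) -> str:
    m, n = len(a), len(b)
    # dp[i][j] = length of an LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Trace back one optimal subsequence.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(longest_common_subsequence("GATTACA", "TACGTACG"))   # prints one LCS of length 4, here "GTAC"
```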