803 results for common agent architecture design
Abstract:
This paper argues that the euro zone requires a government banker that manages the bond market and helps finance country budget deficits. The euro solved Europe’s problem of exchange rate speculation by creating a unified currency managed by a single central bank, but in doing so it replaced the exchange rate speculation problem with bond market speculation. Remedying this requires a central bank that acts as government banker and maintains bond interest rates at sustainable levels. Because the euro is a monetary union, this must be done in a way that both avoids favoring individual countries and avoids creating incentives for irresponsible country fiscal policy that leads to “bail-outs”. The paper argues this can be accomplished via a European Public Finance Authority (EPFA) that issues public debt which the European Central Bank (ECB) is allowed to trade. The debate over the euro’s financial architecture has significant political implications. The current neoliberal-inspired architecture, which imposes a complete separation between the central bank and public finances, puts governments under continuous financial pressure, which will make it difficult to maintain the European social democratic welfare state. This provides a political reason, supplementing the economic case, for reforming the euro and creating an EPFA.
Abstract:
Tests on printed circuit boards and integrated circuits are widely used in industry, reducing the design time and cost of a project. Functional and connectivity tests for this type of circuit soon became a concern for manufacturers, prompting research into solutions that would be reliable, quick, cheap, and universal. Initially, test schemes were based on a set of needles connected to the inputs and outputs of the integrated circuit board (bed-of-nails), to which signals were applied in order to verify whether the circuit met the specifications and could be assembled on the production line. With the development of projects, circuit miniaturization, improvements in production processes and materials, and the growing number of circuits, another solution had to be found. Thus Boundary-Scan Testing was developed, which operates on the boundary of integrated circuits and allows testing the connectivity of the input and output ports of a circuit. The Boundary-Scan Testing method was turned into a standard in 1990 by the IEEE, becoming known as the IEEE 1149.1 Standard. Since then, a large number of manufacturers have adopted this standard in their products. The main objective of this master's thesis is the design of Boundary-Scan Testing in an image sensor in CMOS technology, analyzing the standard's requirements and the process used in prototype production, developing the Boundary-Scan design and layout, and analyzing the results obtained after production. Chapter 1 briefly presents the evolution of testing procedures used in industry, developments and applications of image sensors, and the motivation for using the Boundary-Scan Testing architecture. Chapter 2 explores the fundamentals of Boundary-Scan Testing and image sensors, starting with the Boundary-Scan architecture defined in the Standard, whose functional blocks are analyzed; this understanding is necessary to implement the design on an image sensor. It also explains the architecture of image sensors currently in use, focusing on sensors with a large number of inputs and outputs. Chapter 3 describes the Boundary-Scan design implemented, analyzing the design and functions of the prototype, the software used, and the designs and simulations of the functional blocks of the implemented Boundary-Scan. Chapter 4 presents the layout process based on the design developed in Chapter 3, describing the software used for this purpose, the planning of the layout location (floorplan) and its dimensions, the layout of the individual blocks, the layout-rule checks, the comparison with the final design, and finally the simulation. Chapter 5 describes how the functional tests were performed to verify the design's compliance with the specifications of Standard IEEE 1149.1; these tests focused on applying signals to the input and output ports of the produced prototype. Chapter 6 presents the conclusions drawn throughout the execution of the work.
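To make the boundary-scan principle concrete, below is a minimal Python sketch of a single boundary-scan cell and of shifting a pattern through a chain of such cells, loosely following the Capture-DR / Shift-DR / Update-DR steps defined in IEEE 1149.1. The names (BoundaryScanCell, shift_chain) and the simplified single-latch structure are illustrative assumptions, not the thesis's implementation or the standard's exact cell design.

```python
# Minimal illustrative model of an IEEE 1149.1-style boundary-scan cell and chain.
# Each cell sits between the core logic and a package pin; the cells are chained
# serially so test patterns can be shifted in from TDI and out to TDO.

class BoundaryScanCell:
    def __init__(self):
        self.shift_ff = 0    # flip-flop on the serial shift path (Capture/Shift-DR)
        self.update_ff = 0   # output latch driven onto the pin in EXTEST (Update-DR)

    def capture(self, pin_value):
        """Capture-DR: sample the pin value into the shift-path flip-flop."""
        self.shift_ff = pin_value

    def shift(self, serial_in):
        """Shift-DR: take one bit in, return the bit shifted out (one TCK)."""
        serial_out = self.shift_ff
        self.shift_ff = serial_in
        return serial_out

    def update(self):
        """Update-DR: transfer the shifted-in bit to the output latch."""
        self.update_ff = self.shift_ff


def shift_chain(cells, pattern):
    """Shift a test pattern through the chain (first cell receives TDI) and
    collect the bits emerging at TDO; comparing them against expected values
    is, in essence, how interconnect opens and shorts are detected."""
    tdo = []
    for bit in pattern:
        for cell in cells:   # one TCK: the bit ripples one position along the chain
            bit = cell.shift(bit)
        tdo.append(bit)
    return tdo


if __name__ == "__main__":
    chain = [BoundaryScanCell() for _ in range(4)]
    for cell, pin in zip(chain, [1, 0, 1, 1]):
        cell.capture(pin)                     # Capture-DR samples the pin states
    print(shift_chain(chain, [0, 0, 0, 0]))   # shifting zeros in reads the captured pins out
```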
Abstract:
This paper examines, through case studies, the organization of the production process of architectural projects in architecture offices in the city of Natal, specifically in relation to building projects. The specifics of the design process in architecture, and the production of projects in Natal's professional field, are studied in light of theories of design and of the design production process. The survey, in its different phases, was conducted between March 2010 and September 2012 and aimed to identify, understand, and comparatively analyze, by mapping the design process, the organization of building-project production in two offices in Natal, also examining the relationships among their agents during the process. The project was based on desk and exploratory research, adopting for both data-collection tools such as forms, questionnaires, and interviews. With the specific aim of mapping the design process, we adopted a technique that allows information to be obtained directly from the employees involved in the production process. The technique consisted of recording information by completing, daily, during or at the end of the workday, an individual virtual agenda in which every collaborating agent described the tasks performed. The data collected allowed the identification of each office's organizational structure, its hierarchy, the responsibilities of the agents, and the tasks they performed during the two months of monitoring at each office. The research findings were based on analyses of the data collected in the two offices and on comparative studies of the results of those analyses. The end result was a diagnostic evaluation that considered the level of organization from this perspective and proposed solutions aimed at improving both the organization of the process and the relationships between the agents under the lens analyzed.
H-infinity control design for time-delay linear systems: a rational transfer function based approach
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Based on the genetic analysis of the genome of the phytopathogen Xylella fastidiosa, five media with defined composition were developed and the growth of this fastidious prokaryote was evaluated in liquid media and on solid plates. All media had a common salt composition and included the same amounts of glucose and vitamins but differed in their amino acid content. XDM1 medium contained the amino acids threonine, serine, glycine, alanine, aspartic acid and glutamic acid, for which complete degradation pathways occur in X. fastidiosa; XDM2 included serine and methionine, amino acids for which biosynthetic enzymes are absent, plus asparagine and glutamine, which are abundant in the xylem sap; XDM3 had the same composition as XDM2 but with asparagine replaced by aspartic acid, owing to the presence of a complete degradation pathway for aspartic acid; XDM4 was a minimal medium with glutamine as the sole nitrogen source; XDM5 had the same composition as XDM4, plus methionine. The liquid and solidified XDM2 and XDM3 media were the most effective for the growth of X. fastidiosa. This work opens the opportunity for the in silico design of bacterial defined media once their genome is sequenced. (C) 2002 Federation of European Microbiological Societies. Published by Elsevier B.V. All rights reserved.
Abstract:
The objective of this work was the genetic characterization of four new Rhizobium strains and the evaluation of their N2 fixation and nodulation capacity, compared with commercial strains and with the native rhizobia population of a Red Latosol soil. Two experiments were carried out in a randomized block design in a greenhouse. In the first experiment, conducted in tubes containing vermiculite, the nodulation and fixation capacity of the new strains were evaluated in comparison with the commercial strains CIAT-899 and PRF-81 and with the native soil population. Genomic DNA was extracted from the isolated pure colonies and the intergenic spacer was sequenced for the genetic characterization of the strains and of the native rhizobia population. The second experiment was carried out in pots with soil to determine the yield and nodulation of common bean, cultivar Pérola, using the strains alone or mixed with PRF-81. The native soil population was identified as Rhizobium sp. and proved inefficient at nitrogen fixation. Three Rhizobium species were found among the four new strains. The strains LBMP-4BR and LBMP-12BR are among those with the greatest nodulation and N2 fixation capacity, and they show distinct responses when mixed with PRF-81.
Abstract:
This study evaluated the apparent protein and energy digestibility of ingredients (soybean meal, fish meal, wheat bran, and corn) in juvenile apaiari (Astronotus ocellatus) using two different feces-collection intervals (30 min and 12 h). The 160 juveniles used (22.37 ± 3.06 g body weight) were distributed among four cylindrical plastic cages, each placed in a 1,000 L feeding tank. The experiment followed a completely randomized 2 x 4 factorial design (2 feces-collection intervals and 4 ingredients) with four replicates. The statistical tests detected no interaction effect between collection interval and ingredient type on the digestibility coefficients. The collection interval did not affect protein or energy digestibility. The physical characteristics of the feces of juvenile apaiari apparently make them less prone to nutrient loss by leaching, allowing longer collection intervals. The protein digestibility of the evaluated ingredients was similar, showing that juvenile apaiari digest both animal and plant ingredients efficiently. The energy digestibility coefficients were higher for fish meal and soybean meal than for wheat bran and corn. Carbohydrate-rich ingredients (wheat bran and corn) showed the lowest energy digestibility coefficients and are therefore not used efficiently by juvenile apaiari.
Abstract:
Simulations based on cognitively rich agents can become a very intensive computing task, especially when the simulated environment represents a complex system. The situation becomes worse when time constraints are present. This kind of simulation would benefit from a mechanism that improves the way agents perceive and react to changes in such environments. In other words, an approach that improves the efficiency (performance and accuracy) of the decision process of autonomous agents in a simulation would be useful. In complex environments full of variables, it is possible that not all of the information available to the agent is necessary for its decision-making process, depending on the task being performed. The agent would then need to filter the incoming perceptions, in the same way we do with our focus of attention. By using a focus of attention, only the information that really matters to the agent's running context is perceived (cognitively processed), which can improve the decision-making process. The architecture proposed herein presents a structure for cognitive agents divided into two parts: 1) the main part contains the reasoning/planning process, knowledge, and affective state of the agent, and 2) a set of behaviors that are triggered by planning in order to achieve the agent's goals. Each of these behaviors has a focus of attention that is dynamically adjustable at runtime, tuned according to the variation of the agent's affective state. The focus of each behavior is divided into a qualitative focus, which is responsible for the quality of the perceived data, and a quantitative focus, which is responsible for the quantity of the perceived data. Thus, a behavior can filter the information sent by the agent's sensors and build a list of perceived elements containing only the information the agent needs, according to the context of the behavior currently running. Inspired by the human focus of attention, the agent is also endowed with an affective state. The agent's affective state is based on theories of human emotion, mood, and personality. This model serves as the basis for the mechanism of continuous adjustment of the agent's attention focus, both qualitative and quantitative. With this mechanism, the agent can adjust its focus of attention during the execution of a behavior in order to become more efficient in the face of environmental changes. The proposed architecture can be used very flexibly: the focus of attention can remain fixed (neither the qualitative nor the quantitative focus changes), or different combinations of qualitative and quantitative focus variation can be used. The architecture was built on a platform for BDI agents, but its design allows it to be used with any other type of agent, since the implementation is made only in the perception layer of the agent. In order to evaluate the contribution proposed in this work, an extensive series of experiments was conducted on an agent-based simulation of a fire-growth scenario. In the simulations, agents using the architecture proposed in this work are compared with similar agents (with the same reasoning model) that process all the information sent by the environment. Intuitively, one would expect the omniscient agents to be more efficient, since they can consider every possible option before making a decision. However, the experiments showed that attention-focus based agents can be as efficient as the omniscient ones, with the advantage of being able to solve the same problems in a significantly reduced time. Thus, the experiments indicate the efficiency of the proposed architecture.
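As a rough illustration of the mechanism just described, the Python sketch below shows a behavior-level attention focus with a qualitative part (which kinds of percepts the running behavior cares about) and a quantitative part (how many percepts are passed on to reasoning), both re-tuned from a scalar standing in for the agent's affective state. All names (Percept, AttentionFocus, arousal) and the adjustment formula are assumptions made for this example; they are not the thesis's actual implementation.

```python
# Illustrative sketch of a behavior's attention focus: a qualitative filter
# (relevant percept kinds) and a quantitative filter (how many percepts are
# cognitively processed), both modulated by the agent's affective state.

from dataclasses import dataclass

@dataclass
class Percept:
    kind: str        # e.g. "fire", "smoke", "temperature"
    salience: float  # how urgent/relevant the sensor layer rates this percept

class AttentionFocus:
    def __init__(self, relevant_kinds, base_capacity):
        self.relevant_kinds = set(relevant_kinds)  # qualitative focus
        self.base_capacity = base_capacity         # nominal quantitative focus
        self.capacity = base_capacity

    def adjust(self, arousal):
        """Continuously re-tune the quantitative focus from the affective
        state: a more aroused agent widens its attention window."""
        self.capacity = max(1, round(self.base_capacity * (0.5 + arousal)))

    def filter(self, percepts):
        """Keep only the percepts relevant to the running behavior, most
        salient first, up to the current capacity."""
        relevant = [p for p in percepts if p.kind in self.relevant_kinds]
        relevant.sort(key=lambda p: p.salience, reverse=True)
        return relevant[:self.capacity]

# Example: a fire-fighting behavior ignores everything except fire and smoke.
focus = AttentionFocus(relevant_kinds={"fire", "smoke"}, base_capacity=2)
focus.adjust(arousal=0.8)  # agitated affective state -> slightly wider window
sensed = [Percept("fire", 0.9), Percept("bird", 0.2), Percept("smoke", 0.7)]
print(focus.filter(sensed))  # only the fire and smoke percepts reach reasoning
```

In the fixed-focus configuration mentioned in the abstract, adjust would simply never be called, so the filter would keep a constant window regardless of the affective state.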
Abstract:
A.C.P. Rodrigues-Costa, D. Martins, N.V. Costa, and M.R.R. Pereira. 2011. Spray deposition on weeds of common bean crops. Cien. Inv. Agr. 38(3): 357-365. Weed control failure in common bean (Phaseolus vulgaris L.) production may be related to inappropriate herbicide application techniques. The purpose of this study, therefore, was to evaluate the amount of spray solution deposited on the weeds Bidens pilosa L. and Brachiaria plantaginea (Link) Hitch., both within and between rows of common beans. The experiment was arranged in a randomized block design with four replications. The following six spray nozzles were used: flat fan nozzles XR 110015 VS (150 L ha(-1)) and XR 11002 VS (200 L ha(-1)); cone nozzles TX VK 6 (150 L ha(-1)) and TX VK 8 (200 L ha(-1)); and twin flat fan nozzles TJ60 11002 VS (150 L ha(-1)) and TJ60 11002 VS (200 L ha(-1)). The results showed that loss of spray solution to the soil occurred mainly within the bean rows, and with high intensity when using a nozzle spraying 200 L ha(-1). At 30 days after sowing, the TX (150 L ha(-1)) nozzle was the only nozzle that produced deposits of less than 210.0 mu L g(-1) of dry mass. The spray nozzles showed good performance in the deposition of the spray solution on the weeds occurring both within and between the rows. However, for both species there was great variation in individual deposits depending on their location relative to the crop plants.
Abstract:
All around the world there are naturally occurring hydrocarbon deposits, consisting of oil and gas contained within rocks called reservoir rocks, generally sandstone or carbonate. These deposits are found under varying conditions of pressure and at depths from a few hundred to several thousand meters. In general, shallow reservoirs have a greater tendency to fracture, since they have a low fracture gradient, i.e., fractures form even with relatively low hydrostatic columns of fluid. These low-fracture-gradient areas are particularly common in onshore areas, like the Rio Grande do Norte basin. During well drilling, one of the phases most favorable to the occurrence of fractures is cementing, since the cement slurry used can have a density greater than the maximum allowed by the rock structure. Furthermore, in areas that are already naturally fractured, the use of regular cement slurries causes fluid loss into the formation, which may give rise to failed cementing jobs and formation damage. Commercially, there are alternatives for producing lightweight cement slurries, but these fall short either because of their enormous cost, or because the cement properties are not good enough for general application, being restricted to the specific operation for which the slurry was designed, or both. In this work a statistical design was used to determine the influence of three variables, defined as the calcium chloride concentration, the vermiculite concentration, and the nanosilica concentration, on the various properties of the cement. The use of vermiculite, a low-density ore present in large amounts in northeastern Brazil, as an extender for cementing slurries enabled the production of stable cements with a high water/cement ratio, excellent rheological properties, and low densities, which were set at 12.5 lb/gal, although lower densities could be achieved. Calcium chloride proved very useful as a gelling and thickening agent, and its use in combination with nanosilica has a great effect on the gel strength of the cement. Hydrothermal stability studies showed that the pastes were stable under these conditions, and mechanical strength tests showed values of up to around 10 MPa.
Abstract:
This work presents the design, simulation, and analysis of two optical interconnection networks for a Dataflow parallel computer architecture. To verify the performance of the optical interconnection networks on the Dataflow architecture, we analyzed the load balancing among the processors during the execution of parallel programs. Load balancing is a very important parameter because it is directly associated with the degree of dataflow parallelism. This article demonstrates that optical interconnection networks designed with simple optical devices can efficiently meet the dataflow requirements of a high-performance communication system.
Abstract:
The main goal of this work is to conduct a quantitative analysis of the mechanical stir casting process for obtaining particulate metal matrix composites. A combined route of stirring in the semi-solid state followed by stirring in the liquid state is proposed. A fractional factorial design was developed to investigate the influence and interactions of the factors time, rotation, initial fraction, and particle size on the incorporated fraction. The best incorporations were obtained with all factors at their high levels; very long stirring periods had no strong influence, and particle size and rotation were the most important factors for the incorporated fraction. Particle wetting occurs during stirring in the semi-solid state, highlighting the importance of the interactions between the particles and the globularized phase of the alloy. The role of the alloying element Mg as a wettability-promoting agent is discussed. The shear forces resulting from the stirring system are emphasized and understood as the effect of the rotation itself added to the propeller blade geometry.
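For readers unfamiliar with the technique, the short Python sketch below generates a generic two-level 2^(4-1) fractional factorial plan for the four factors named above (time, rotation, initial fraction, particle size), using the common generator D = ABC. The coded levels and the choice of generator are assumptions made for illustration; they are not the actual plan or settings used in the study.

```python
# Illustrative 2^(4-1) fractional factorial plan for the four stir-casting
# factors: a full two-level factorial in the first three factors, with the
# fourth factor generated as D = A*B*C (defining relation I = ABCD),
# giving 8 runs instead of the 16 of a full 2^4 design.
from itertools import product

factors = ["time", "rotation", "initial_fraction", "particle_size"]

runs = []
for a, b, c in product((-1, +1), repeat=3):
    d = a * b * c                      # particle_size aliased with the ABC interaction
    runs.append(dict(zip(factors, (a, b, c, d))))

for i, run in enumerate(runs, start=1):
    coded = "  ".join(f"{name}:{'+' if level > 0 else '-'}" for name, level in run.items())
    print(f"run {i}: {coded}")
```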