908 results for: Public interest based and private interest based systems
Abstract:
In January 1983 a group of US government, industry and university information specialists gathered at MIT to take stock of efforts to monitor, acquire, assess, and disseminate Japanese scientific and technical information (JSTI). It was agreed that these efforts were uncoordinated and poorly conceived, and that a clearer understanding of Japanese technical information systems and a clearer sense of their importance to end users were necessary. That meeting led to formal technology assessments, Congressional hearings, and legislation; it also helped stimulate several private initiatives in JSTI provision. Four years later there exist better coordinated and better conceived JSTI programs in both the public and private sectors, but there remains much room for improvement. This paper will recount their development and assess future directions.
Abstract:
The ‘public interest’, even if viewed with ambiguity or scepticism, has been one of the primary means by which the various professional roles of planners have been justified. Many objections to the concept have been advanced by writers in planning academia. Notwithstanding these, the ‘public interest’ continues to be mobilised to justify, defend or argue for planning interventions and reforms. This has led to arguments that planning will have to adopt and recognise some form of public interest in practice to legitimise itself. This paper explores current debates around public interest and social justice and advances a vision of the public interest informed by complexity theory. The empirical context of the paper is a poverty alleviation programme, the Kudumbashree project in Kerala, India.
Abstract:
Ruminant production is a vital part of the food industry but it raises environmental concerns, partly due to the associated methane outputs. Efficient methane mitigation and accurate estimation of emissions from ruminants require reliable prediction tools. Equations recommended by international organizations or scientific studies have been developed with animals fed conserved forages and concentrates, and should therefore be used with caution for grazing cattle. The aim of the current study was to develop prediction equations with animals fed fresh grass, so as to be more suitable for pasture-based systems and for animals at lower feeding levels. A study with 25 non-pregnant, non-lactating cows fed solely fresh-cut grass at maintenance energy level was performed over two consecutive grazing seasons. Grass of broad feeding quality, from eight swards with contrasting harvest dates, maturities, fertilisation and grass varieties, was offered. Cows were offered the experimental diets for at least 2 weeks before being housed in calorimetric chambers for 3 consecutive days, with feed intake measurements and total urine and faeces collections performed daily. Methane emissions were measured over the last 2 days. Prediction models were developed from 100 3-day averaged records. Internal validation of these equations, and of those recommended in the literature, was performed. The models currently used in greenhouse gas inventories under-estimated methane emissions from animals fed fresh-cut grass at maintenance, while the new models, using the same predictors, improved prediction accuracy. The error in predicted methane output decreased when grass nutrient, metabolisable energy and digestible organic matter concentrations were added as predictors to equations already containing dry matter or energy intakes, possibly because they describe feed digestibility and the type of energy-supplying nutrients more effectively. Predictions based on readily available farm-level data, such as liveweight and grass nutrient concentrations, were also generated and performed satisfactorily. The new models may be recommended for predicting methane emissions from grazing cattle at maintenance or low feeding levels.
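The abstract does not reproduce the prediction equations themselves; purely as an illustrative sketch of the kind of model described (a multiple linear regression of daily methane output on intake and grass-composition predictors), the Python fragment below fits such an equation by least squares. The variable names and the toy records are assumptions, not the study's data.

# Illustrative sketch only: a multiple linear regression of the form used for
# methane prediction equations (predictors, data and coefficients are hypothetical).
import numpy as np

# Toy records: dry matter intake (kg/d), grass crude protein (g/kg DM),
# metabolisable energy (MJ/kg DM) -> methane output (g/d).
X = np.array([
    [8.2, 150.0, 11.2],
    [9.1, 180.0, 11.8],
    [7.5, 140.0, 10.9],
    [10.3, 200.0, 12.1],
    [8.8, 165.0, 11.5],
])
y = np.array([170.0, 195.0, 160.0, 215.0, 185.0])

# Add an intercept column and solve CH4 = b0 + b1*DMI + b2*CP + b3*ME by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients:", coef)

# Root mean square prediction error on the fitted records (internal validation).
rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
print("RMSE (g CH4/d):", rmse)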
Abstract:
Component-based software engineering has recently emerged as a promising solution for the development of system-level software. Unfortunately, current approaches are limited to specific platforms and domains. This lack of generality is particularly problematic as it prevents knowledge sharing and generally drives development costs up. In the past, we developed a generic approach to component-based software engineering for system-level software called OpenCom. In this paper, we present OpenComL, an instantiation of OpenCom for Linux environments, and show how it can be profiled to meet the needs of a range of system-level software in Linux environments. To demonstrate this, we apply it to constructing a programmable router platform and a middleware for parallel environments.
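OpenCom's real API is not given in the abstract; as a loose, language-neutral illustration of the component style it describes (components exposing provided interfaces and required receptacles that are bound at run time), a minimal sketch could look like the following. All names (Component, Receptacle, bind) are hypothetical and are not the OpenCom/OpenComL API.

# Hypothetical sketch of a component model with explicit interfaces and
# receptacles, in the spirit of component-based system software.
class Receptacle:
    """A named dependency that must be bound to another component's interface."""
    def __init__(self, name):
        self.name = name
        self.target = None

class Component:
    def __init__(self, name, provides=(), requires=()):
        self.name = name
        self.provides = set(provides)                       # interface names offered
        self.receptacles = {r: Receptacle(r) for r in requires}

def bind(client, receptacle_name, provider, interface_name):
    """Bind a client's receptacle to a provider's interface at run time."""
    if interface_name not in provider.provides:
        raise ValueError(f"{provider.name} does not provide {interface_name}")
    client.receptacles[receptacle_name].target = (provider, interface_name)

# Example: a forwarding component in a programmable router bound to a classifier.
classifier = Component("classifier", provides=["IClassify"])
forwarder = Component("forwarder", requires=["classify"])
bind(forwarder, "classify", classifier, "IClassify")
print(forwarder.receptacles["classify"].target[0].name)  # -> classifier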
Abstract:
Of the ways in which agent behaviour can be regulated in a multiagent system, electronic contracting – based on an explicit representation of the different parties' responsibilities and the agreement of all parties to them – has significant potential for modern industrial applications. Based on this assumption, the CONTRACT project aims to develop and apply electronic contracting and contract-based monitoring and verification techniques in real-world applications. This paper presents results from the initial phase of the project, which focused on requirements elicitation and analysis. Specifically, we survey four use cases from diverse industrial applications, examine how they can benefit from an agent-based electronic contracting infrastructure, and outline the technical requirements that would be placed on such an infrastructure. We present the designed CONTRACT architecture and describe how it may fulfil these requirements. In addition to motivating our work on the contract-based infrastructure, the paper aims to provide a much needed community resource in terms of the use cases themselves and to provide a clear commercial context for the development of work on contract-based systems.
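The CONTRACT infrastructure itself is not specified in the abstract; purely as an illustration of what contract-based monitoring means, the sketch below represents a contract as a set of obligations on named parties and checks observed agent events against them. The class and field names are invented for this example.

# Hypothetical sketch of contract-based monitoring: a contract is a list of
# obligations, and a monitor checks observed (party, action, time) events against them.
from dataclasses import dataclass, field

@dataclass
class Obligation:
    party: str        # the agent responsible for the obligation
    action: str       # the action that must be performed
    deadline: float   # time by which the action must occur

@dataclass
class Contract:
    parties: list
    obligations: list = field(default_factory=list)

def monitor(contract, events):
    """Return the obligations not discharged by any observed event."""
    violations = []
    for ob in contract.obligations:
        met = any(p == ob.party and a == ob.action and t <= ob.deadline
                  for (p, a, t) in events)
        if not met:
            violations.append(ob)
    return violations

# Example: a supplier must deliver by t=10; only a payment event is observed.
c = Contract(parties=["supplier", "buyer"],
             obligations=[Obligation("supplier", "deliver", 10.0)])
print(monitor(c, [("buyer", "pay", 5.0)]))  # -> the unmet delivery obligation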
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for reducing design complexity, but brings new challenges for the test of the final circuit. Access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that need to be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and the integration of test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes an efficient, yet fine-grained, search of the huge design space of a reuse-based environment possible. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for the test of the cores embedded in systems that use this communication platform. A power-aware test scheduling algorithm that exploits the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated for a number of system configurations, such as different positions of the cores in the network, power consumption constraints, and the number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, while area and pin overhead are kept very low. In this manuscript, the main problems in the test of core-based systems are first identified and the current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC'02 SoC Test Benchmarks and compared to other test planning methods from the literature. This comparison confirms the efficiency of the proposed methods.
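The thesis' actual scheduling algorithm is not given in the abstract; as a rough illustration of power-aware test scheduling in general, the sketch below packs core tests greedily (longest first) into a timeline so that the summed test power never exceeds a budget. The core data and the simple additive power model are assumptions made for this example, not the thesis' method.

# Hypothetical greedy power-constrained test scheduler: longest tests first,
# each placed at the earliest start time where the power budget is respected.
def schedule(cores, power_budget):
    """cores: list of (name, test_time, test_power); returns (start times, makespan)."""
    timeline = []   # scheduled tests as (start, end, power, name)
    plan = {}
    for name, t, p in sorted(cores, key=lambda c: -c[1]):
        start = 0.0
        while True:
            # power already drawn by tests overlapping [start, start + t)
            overlapping = [(s, e, pw) for s, e, pw, _ in timeline if s < start + t and e > start]
            if sum(pw for _, _, pw in overlapping) + p <= power_budget:
                break
            # jump to the next point where an overlapping test finishes
            start = min(e for _, e, _ in overlapping)
        timeline.append((start, start + t, p, name))
        plan[name] = start
    makespan = max(e for _, e, _, _ in timeline)
    return plan, makespan

cores = [("cpu", 120, 40), ("dsp", 80, 30), ("mem", 60, 50), ("io", 40, 20)]
print(schedule(cores, power_budget=80))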
Abstract:
The question of the crowding-out of private investment by public expenditure, public investment in particular, in the Brazilian economy has been discussed more in ideological terms than on empirical grounds. The present paper tries to avoid the limitations of previous studies by estimating an equation for private investment which makes it possible to evaluate the effect of economic policies on private investment. The private investment equation was deduced by modifying the optimal flexible accelerator model (OFAM), incorporating some channels through which public expenditure influences private investment. The OFAM consists in adding adjustment costs to the neoclassical theory of investment. The investment function deduced is quite general and has the following explanatory variables: relative prices (user cost of capital/input prices ratios), real interest rates, real product, public expenditures and the lagged private stock of capital. The model was estimated on data for the private manufacturing industry. The procedure adopted in estimating the model was to begin with a model as general as possible, then apply restrictions to the model's parameters and test their statistical significance. Complete diagnostic testing was also carried out in order to test the stability of the estimated equations. This procedure avoids the shortcomings of estimating a model with a priori restrictions on its parameters, which may lead to model misspecification. The main findings of the present study were: the increase in public expenditure, at least in the long run, has in general a positive expectation effect on private investment greater than its crowding-out effect owing to the simultaneous rise in interest rates; a change in economic policy, such as that of the Geisel administration, may have an important effect on private investment; and relative prices are relevant in determining the level of the desired stock of capital and of private investment.
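The abstract lists the explanatory variables but not the functional form of the estimated equation; purely as an illustrative reconstruction, one linear specification consistent with that list could read as follows, where the coefficients, the error term and the single-lag structure are assumptions rather than the paper's actual equation.

% Illustrative specification only; coefficients and lag structure are assumed.
\[
I^{p}_{t} = \alpha_{0}
  + \alpha_{1}\,\frac{c_{t}}{p_{t}}   % relative prices: user cost of capital / input prices
  + \alpha_{2}\, r_{t}                % real interest rate
  + \alpha_{3}\, Y_{t}                % real product
  + \alpha_{4}\, G_{t}                % public expenditure
  + \alpha_{5}\, K^{p}_{t-1}          % lagged private stock of capital
  + \varepsilon_{t}
\]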
Abstract:
Currently, there are different definitions of fuzzy implications accepted in the literature. From a theoretical point of view, this lack of consensus shows that there is disagreement about the real meaning of "logical implication" in the Boolean and fuzzy contexts. From a practical point of view, it creates doubt about which "implication operators" software engineers should consider when implementing a Fuzzy Rule-Based System (FRBS). A poor choice of these operators can result in FRBSs that are less accurate and less appropriate to their application domains. One way around this situation is to understand the fuzzy logical connectives better, which requires knowing which properties such connectives can satisfy. Therefore, in order to contribute to the meaning of fuzzy implication and to the implementation of more appropriate FRBSs, several Boolean laws have been generalised and studied as equations or inequations in fuzzy logics. Such generalisations are called Boolean-like laws, and they do not generally hold in every fuzzy semantics. In this scenario, this dissertation presents an investigation of the sufficient and necessary conditions under which three Boolean-like laws — y ≤ I(x, y), I(x, I(y, x)) = 1 and I(x, I(y, z)) = I(I(x, y), I(x, z)) — remain valid in the fuzzy context, considering six classes of fuzzy implications and implications generated by automorphisms. Furthermore, still with the aim of implementing more appropriate FRBSs, we propose an extension of them.
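As a concrete illustration of what checking such Boolean-like laws involves, the sketch below tests the three laws numerically for the Łukasiewicz implication I(x, y) = min(1, 1 − x + y) over a grid of values; the choice of this particular implication and the grid-based check are mine, not the dissertation's analytical treatment, and the output simply reports which laws hold for that operator.

# Check three Boolean-like laws numerically for the Lukasiewicz implication
# I(x, y) = min(1, 1 - x + y); the grid-based check is illustrative only.
import itertools

def I(x, y):
    return min(1.0, 1.0 - x + y)

grid = [i / 10 for i in range(11)]
eps = 1e-9

law1 = all(y <= I(x, y) + eps for x, y in itertools.product(grid, repeat=2))
law2 = all(abs(I(x, I(y, x)) - 1.0) < eps for x, y in itertools.product(grid, repeat=2))
law3 = all(abs(I(x, I(y, z)) - I(I(x, y), I(x, z))) < eps
           for x, y, z in itertools.product(grid, repeat=3))

print("y <= I(x, y)                      :", law1)
print("I(x, I(y, x)) = 1                 :", law2)
print("I(x, I(y, z)) = I(I(x,y), I(x,z)) :", law3)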
Abstract:
The aim of this thesis was to apply the techniques of the atomic force microscope (AFM) to biological samples, namely lipid-based systems. To this end, several systems with biological relevance based on self-assembly, such as a solid-supported membrane (SSM) based sensor for transport proteins, a bilayer of the natural lipid extract from an archaebacterium, and synaptic vesicles, were investigated with the AFM. For the characterization of transport proteins with SSM sensors, proteoliposomes containing the analyte (transport protein) are adsorbed. However, the forces governing bilayer-bilayer interactions in solution should be repulsive under physiological conditions. I investigated the nature of the interaction forces with AFM force spectroscopy by mimicking the adsorbing proteoliposome with a cantilever tip functionalized with charged alkane thiols. The nature of the interaction is indeed repulsive, but the lipid layers assemble in stacks on the SSM, which expose their unfavourable edges to the medium. I propose a model by which the proteoliposomes interact with these edges and fuse with the bilayer stacks, thereby forming a uniform layer on the SSM. Furthermore, I characterized free-standing bilayers from a synthetic phospholipid with a phase transition at 41°C and from a natural lipid extract of the archaebacterium Methanococcus jannaschii. The synthetic lipid is in the gel phase at room temperature and changes to the fluid phase when heated to 50°C. The bilayer of the lipid extract shows no phase transition when heated from room temperature to the growth temperature (~50°C) of the archaeon. Synaptic vesicles are the containers of neurotransmitter in nerve cells, and the synapsins are a family of extrinsic membrane proteins that are associated with them and are believed to control the synaptic vesicle cycle. I used AFM imaging and force spectroscopy together with dynamic light scattering to investigate the influence of synapsin I on synaptic vesicles. To this end I used native, untreated synaptic vesicles and compared them to synapsin-depleted synaptic vesicles. Synapsin-depleted vesicles were larger in size and showed a higher tendency to aggregate than native vesicles, although their mechanical properties were alike. I also measured the aggregation kinetics of synaptic vesicles induced by synapsin I and found that the addition of synapsin I promotes a rapid aggregation of synaptic vesicles. The data indicate that synapsin I affects the stability and the aggregation state of synaptic vesicles, and confirm the physiological role of synapsins in the assembly and regulation of synaptic vesicle pools within nerve cells.
Abstract:
This paper reviews the relationship between public sector investment and private sector investment through government expenditures financed by government bonds in the Japanese economy. This study hypothesizes that deficit financing by bond issues does not crowd out private sector investment, and that this financing method may instead crowd it in. Thus the government increases bond issues and sells them in the domestic and international financial markets. This method does not affect interest rates because they are insensitive to government expenditures and, owing to globalization and the integration of financial markets, depend more on interest rate levels in the international financial market than on those in the domestic market.
Abstract:
The confluence of three-dimensional (3D) virtual worlds with social networks imposes on software agents, in addition to conversational functions, the same behaviours as those common to human-driven avatars. In this paper, we explore the possibilities of the use of metabots (metaverse robots) with motion capabilities in complex virtual 3D worlds and we put forward a learning model based on the techniques used in evolutionary computation for optimizing the fuzzy controllers which will subsequently be used by metabots for moving around a virtual environment.
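The paper's actual learning model is not detailed in the abstract; purely as an illustration of evolving fuzzy-controller parameters with evolutionary computation, the sketch below runs a minimal evolutionary loop over the parameters of a tiny two-rule fuzzy controller mapping distance-to-target to forward speed. The encoding, fitness function and all numeric values are invented for this example.

# Hypothetical sketch: evolving the parameters of a tiny fuzzy controller
# (membership-function centres and rule outputs) with a simple evolutionary loop.
import random

def controller(params, distance):
    """Two fuzzy rules, 'near -> slow' and 'far -> fast', with evolved parameters."""
    c_near, c_far, speed_near, speed_far = params
    mu_near = max(0.0, 1.0 - abs(distance - c_near) / 5.0)   # triangular memberships
    mu_far = max(0.0, 1.0 - abs(distance - c_far) / 5.0)
    if mu_near + mu_far == 0.0:
        return 0.0
    return (mu_near * speed_near + mu_far * speed_far) / (mu_near + mu_far)

def fitness(params):
    """Reward controllers that slow down when close and speed up when far."""
    targets = [(0.5, 0.1), (2.0, 0.4), (6.0, 1.0), (9.0, 1.0)]   # (distance, desired speed)
    return -sum((controller(params, d) - s) ** 2 for d, s in targets)

random.seed(0)
population = [[random.uniform(0, 10) for _ in range(4)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]
    # Offspring are mutated copies of the best individuals.
    population = parents + [[g + random.gauss(0, 0.3) for g in random.choice(parents)]
                            for _ in range(15)]

best = max(population, key=fitness)
print("best parameters:", [round(g, 2) for g in best], "fitness:", round(fitness(best), 3))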