930 results for Building Systems
Abstract:
Within the information systems field, the task of conceptual modeling involves building a representation of selected phenomena in some domain. High-quality conceptual-modeling work is important because it facilitates early detection and correction of system development errors. It also plays an increasingly important role in activities like business process reengineering and documentation of best-practice data and process models in enterprise resource planning systems. Yet little research has been undertaken on many aspects of conceptual modeling. In this paper, we propose a framework to motivate research that addresses the following fundamental question: How can we model the world to better facilitate our developing, implementing, using, and maintaining more valuable information systems? The framework comprises four elements: conceptual-modeling grammars, conceptual-modeling methods, conceptual-modeling scripts, and conceptual-modeling contexts. We provide examples of the types of research that have already been undertaken on each element and illustrate research opportunities that exist.
Abstract:
Poultry can be managed under different feeding systems, depending on the husbandry skills and the feed available. These systems include the following: (1) a complete dry feed offered as a mash ad libitum; (2) the same feed offered as pellets or crumbles ad libitum; (3) a complete feed with added whole grain; (4) a complete wet feed given once or twice a day; (5) a complete feed offered on a restricted basis; (6) choice feeding. Of all these, an interesting alternative to offering complete diets is choice feeding, which can be applied at both small and large commercial scales. Under choice feeding, or free-choice feeding, birds are usually offered a choice between three types of feedstuffs: (a) an energy source (e.g. maize, rice bran, sorghum or wheat); (b) a protein source (e.g. soyabean meal, meat meal, fish meal or coconut meal) plus vitamins and minerals; and (c), in the case of laying hens, calcium in granular form (i.e. oyster-shell grit). This system differs from the modern commercial practice of offering a complete diet comprising energy and protein sources, ground and mixed together. Under the complete diet system, birds are able to exercise mainly their appetite for energy. When the environmental temperature varies, the birds either over- or under-consume protein and calcium. The basic principle behind practising choice feeding with laying hens is that individual hens are able to select from the various feed ingredients on offer and compose their own diet, according to their actual needs and production capacity. A choice-feeding system is of particular importance to small poultry producers in developing countries, such as Indonesia, because it can substantially reduce the cost of feed. The system is flexible and can be constructed in such a way that the various needs of a flock of different breeds, including village chickens, under different climates can be met.
The system also offers a more effective way to use home-produced grain, such as maize, and by-products, such as rice bran, in developing countries. Because oyster-shell grit is readily available in developing countries at lower cost than limestone, the use of cheaper oyster-shell grit can further benefit small-holders in these countries. These benefits apart, simpler equipment suffices when designing and building a feed mixer on the farm, and transport costs are lower. If whole (unground) grain is used, the intake of which is accompanied by increased efficiency of feed utilisation, the costs of grinding, mixing and many of the handling procedures associated with mash and pellet preparation are eliminated. The choice feedstuffs can all be offered in the current feed distribution systems, either by mixing the ingredients first or by using a bulk bin divided into three compartments.
Abstract:
The ability to foresee how the behaviour of a system arises from the interaction of its components over time - i.e. its dynamic complexity - is seen as an important ability for taking effective decisions in our turbulent world. Dynamic complexity frequently emerges from interrelated simple structures, such as stocks and flows, feedbacks and delays (Forrester, 1961). Common sense assumes an intuitive understanding of their dynamic behaviour. However, recent research has pointed to a persistent and systematic error in people's understanding of these building blocks of complex systems. This paper describes an empirical study concerning the native ability to understand systems thinking concepts. Two different groups - one academic, the other professional - were given four tasks proposed by Sweeney and Sterman (2000) and Sterman (2002). The results confirm a poor intuitive understanding of the basic systems concepts, even when subjects have a background in mathematics and the sciences.
Abstract:
The adoption of faster modes of transportation (mainly the private car) has profoundly changed the spatial organisation of cities. The increase in distance covered, due to increased travel speed and to urban sprawl, leads to an increase in energy consumption, the transportation sector being a major consumer, responsible for 61.5% of total world oil consumption and 31.6% of final energy consumption in the EU-27 (2007). Due to unsustainable transportation conditions, many cities suffer from congestion and various other traffic problems. Such situations worsen when solutions are sought mostly in the development of new infrastructure for motorized modes of transportation and the construction of car parking structures. The bicycle, considered the most efficient of all modes of transportation including walking, is a travel mode that can be adopted in most cities, contributing to urban sustainability given its environmental, economic and social advantages. In many nations, a large number of policy initiatives have focused on discouraging the use of private cars and encouraging the use of sustainable modes of transportation, such as public transportation and bicycling. Given the importance of developing initiatives that favour the use of the bicycle as an urban transportation mode, an analysis of city suitability, including the distances and slopes of the street network, is crucial in order to help decision-makers plan the city for bicycling. In this research, Geographical Information Systems (GIS) technology was used for this purpose, and some results are presented concerning the city of Coimbra.
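The kind of slope-and-distance suitability screening described in this abstract could be sketched as follows. The segment data, the 5% gradient cut-off, and the function names are illustrative assumptions, not values or methods taken from the study:

```python
# Hypothetical sample of street segments: (length in metres, elevation gain in metres).
# In a GIS workflow these attributes would come from a street network layer.
segments = [(250.0, 2.0), (180.0, 9.5), (400.0, 1.0), (120.0, 7.2)]

def slope_percent(length_m, rise_m):
    """Average gradient of a segment, as a percentage."""
    return 100.0 * rise_m / length_m

def bikeable(length_m, rise_m, max_slope=5.0):
    """A segment counts as bikeable if its gradient stays below a threshold.
    The 5% cut-off is an illustrative assumption, not a value from the paper."""
    return slope_percent(length_m, rise_m) <= max_slope

suitable = [s for s in segments if bikeable(*s)]
share = sum(l for l, _ in suitable) / sum(l for l, _ in segments)
print(f"Bikeable share of network length: {share:.1%}")
```

A real analysis would also weigh distances between origins and destinations, but even this toy screening shows how slope data turns a street network into a decision-support layer.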
Abstract:
We are working at the confluence of knowledge management, organizational memory and emergent knowledge, viewed through the lens of complex adaptive systems. To be fundamentally sustainable, organizations need to manage the ambidexterity of day-to-day work and innovation. An organization is an entity of a systemic nature, composed of groups of people who interact to achieve common objectives, which makes it necessary to capture, store and share the knowledge generated by those interactions, whether at the intra-organizational or the inter-organizational level. Organizations hold this organizational memory of knowledge supported on information technology and systems. Each organization, especially in times of uncertainty and radical change, needs timely and appropriately sized knowledge, both tacit and explicit, to meet the demands of its environment. This sizing is a learning process resulting from the interaction that emerges from the relationship between tacit and explicit knowledge, which we frame within an approach of Complex Adaptive Systems. Using complex adaptive systems to build this emerging interdependent relationship will produce emergent knowledge that will improve the organization's unique development.
Abstract:
This paper proposes a new methodology to reduce the probability of occurrence of states that cause load curtailment, while minimizing the costs involved in achieving that reduction. The methodology is supported by a hybrid method based on Fuzzy Sets and Monte Carlo Simulation to capture both the randomness and the fuzziness of the component outage parameters of the transmission power system. The novelty of this research work consists in proposing two fundamental approaches: 1) a global steady approach, which builds the model of a faulted transmission power system aiming at minimizing the unavailability corresponding to each faulted component; this results in the minimal global investment cost for the faulted components in a sample of system states of the transmission network; 2) a dynamic iterative approach that checks individually the effect of each investment on the transmission network. A case study using the IEEE 24-bus Reliability Test System (RTS-1996) is presented to illustrate in detail the application of the proposed methodology.
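The idea of combining fuzzy outage parameters with Monte Carlo sampling can be illustrated with a minimal sketch. The two-component system, the triangular unavailability numbers, and the function names are illustrative assumptions and do not reproduce the paper's RTS-96 model:

```python
import random

random.seed(1)

# Fuzzy (triangular) unavailability of each component: (lower, modal, upper).
# Values are illustrative, not the RTS-96 parameters used in the paper.
components = [(0.01, 0.02, 0.04), (0.02, 0.03, 0.05)]

def mc_failure_prob(unavails, trials=20000):
    """Crude Monte Carlo: probability that every component is down at once
    (a stand-in for a system state causing load curtailment)."""
    hits = 0
    for _ in range(trials):
        if all(random.random() < u for u in unavails):
            hits += 1
    return hits / trials

# Propagate the fuzziness by re-running the random simulation at each vertex
# of the triangular numbers, yielding a fuzzy estimate of the probability.
for label, idx in (("lower", 0), ("modal", 1), ("upper", 2)):
    p = mc_failure_prob([c[idx] for c in components])
    print(f"{label}: {p:.5f}")
```

The sketch keeps the two ingredients separate: the Monte Carlo loop captures randomness of outages, while evaluating the model across the fuzzy interval captures parameter fuzziness.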
Abstract:
A number of characteristics are boosting the eagerness to extend Ethernet to also cover factory-floor distributed real-time applications. Full-duplex links, non-blocking and priority-based switching, and bandwidth availability, just to mention a few, are characteristics upon which that eagerness is building up. But will Ethernet technologies really manage to replace traditional Fieldbus networks? To this question, Fieldbus fundamentalists often argue that the two technologies are not comparable. In fact, Ethernet technology, by itself, does not include features above the lower layers of the OSI communication model. Where are the higher layers that permit building real industrial applications? And, taking for granted that they are available, what is the impact of those protocols, mechanisms and application models on the overall performance of Ethernet-based distributed factory-floor applications? In this paper we provide some contributions that may pave the way towards reasonable answers to these issues.
Abstract:
A number of characteristics are boosting the eagerness to extend Ethernet to also cover factory-floor distributed real-time applications. Full-duplex links, non-blocking and priority-based switching, and bandwidth availability, just to mention a few, are characteristics upon which that eagerness is building up. But will Ethernet technologies really manage to replace traditional Fieldbus networks? Ethernet technology, by itself, does not include features above the lower layers of the OSI communication model. In the past few years, a considerable amount of work has been devoted to the timing analysis of Ethernet-based technologies. The majority of those works, however, are restricted to the analysis of sub-sets of the overall computing and communication system, and thus do not address timeliness at a holistic level. To this end, we are addressing a few inter-linked research topics with the purpose of setting up a framework for the development of tools suitable for extracting temporal properties of Commercial-Off-The-Shelf (COTS) Ethernet-based factory-floor distributed systems. This framework is being applied to a specific COTS technology, Ethernet/IP. In this paper, we reason about the modelling and simulation of Ethernet/IP-based systems, and about the use of statistical analysis techniques to provide usable results. Discrete event simulation models of a distributed system can be a powerful tool for the timeliness evaluation of the overall system, but particular care must be taken with the results provided by traditional statistical analysis techniques.
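One standard reason for the care this abstract urges is that simulation outputs are autocorrelated, so naive confidence intervals are too narrow; the batch-means technique is a common remedy. The sketch below illustrates that general technique on synthetic data (an AR(1) series standing in for simulated frame delays); it is not the analysis method of the paper:

```python
import math
import random

random.seed(42)

# Illustrative autocorrelated output series (e.g. simulated delays, in ms):
# an AR(1) process stands in for a simulator's raw observations.
x = 0.0
delays = []
for _ in range(10000):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    delays.append(5.0 + x)

def batch_means_ci(data, n_batches=20, z=1.96):
    """Batch-means estimate of the mean with an approximate 95% CI.
    Averaging within batches weakens the autocorrelation that would
    otherwise make the naive variance estimate far too optimistic."""
    size = len(data) // n_batches
    means = [sum(data[i * size:(i + 1) * size]) / size for i in range(n_batches)]
    grand = sum(means) / n_batches
    var = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    half = z * math.sqrt(var / n_batches)
    return grand, half

mean, half = batch_means_ci(delays)
print(f"mean delay approx. {mean:.2f} +/- {half:.2f} ms")
```

Treating the 10000 raw samples as independent would shrink the interval by roughly a factor of four here, which is exactly the kind of over-optimistic result the abstract warns about.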
Abstract:
Building reliable real-time applications on top of commercial off-the-shelf (COTS) components is not a straightforward task. Thus, it is essential to provide a simple and transparent programming model, in order to abstract programmers from the low-level implementation details of distribution and replication. However, the recent trend of incorporating pre-emptive multitasking applications in reliable real-time systems inherently increases their complexity. It is therefore important to provide a transparent programming model that enables pre-emptive multitasking applications to be implemented without having to deal simultaneously with system requirements and with distribution and replication issues. The distributed embedded architecture using COTS components (DEAR-COTS) architecture has been previously proposed as an architecture to support real-time and reliable distributed computer-controlled systems (DCCS) using COTS components. Within the DEAR-COTS architecture, the hard real-time subsystem provides a framework for the development of reliable real-time applications, which are the core of DCCS applications. This paper presents the proposed framework and demonstrates how it can be used to support the transparent replication of software components.
Abstract:
Managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). The physical parameters of the data center (such as power, temperature, pressure and humidity) are tightly coupled with computations, even more so in upcoming data centers, where the location of workloads can vary substantially due, for example, to workloads being moved in a cloud infrastructure hosted in the data center. In this paper, we describe a data collection and distribution architecture that enables gathering physical parameters of a large data center at very high temporal and spatial resolution of the sensor measurements. We think this is an important characteristic for enabling more accurate heat-flow models of the data center and, with them, opportunities to optimize energy consumption. Having a high-resolution picture of the data center conditions also makes it possible to minimize local hotspots, to perform more accurate predictive maintenance (pending failures in cooling and other infrastructure equipment can be detected more promptly) and to produce more accurate billing. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally, we show the results of a preliminary study of a typical data center radio environment.
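A collection-and-distribution layer of this kind typically wraps each reading in a timestamped, location-keyed message and publishes it on a topic hierarchy. The sketch below is a generic illustration of that pattern; the field names and topic convention are assumptions, not the schema defined in the paper:

```python
import json
import time

def make_reading(rack, sensor_type, value, unit):
    """Hypothetical reading message for a collection layer."""
    return {
        "rack": rack,          # spatial key: which rack/position emitted it
        "type": sensor_type,   # e.g. "temperature", "humidity", "power"
        "value": value,
        "unit": unit,
        "ts": time.time(),     # per-reading timestamp for temporal resolution
    }

def topic(msg):
    """Topic convention letting consumers subscribe per rack or per quantity."""
    return f"dc/{msg['rack']}/{msg['type']}"

msg = make_reading("row3-rack12", "temperature", 24.6, "celsius")
print(topic(msg), json.dumps(msg))
```

Keying messages by both location and quantity is what lets downstream consumers, such as heat-flow modelling or billing, filter only the streams they need.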
Abstract:
Paper presented at the 4th Annual ICPA - International Conference on Public Administration, "Building bridges to the future: leadership and collaboration in public administration", at the University of Minnesota, United States, 24-26 September 2008
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain a Master's degree in Electrical Engineering and Computer Science
Abstract:
Crowdsourcing innovation intermediaries are organizations that mediate the communication and relationship between companies that aspire to solve a problem, or to take advantage of a business opportunity, and a crowd that is willing to contribute ideas based on its knowledge, experience and wisdom. A significant part of the activity of these intermediaries is carried out through a web platform that takes advantage of web 2.0 tools to implement its capabilities. Ontologies are thus an appropriate strategy to represent the knowledge inherent to this activity and therefore to achieve interoperability between machines and systems. In this paper we present an ontology roadmap for developing a crowdsourcing innovation ontology of the intermediation process. We start with a literature review on ontology building, analysing and comparing methodologies that propose development from scratch with those that propose reusing other ontologies, and present the criteria for selecting a methodology. We also review the enterprise and innovation ontologies known in the literature. Finally, some conclusions are drawn and the roadmap for building a crowdsourcing innovation intermediary ontology is presented.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.