952 results for Knowledge systems
Abstract:
In Peru, a megadiverse country, a resource-intensive development model collides with the interest of conserving biodiversity. Peruvian biodiversity experts have developed different lines of argumentation as to how to integrate conservation into the sustainable development of their country. Applying grounded theory, I define five groups of conservation narratives based on the analysis of 72 qualitative interviews with experts working in areas of biodiversity conservation. I have labeled them: biodiversity protectionists, biodiversity traditionalists, biodiversity localists, biodiversity pragmatists, and biodiversity capitalists. Each group is discussed in terms of what it says about biodiversity in relation to human life, valuation and knowledge systems, participation and leadership, and the substitutability of natural capital, as well as its predominant political strategy. In a second step, a comparative analysis of the dominant and diverging political perspectives is made. I argue that by deconstructing underlying premises and ideologies, common ground and possible opportunities for collaboration can be identified. Moreover, beyond serving as a discussion scaffold to organize conservation debates in Peru, the presented results demonstrate how the terms biodiversity and sustainability are operationalized in conservation narratives.
Abstract:
In database applications, access control security layers are mostly developed with tools provided by database management system (DBMS) vendors and deployed on the same servers that contain the data to be protected. This solution conveys several drawbacks, among which we emphasize: 1) if policies are complex, their enforcement can degrade the performance of database servers; 2) when modifications to the established policies imply modifications to the business logic (usually deployed at the client side), there is no alternative but to modify the business logic accordingly; and 3) malicious users can issue CRUD expressions systematically against the DBMS hoping to identify a security gap. To overcome these drawbacks, in this paper we propose an access control stack with three characteristics: most of the mechanisms are deployed at the client side; whenever security policies evolve, the security mechanisms are automatically updated at runtime; and client-side applications do not handle CRUD expressions directly. We also present an implementation of the proposed stack to prove its feasibility. This new approach to enforcing access control in database applications is expected to contribute positively to the state of the art in the field.
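A minimal sketch of the client-side idea, with invented names (not the authors' actual stack): applications call typed methods on a gateway instead of issuing raw CRUD expressions, and the gateway's policy object can be swapped at runtime, so evolving policies take effect without touching the business logic.

```python
class Policy:
    """Maps a role to the set of CRUD operations it may perform (hypothetical model)."""
    def __init__(self, permissions):
        self.permissions = permissions  # e.g. {"clerk": {"select"}}

    def allows(self, role, operation):
        return operation in self.permissions.get(role, set())


class SecureGateway:
    """Client-side layer: applications never see CRUD expressions directly."""
    def __init__(self, policy, table):
        self._policy = policy   # swapped atomically when policies evolve
        self._table = table     # stands in for the real DBMS

    def update_policy(self, policy):
        # Runtime policy update: no change to the calling business logic.
        self._policy = policy

    def select(self, role, predicate):
        if not self._policy.allows(role, "select"):
            raise PermissionError(f"role {role!r} may not select")
        return [row for row in self._table if predicate(row)]

    def update(self, role, predicate, changes):
        if not self._policy.allows(role, "update"):
            raise PermissionError(f"role {role!r} may not update")
        count = 0
        for row in self._table:
            if predicate(row):
                row.update(changes)
                count += 1
        return count
```

A clerk that may only select is blocked from updating until the policy object is replaced, which illustrates drawback 2) being resolved at runtime.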
Abstract:
To store, update and retrieve data from database management systems (DBMS), software architects use tools, such as call-level interfaces (CLI), which provide standard functionalities to interact with a DBMS. However, the emergence of the NoSQL paradigm, and particularly of new NoSQL DBMS providers, leads to situations where some of the standard functionalities provided by CLI are not supported, very often because of their distance from the relational model or because of design constraints. As such, when a system architect needs to evolve, namely from a relational DBMS to a NoSQL DBMS, he must overcome the difficulties conveyed by the features the NoSQL DBMS does not provide. Choosing the wrong NoSQL DBMS risks major issues with components that request unsupported features. This paper focuses on how to deploy features that are not commonly supported by NoSQL DBMS (such as stored procedures, transactions, save points and interactions with local memory structures) by implementing them in standard CLI.
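One of the missing features named above, save points, can be emulated on the client side over a store that offers only put/get. The sketch below uses an in-memory dict and invented names; it is an illustration of the general technique (snapshotting state in local memory structures), not the paper's implementation.

```python
import copy

class SavepointWrapper:
    """Emulates save points over a key-value store that lacks them (toy sketch)."""
    def __init__(self, store):
        self._store = store       # the NoSQL "database": a plain dict here
        self._savepoints = []     # stack of (name, snapshot) pairs

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def savepoint(self, name):
        # Snapshot the current state into local memory.
        self._savepoints.append((name, copy.deepcopy(self._store)))

    def rollback_to(self, name):
        # Pop save points until the named one, then restore its snapshot.
        while self._savepoints:
            sp_name, snapshot = self._savepoints.pop()
            if sp_name == name:
                self._store.clear()
                self._store.update(snapshot)
                return
        raise KeyError(f"no savepoint named {name!r}")
```

A real CLI extension would snapshot far more selectively (e.g. only touched keys), but the control flow, save, mutate, roll back, is the same.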
Abstract:
A picture is worth a thousand words. We all know that. Then why do our development tools so obstinately show mainly text? Even when visualizations do make it into our tools, they typically do not make it past the periphery. Something is deeply wrong. We argue that visualizations must become pervasive in software development, and that to accommodate this goal, integrated development environments must change significantly.
Abstract:
Visualisation provides good support for software analysis. It copes with the intangible nature of software by providing concrete representations of it. By reducing the complexity of software, visualisations are especially useful when dealing with large amounts of code. One domain that usually deals with large amounts of source code data is empirical analysis. Although there are many tools for analysis and visualisation, they do not cope well with software corpora. In this paper we present Explora, an infrastructure that is specifically targeted at visualising corpora. We report on early results from conducting a sample analysis on Smalltalk and Java corpora.
Abstract:
Program comprehension requires developers to reason about many kinds of highly interconnected software entities. Dealing with this reality prompts developers to continuously intertwine searching and navigation. Nevertheless, most integrated development environments (IDEs) address searching by means of many disconnected search tools, making it difficult for developers to reuse search results produced by one search tool as input for another. This forces developers to spend considerable time manually linking disconnected search results. To address this issue, we propose Spotter, a model for expressing and combining search tools in a unified way. The current implementation shows that Spotter can unify a wide range of search tools. More information about Spotter can be found at scg.unibe.ch/research/moldablespotter
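The core idea, making one search tool's results directly consumable by the next, can be sketched as function composition over lists of entities. The names and the toy "tools" below are invented for illustration; this is not Spotter's actual (Smalltalk) API.

```python
def chain(*tools):
    """Compose search tools: each takes a list of entities, returns a list."""
    def run(entities):
        for tool in tools:
            entities = tool(entities)
        return entities
    return run

# A toy code model: class name -> method names.
classes = {"Spotter": ["search", "render"], "Parser": ["parse"]}

def classes_matching(substring):
    # A "tool" that starts a search from scratch (ignores its input).
    return lambda _: [name for name in classes if substring in name]

def methods_of():
    # A "tool" that refines previous results into their methods.
    return lambda names: [m for n in names for m in classes[n]]

# No manual copy-pasting between tools: results flow through the pipeline.
pipeline = chain(classes_matching("Spot"), methods_of())
```

The point is the plumbing, not the tools: any search step with the same list-in/list-out shape plugs into the pipeline unchanged.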
Abstract:
7th Mediterranean Conference on Information Systems, MCIS 2012, Guimarães, Portugal, September 8-10, 2012, Proceedings. Series: Lecture Notes in Business Information Processing, Vol. 129
Abstract:
We are working at the confluence of knowledge management, organizational memory and emergent knowledge, through the lens of complex adaptive systems. To be fundamentally sustainable, organizations must address the adaptive need of managing the ambidexterity between day-to-day work and innovation. An organization is an entity of a systemic nature, composed of groups of people who interact to achieve common objectives, which makes it necessary to capture, store and share the knowledge of those interactions within the organization; this knowledge can be generated at the intra-organizational or inter-organizational level. Organizations maintain an organizational memory of knowledge supported by information technology and systems. Each organization, especially in times of uncertainty and radical change, needs timely and appropriately sized knowledge, on a tacit and explicit basis, to meet the demands of its environment. This sizing is a learning process resulting from the interaction that emerges from the relationship between tacit and explicit knowledge, which we frame within an approach of complex adaptive systems. Using complex adaptive systems to build this emerging interdependent relationship will produce emergent knowledge that improves the organization's unique development.
Abstract:
This paper presents the results of an exploratory study on knowledge management in Portuguese organizations. The study was based on a survey sent to one hundred of the main Portuguese organizations, in order to learn about their current practices regarding knowledge management system (KMS) usage and intellectual capital (IC) measurement. With this study, we attempted to understand which tools are mainly used to support KM processes and activities in these organizations, and which metrics the organizations use to measure their knowledge assets.
Abstract:
Dissertation presented to obtain the Ph.D. degree in Bioinformatics
Abstract:
Madin-Darby Canine Kidney (MDCK) cell lines have been extensively evaluated for their potential as host cells for influenza vaccine production. Recent studies allowed the cultivation of these cells in a fully defined medium and in suspension. However, reaching high cell densities in animal cell cultures still remains a challenge. To address this shortcoming, a combined methodology drawing on knowledge from systems biology is reported to study the impact of the cell environment on the flux distribution. An optimization of the medium composition is proposed for both a batch and a continuous system in order to reach higher cell densities. To obtain insight into the metabolic activity of these cells, a detailed metabolic model previously developed by Wahl et al. was used. The experimental data used in this work, from four cultivations of MDCK suspension cells grown under different conditions, came from the Max Planck Institute, Magdeburg, Germany. Classical metabolic flux analysis (MFA) was used to estimate the intracellular flux distribution of each cultivation and then combined with the partial least squares (PLS) method to establish a link between the estimated metabolic state and the cell environment. The MFA model was validated and its consistency checked. The resulting PLS model explained almost 70% of the variance present in the flux distribution. The medium optimizations for the continuous system and for the batch system resulted in higher biomass growth rates than the ones obtained experimentally, 0.034 h-1 and 0.030 h-1, respectively, thus reducing the doubling time by almost 10 hours. Additionally, the optimal medium obtained for the continuous system contained almost no pyruvate. Overall, the proposed methodology seems to be effective, and both proposed medium optimizations seem promising for reaching high cell densities.
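The core of classical MFA is a steady-state balance on each internal metabolite, S·v = 0, so unmeasured intracellular fluxes follow from measured uptake and secretion rates. The toy network and rates below are invented for illustration and have nothing to do with the Wahl et al. MDCK model.

```python
def classical_mfa(v_glc, v_lac):
    """Estimate unmeasured fluxes from measured ones in a toy network:

        Glc -(v_glc)-> G6P -(v_glyc)-> 2 PYR
        PYR -(v_lac)-> Lac        PYR -(v_tca)-> TCA

    Steady-state balances on the internal metabolites:
        G6P:  v_glc - v_glyc           = 0
        PYR:  2*v_glyc - v_lac - v_tca = 0
    """
    v_glyc = v_glc                  # from the G6P balance
    v_tca = 2 * v_glyc - v_lac      # from the PYR balance
    return v_glyc, v_tca
```

In a real MFA the stoichiometric system is larger and typically solved by (weighted) least squares, with redundancy used for the kind of consistency check mentioned above; the toy system here is exactly determined, so substitution suffices.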
Abstract:
In recent years a set of production paradigms was proposed to enable manufacturers to meet new market requirements, such as the shift in demand from traditionally mass-produced, standardized consumables toward highly customized products with shorter product life cycles. These new paradigms advocate solutions capable of facing these requirements, empowering manufacturing systems with a high capacity to adapt, along with elevated flexibility and robustness, in order to deal with disturbances such as unexpected orders or malfunctions. Evolvable Production Systems propose a solution based on modularity and self-organization at a fine granularity level, supporting pluggability and thereby allowing companies to add and/or remove components during execution without any extra reprogramming effort. However, current monitoring software was not designed to fully support these characteristics, being commonly based on centralized SCADA systems that are incapable of re-adapting during execution to the unexpected plugging/unplugging of devices or to changes in the system's topology. Considering these aspects, the work developed for this thesis encompasses a fully distributed agent-based architecture capable of performing knowledge extraction at different levels of abstraction without sacrificing the capacity to add and/or remove, during runtime, the monitoring entities responsible for data extraction and analysis.
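The runtime pluggability described above can be sketched as a hub that keeps publishing readings while monitoring agents attach and detach. All names here are invented; this is a minimal illustration of the idea, not the thesis's agent architecture.

```python
class MonitoringHub:
    """Toy sketch: monitoring entities plug/unplug at runtime, no reprogramming."""
    def __init__(self):
        self._agents = {}  # name -> handler(reading) -> extracted knowledge

    def plug(self, name, handler):
        # New monitoring entity joins during execution.
        self._agents[name] = handler

    def unplug(self, name):
        # Entity leaves; the hub keeps running for the remaining agents.
        self._agents.pop(name, None)

    def publish(self, reading):
        # Fan a raw reading out to whatever agents are currently plugged in.
        return {name: handler(reading) for name, handler in self._agents.items()}
```

Each agent sees only the readings published while it is plugged in, so the system's topology can change mid-run without any agent, or the hub, being restarted.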