824 results for information systems theory
Abstract:
We recover and develop, in light of present-day systems tools, some robotic systems concepts that originated in the 1960s for an intended Mars rover at the MIT Instrumentation Laboratory, where one of the authors was involved. The basic concepts came from the specifications for a type of generalized robot inspired by the structure of the vertebrate nervous system, in which the decision system was based on the structure and function of the Reticular Formation (RF). The vertebrate RF is thought to commit the whole organism to one among various modes of behavior, thereby making the decisions about the current overall task; that is, it is a kind of command-and-control system. In this updating of the concepts, the basic idea is that the RF comprises a set of computing units such that each computing module receives information only from a reduced part of the overall, lightly processed sensory inputs. Each computing unit is capable both of general diagnostics about overall input situations and of specialized diagnostics according to the values of a concrete subset of the input lines. Subordinate to this command-and-control computer are the sensors, the representations of the external environment, the structures for modeling and planning, and finally the effectors acting on the external world.
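The modular commitment scheme the abstract describes can be sketched in code. The following is a minimal illustration, not the authors' original design: each hypothetical RFModule reads only a slice of the sensory input lines, issues a diagnostic (a score per behavioral mode), and the mode with the highest aggregate score wins the commitment. All names and weights here are invented for illustration.

```python
import random

MODES = ["explore", "avoid", "sample", "idle"]  # hypothetical behavior modes

class RFModule:
    """One computing unit: sees only a reduced slice of the sensory input."""
    def __init__(self, input_lines, weights):
        self.input_lines = input_lines  # indices of the input lines this unit reads
        self.weights = weights          # per-mode weight for each of those lines

    def diagnose(self, sensory_input):
        # Specialized diagnostic: score each mode from this unit's inputs only.
        local = [sensory_input[i] for i in self.input_lines]
        return {mode: sum(w * x for w, x in zip(self.weights[mode], local))
                for mode in MODES}

def commit(modules, sensory_input):
    """Aggregate the module diagnostics and commit to a single overall mode."""
    totals = {mode: 0.0 for mode in MODES}
    for module in modules:
        for mode, score in module.diagnose(sensory_input).items():
            totals[mode] += score
    return max(totals, key=totals.get)

# Toy usage: 8 input lines, 3 modules each reading a different subset.
random.seed(0)
modules = [
    RFModule(lines, {mode: [random.uniform(-1, 1) for _ in lines] for mode in MODES})
    for lines in ([0, 1, 2], [3, 4, 5], [5, 6, 7])
]
print(commit(modules, [random.random() for _ in range(8)]))
```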
Abstract:
Arizona Department of Transportation, Phoenix
Abstract:
Federal Highway Administration, Office of Research, Washington, D.C.
Abstract:
"B-284472"--P. 3.
Abstract:
A joint project of the Chief Financial Officers Council and the Joint Financial Management Improvement Program.
Abstract:
For many years in the area of business systems analysis and design, practitioners and researchers alike have been searching for a comprehensive basis on which to evaluate, compare, and engineer the techniques promoted for modelling systems requirements. To date, while many frameworks, factors, and facets have been put forward, none appears to be based on a sound theory. In light of this dilemma, over the last 10 years researchers have turned to ontology to provide a theoretical basis for advancing the business systems modelling discipline. This paper outlines how we have used a particular ontology for this purpose over the last five years. In particular, we have learned that the understandability and applicability of the selected ontology must be clear to IS professionals, that the results of any ontological evaluation must be tempered by the economic efficiency considerations of the stakeholders involved, and that ontologies may have to be focused for the business purpose and the type of user involved in the modelling situation.
Abstract:
The data structure of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. This research develops a methodology for evaluating, ex ante, the relative desirability of alternative data structures for end-user queries. This research theorizes that the data structure that yields the lowest weighted average complexity for a representative sample of information requests is the most desirable data structure for end-user queries. The theory was tested in an experiment that compared queries from two different relational database schemas. As theorized, end users querying the data structure associated with the less complex queries performed better. Complexity was measured using three different Halstead metrics. Each of the three metrics provided excellent predictions of end-user performance. This research supplies strong evidence that organizations can use complexity metrics to evaluate, ex ante, the desirability of alternative data structures. Organizations can use these evaluations to enhance the efficient and effective retrieval of information by creating data structures that minimize end-user query complexity.
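The selection rule in this methodology is simple to state: score each query in a representative sample, weight each score by how often that information request occurs, and prefer the data structure with the lower weighted average. A minimal sketch, with hypothetical scores standing in for actual Halstead measurements:

```python
def weighted_avg_complexity(queries):
    """queries: list of (complexity_score, request_frequency_weight) pairs."""
    total_weight = sum(w for _, w in queries)
    return sum(c * w for c, w in queries) / total_weight

# Hypothetical scores for the same information requests against two schemas.
schema_a = [(12.0, 0.5), (30.0, 0.3), (18.0, 0.2)]
schema_b = [(15.0, 0.5), (22.0, 0.3), (16.0, 0.2)]

# Prefer the data structure with the lower weighted average complexity.
best = min(("schema_a", schema_a), ("schema_b", schema_b),
           key=lambda pair: weighted_avg_complexity(pair[1]))
print(best[0])
```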
Abstract:
The schema of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. Obtaining the appropriate data quickly increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation that yields the lowest weighted average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation based on the object-relational model of data. Complexity was measured using three different Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries.
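The three Halstead metrics named above are defined from operator and operand counts. Assuming a query has already been tokenised into operators and operands (the tokenisation itself is the non-trivial step and is not shown), the standard formulas follow directly:

```python
import math

def halstead(operators, operands):
    """Compute Halstead length, difficulty, and effort from token lists."""
    n1, n2 = len(set(operators)), len(set(operands))  # distinct operators/operands
    N1, N2 = len(operators), len(operands)            # total occurrences
    length = N1 + N2                                  # program length N
    volume = length * math.log2(n1 + n2)              # V = N * log2(vocabulary)
    difficulty = (n1 / 2) * (N2 / n2)                 # D = (n1/2) * (N2/n2)
    effort = difficulty * volume                      # E = D * V
    return {"length": length, "difficulty": difficulty, "effort": effort}

# Hypothetical tokenisation of: SELECT name FROM emp WHERE dept = 'R&D'
print(halstead(["SELECT", "FROM", "WHERE", "="],
               ["name", "emp", "dept", "'R&D'"]))
```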
Abstract:
Electronic communications devices intended for government or military applications must be rigorously evaluated to ensure that they maintain data confidentiality. High-grade information security evaluations require a detailed analysis of the device's design, to determine how it achieves necessary security functions. In practice, such evaluations are labour-intensive and costly, so there is a strong incentive to find ways to make the process more efficient. In this paper we show how well-known concepts from graph theory can be applied to a device's design to optimise information security evaluations. In particular, we use end-to-end graph traversals to eliminate components that do not need to be evaluated at all, and minimal cutsets to identify the smallest group of components that needs to be evaluated in depth.
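The two graph-theoretic steps can be illustrated with a standard graph library. A minimal sketch assuming a hypothetical device model in which nodes are components, directed edges are data paths, and only paths from a plaintext input RED_IN to an external output BLACK_OUT matter; networkx here stands in for whatever tooling an actual evaluation would use:

```python
import networkx as nx

# Hypothetical device model: components as nodes, data flows as directed edges.
G = nx.DiGraph()
G.add_edges_from([
    ("RED_IN", "crypto"), ("crypto", "framer"), ("framer", "BLACK_OUT"),
    ("RED_IN", "bypass_ctrl"), ("bypass_ctrl", "framer"),
    ("audit_log", "mgmt_port"),  # not on any RED_IN -> BLACK_OUT path
])

# Step 1: end-to-end traversal. Components not on any source-to-sink path
# cannot carry the protected data, so they need no evaluation at all.
on_path = nx.descendants(G, "RED_IN") & (nx.ancestors(G, "BLACK_OUT") | {"BLACK_OUT"})
out_of_scope = set(G.nodes) - on_path - {"RED_IN"}
print("no evaluation needed:", out_of_scope)     # {'audit_log', 'mgmt_port'}

# Step 2: minimal cutset. The smallest set of internal components whose
# removal disconnects source from sink is where in-depth evaluation pays off.
print("evaluate in depth:", nx.minimum_node_cut(G, "RED_IN", "BLACK_OUT"))
```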
Abstract:
The problem of distributed compression for correlated quantum sources is considered. The classical version of this problem was solved by Slepian and Wolf, who showed that distributed compression could take full advantage of redundancy in the local sources created by the presence of correlations. Here it is shown that, in general, this is not the case for quantum sources, by proving a lower bound on the rate sum for irreducible sources of product states which is stronger than the one given by a naive application of Slepian-Wolf. Nonetheless, strategies taking advantage of correlation do exist for some special classes of quantum sources. For example, Devetak and Winter demonstrated the existence of such a strategy when one of the sources is classical. Optimal nontrivial strategies for a different extreme, sources of Bell states, are presented here. In addition, it is explained how distributed compression is connected to other problems in quantum information theory, including information-disturbance questions, entanglement distillation and quantum error correction.
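For reference, the classical benchmark the abstract builds on: Slepian and Wolf showed that two correlated classical sources $X$ and $Y$ can be compressed separately, at rates anywhere inside the region below, and still be recovered jointly. The quantum lower bound described above shows that the analogous region is in general not achievable for irreducible sources of product states.

```latex
% Classical Slepian--Wolf rate region for separate encoding of
% correlated sources X and Y with joint decoding:
\begin{align*}
  R_X       &\ge H(X \mid Y), \\
  R_Y       &\ge H(Y \mid X), \\
  R_X + R_Y &\ge H(X, Y).
\end{align*}
```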