960 results for information flow


Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

Pós-graduação em Psicologia - FCLAS

Relevance:

60.00%

Publisher:

Abstract:

Pós-graduação em Psicologia - FCLAS

Relevance:

60.00%

Publisher:

Abstract:

The connections and interactions afforded by social networks make it possible to understand how information flows between individuals and institutions that join forces in pursuit of common goals. The article presents conceptual aspects of networks and social networks, stressing that the structure and the relations of interaction and intermediation between the ties of the network drive changes in information flows. It describes the methodology of Social Network Analysis (SNA), indicating how it can be used in the field of Information Science to understand the information flows that are configured and reconfigured in social networks on the basis of the relationship structure.
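
As a hedged illustration of how SNA metrics can surface the intermediaries that shape information flows, the sketch below builds a small collaboration network with networkx and ranks actors by betweenness centrality. The node names and ties are invented for the example and are not drawn from the article.

```python
# Minimal SNA sketch (hypothetical data): actors with high betweenness
# centrality sit on many shortest paths and are therefore likely
# intermediaries in the network's information flow.
import networkx as nx

# Invented collaboration ties between individuals and institutions.
edges = [
    ("Ana", "Univ_A"), ("Ana", "Bruno"), ("Bruno", "Univ_A"),
    ("Bruno", "Carla"), ("Carla", "NGO_X"), ("NGO_X", "Univ_B"),
    ("Carla", "Univ_B"), ("Univ_B", "Diego"),
]

G = nx.Graph(edges)

# Betweenness centrality approximates brokerage/intermediation capacity.
betweenness = nx.betweenness_centrality(G)

for node, score in sorted(betweenness.items(), key=lambda kv: -kv[1]):
    print(f"{node:8s} betweenness = {score:.3f}")
```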

Relevance:

60.00%

Publisher:

Abstract:

Background: To understand the molecular mechanisms underlying important biological processes, a detailed description of the networks of gene products involved is required. In order to define and understand such molecular networks, several statistical methods have been proposed in the literature to estimate gene regulatory networks from time-series microarray data. However, several problems still need to be overcome. Firstly, information flow needs to be inferred, in addition to the correlation between genes. Secondly, we usually try to identify large networks from a large number of genes (parameters) originating from a smaller number of microarray experiments (samples). Because of this situation, which is rather frequent in Bioinformatics, it is difficult to perform statistical tests using methods that model large gene-gene networks. In addition, most of the models are based on dimension reduction using clustering techniques, so the resulting network is not a gene-gene network but a module-module network. Here, we present the Sparse Vector Autoregressive (SVAR) model as a solution to these problems. Results: We applied the Sparse Vector Autoregressive model to estimate gene regulatory networks based on gene expression profiles obtained from time-series microarray experiments. Through extensive simulations, applying the SVAR method to artificial regulatory networks, we show that SVAR can infer true positive edges even under conditions in which the number of samples is smaller than the number of genes. Moreover, it is possible to control for false positives, a significant advantage over other methods described in the literature, which are based on ranks or score functions. By applying SVAR to actual HeLa cell cycle gene expression data, we were able to identify well-known transcription factor targets. Conclusion: The proposed SVAR method is able to model gene regulatory networks in frequent situations in which the number of samples is lower than the number of genes, making it possible to naturally infer partial Granger causalities without any a priori information. In addition, we present a statistical test to control the false discovery rate, which was not previously possible using other gene regulatory network models.
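
The abstract does not spell out the estimation procedure, but a common way to realize a sparse VAR in practice is to regress each gene's expression at time t on all genes at time t-1 with an L1 penalty. The sketch below does this with scikit-learn on simulated data; the dimensions, regularization strength, and simulated network are illustrative assumptions, not the authors' actual SVAR estimator.

```python
# Minimal sparse-VAR(1) sketch on simulated expression data (assumed setup,
# not the paper's exact SVAR method): each gene at time t is regressed on
# all genes at time t-1 with an L1 penalty, and non-zero coefficients are
# read as candidate directed edges of the regulatory network.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_genes, n_timepoints = 20, 15   # more genes than samples, as in the paper's setting

# Simulate a sparse true transition matrix and a short time series.
A_true = np.zeros((n_genes, n_genes))
A_true[rng.random((n_genes, n_genes)) < 0.05] = 0.6
X = np.zeros((n_timepoints, n_genes))
X[0] = rng.normal(size=n_genes)
for t in range(1, n_timepoints):
    X[t] = X[t - 1] @ A_true.T + 0.1 * rng.normal(size=n_genes)

# Fit one L1-penalized regression per target gene.
A_hat = np.zeros((n_genes, n_genes))
for j in range(n_genes):
    model = Lasso(alpha=0.05, max_iter=10_000)
    model.fit(X[:-1], X[1:, j])   # predictors: all genes at t-1
    A_hat[j] = model.coef_

edges = np.argwhere(np.abs(A_hat) > 1e-6)   # rows are (target, predictor)
print(f"recovered {len(edges)} candidate directed edges")
for target, predictor in edges[:5]:
    print(f"  gene {predictor} -> gene {target}")
```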

Relevance:

60.00%

Publisher:

Abstract:

This research was designed to answer the question of which direction the restructuring of financial regulators should take: consolidation or fragmentation. It began by examining the need for financial regulation and its related costs, and then described the types of regulatory structures that exist in the world, surveying the regulatory structures in 15 jurisdictions, comparing them and discussing their strengths and weaknesses. The possible regulatory structures were analyzed using three methodological tools: game theory, institutional design, and network effects. The incentives for regulatory action were examined in Chapter Four using game-theoretic concepts. This chapter predicted how two regulators with overlapping supervisory mandates will behave in two different states of the world (one in which they stand to benefit from regulating and one in which they stand to lose). The insights derived from the games described in this chapter were then used to analyze the different supervisory models that exist in the world. The problem of information flow was discussed in Chapter Five using tools from institutional design. The idea is based on the need for the right kind of information to reach the decision maker in the shortest time possible in order to predict, mitigate or stop a financial crisis. Network effects and congestion in the context of financial regulation were discussed in Chapter Six, which applied the general literature on network effects in an attempt to conclude whether consolidating financial regulatory standards at a global level might also yield other positive network effects. Returning to the main research question, this research concluded that the fragmented model should generally be preferable to the consolidated model, as it allows for greater diversity and information flow. However, in cases in which close cooperation between two authorities is essential, the consolidated model should be used.
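
As a hedged sketch of the kind of analysis Chapter Four describes, the code below enumerates the pure-strategy Nash equilibria of a 2x2 game between two regulators with overlapping mandates in two states of the world. The payoff numbers are purely illustrative assumptions, not figures from the dissertation.

```python
# Illustrative 2x2 regulator game (invented payoffs, not the dissertation's):
# each regulator chooses to Act or Abstain; we enumerate pure-strategy
# Nash equilibria in a "benefit" state and a "loss" state of the world.
from itertools import product

ACTIONS = ("Act", "Abstain")

# payoffs[state][(a1, a2)] = (payoff_regulator_1, payoff_regulator_2)
payoffs = {
    # State 1: regulating brings credit; duplication is merely less rewarding.
    "benefit": {
        ("Act", "Act"): (2, 2), ("Act", "Abstain"): (4, 0),
        ("Abstain", "Act"): (0, 4), ("Abstain", "Abstain"): (1, 1),
    },
    # State 2: regulating is costly; joint inaction risks a crisis.
    "loss": {
        ("Act", "Act"): (-1, -1), ("Act", "Abstain"): (-3, 0),
        ("Abstain", "Act"): (0, -3), ("Abstain", "Abstain"): (-5, -5),
    },
}

def nash_equilibria(game):
    """Return action profiles where neither regulator gains by deviating."""
    eqs = []
    for a1, a2 in product(ACTIONS, repeat=2):
        u1, u2 = game[(a1, a2)]
        best1 = all(u1 >= game[(d, a2)][0] for d in ACTIONS)
        best2 = all(u2 >= game[(a1, d)][1] for d in ACTIONS)
        if best1 and best2:
            eqs.append((a1, a2))
    return eqs

for state, game in payoffs.items():
    print(state, "state equilibria:", nash_equilibria(game))
```

With these assumed payoffs both regulators act in the benefit state, while in the loss state each equilibrium has exactly one regulator acting, i.e. each prefers the other to bear the cost.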

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: A precise, non-invasive, non-toxic, repeatable, convenient and inexpensive follow-up of renal transplants, especially following biopsies, is in the interest of nephrologists. Formerly, the rate of biopsies leading to AV fistulas had been underestimated. Imaging procedures suited to a detailed assessment of these vascular malformations need to be evaluated. METHODS: Three-dimensional (3D) reconstruction techniques of ultrasound flow-directed and non-flow-directed energy mode pictures were compared with a standard procedure, gadolinium-enhanced magnetic resonance angiography (MRA) using the phase contrast technique. RESULTS: Using B-mode and conventional duplex information, AV fistulas were localized in the upper pole of the kidney transplant of the index patient. The 3D reconstruction provided information about the exact localization and orientation of the fistula in relation to other vascular structures, and about the flow along the fistula. The MRA provided localization and orientation information, but less functional information. Flow-directed and non-flow-directed energy mode pictures could be reconstructed to provide 3D information about vascular malformations in transplanted kidneys. CONCLUSION: In transplanted kidneys, 3D ultrasound angiography may be as effective as MRA in localizing and identifying AV malformations. Advantages of the ultrasound method are that it is cheaper, non-toxic, non-invasive, more widely available, and that it provides more functional information. Future prospective studies will be necessary to evaluate the two techniques further.

Relevance:

60.00%

Publisher:

Abstract:

Within in-house logistics, and particularly with regard to just-in-time deliveries and questions of product liability, the order-picking process is a central building block of a company's material and information flow. The choice of order-picking system is decisive for optimizing the labor- and time-intensive picking operations and thus serves to increase performance while simultaneously reducing the error rate.

Relevance:

60.00%

Publisher:

Abstract:

Optimizing in-house logistics processes requires a holistic process representation that takes material flow, information flow and the resources employed into account. In this paper, various frequently used methods of process representation are compared and evaluated in this respect. Their respective strengths and weaknesses are summarized in a benchmark, which serves as the basis for a new method developed within the IGF research project 16187 N/1.

Relevance:

60.00%

Publisher:

Abstract:

An understanding of interruptions in healthcare is important for the design, implementation, and evaluation of health information systems and for the management of clinical workflow and medical errors. The purpose of this study is to identify and classify the types of interruptions experienced by Emergency Department (ED) nurses working in a Level One Trauma Center. This was an observational field study of Registered Nurses (RNs) employed in a Level One Trauma Center, using the shadowing method. Results of the study indicate that nurses were both recipients and initiators of interruptions. Telephones, pagers, and face-to-face conversations were the most common sources of interruptions. Unlike other industries, the healthcare community has not systematically studied interruptions in clinical settings to determine and weigh the necessity of an interruption against its sometimes negative results, such as medical errors, decreased efficiency, and increased costs. The study presented here is an initial step toward understanding the nature, causes, and effects of interruptions, thereby improving both the quality of healthcare and patient safety. We developed an ethnographic data collection technique and a data coding method for capturing and analyzing interruptions. The interruption data we collected are systematic, comprehensive, and close to exhaustive. They confirm the findings from earlier studies by other researchers that interruptions are frequent events in critical care and other healthcare settings. We are currently using these data to analyze the workflow dynamics of ED clinicians, to identify the bottlenecks of information flow, and to develop interventions to improve the efficiency of emergency care through the management of interruptions.
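
The abstract does not describe the coding scheme itself, so the sketch below only illustrates the general idea: recording shadowing observations as structured interruption events and tallying them by source and by whether the nurse was recipient or initiator. The field names and sample events are hypothetical.

```python
# Hypothetical sketch of coding interruption events from shadowing notes
# (field names and sample data are invented, not the study's actual scheme).
from collections import Counter
from dataclasses import dataclass

@dataclass
class Interruption:
    source: str        # e.g. "telephone", "pager", "face-to-face"
    role: str          # "recipient" or "initiator"
    duration_s: int    # observed duration in seconds

events = [
    Interruption("telephone", "recipient", 45),
    Interruption("pager", "recipient", 10),
    Interruption("face-to-face", "initiator", 90),
    Interruption("telephone", "initiator", 30),
    Interruption("face-to-face", "recipient", 60),
]

by_source = Counter(e.source for e in events)
by_role = Counter(e.role for e in events)

print("interruptions by source:", dict(by_source))
print("interruptions by role:  ", dict(by_role))
print("mean duration (s):      ", sum(e.duration_s for e in events) / len(events))
```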

Relevance:

60.00%

Publisher:

Abstract:

An understanding of interruptions in healthcare is important for the design, implementation, and evaluation of health information systems and for the management of clinical workflow and medical errors. The purpose of this study is to identify and classify the types of interruptions experienced by ED nurses working in a Level One Trauma Center. This was an observational field study of Registered Nurses employed in a Level One Trauma Center, using the shadowing method. Results of the study indicate that nurses were both recipients and initiators of interruptions. Telephones, pagers, and face-to-face conversations were the most common sources of interruptions. Unlike other industries, healthcare has not systematically studied the outcomes of interruptions, such as medical errors, decreased efficiency and increased costs. The study presented here is an initial step toward understanding the nature, causes, and effects of interruptions and toward developing interventions to manage interruptions so as to improve healthcare quality and patient safety. We developed an ethnographic data collection technique and a data coding method for capturing and analyzing interruptions. The interruption data we collected are systematic, comprehensive, and close to exhaustive. They confirm the findings from earlier studies by other researchers that interruptions are frequent events in critical care and other healthcare settings. We are currently using these data to analyze the workflow dynamics of ED clinicians, identify the bottlenecks of information flow, and develop interventions to improve the efficiency of emergency care through the management of interruptions.

Relevance:

60.00%

Publisher:

Abstract:

Quantitative EEG (qEEG) has modified our understanding of epileptic seizures, shifting our view from the traditionally accepted hyper-synchrony paradigm toward more complex models based on re-organization of functional networks. However, qEEG measurements are so far rarely considered during the clinical decision-making process. To better understand the dynamics of intracranial EEG signals, we examine a functional network derived from the quantification of information flow between intracranial EEG signals. Using transfer entropy, we analyzed 198 seizures from 27 patients undergoing pre-surgical evaluation for pharmaco-resistant epilepsy. During each seizure we considered for each network the in-, out- and total "hubs", defined respectively as the time and the EEG channels with the maximal incoming, outgoing or total (bidirectional) information flow. In the majority of cases we found that the hubs occur around the middle of seizures, and interestingly not at the beginning or end, where the most dramatic EEG signal changes are found by visual inspection. For the patients who then underwent surgery, good postoperative clinical outcome was on average associated with a higher percentage of out- or total-hubs located in the resected area (for out-hubs p = 0.01, for total-hubs p = 0.04). The location of in-hubs showed no clear predictive value. We conclude that the study of functional networks based on qEEG measurements may help to identify brain areas that are critical for seizure generation and are thus potential targets for focused therapeutic interventions.
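
The paper's analysis pipeline is not given in the abstract, so the sketch below only illustrates the hub idea: estimate pairwise transfer entropy between discretized channels and call the channel with the largest summed outgoing flow the "out-hub". The simple binned estimator, history length of one, and randomly generated test signals are simplifying assumptions, not the authors' method.

```python
# Hedged sketch: binned transfer entropy between channels and out-hub
# selection on synthetic signals (not the paper's estimator or data).
import numpy as np

def transfer_entropy(x, y, bins=4):
    """Binned estimate of TE(x -> y) in bits, with history length 1."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]
    te = 0.0
    for a in np.unique(y_next):
        for b in np.unique(y_now):
            for c in np.unique(x_now):
                p_abc = ((y_next == a) & (y_now == b) & (x_now == c)).mean()
                if p_abc == 0:
                    continue
                p_bc = ((y_now == b) & (x_now == c)).mean()
                p_ab = ((y_next == a) & (y_now == b)).mean()
                p_b = (y_now == b).mean()
                te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(1)
n_channels, n_samples = 5, 2000
signals = rng.normal(size=(n_channels, n_samples))
signals[2, 1:] += 0.8 * signals[0, :-1]   # channel 0 drives channel 2

te_matrix = np.zeros((n_channels, n_channels))
for i in range(n_channels):
    for j in range(n_channels):
        if i != j:
            te_matrix[i, j] = transfer_entropy(signals[i], signals[j])

# Out-hub: channel with the largest total outgoing information flow.
out_hub = int(np.argmax(te_matrix.sum(axis=1)))
print("estimated out-hub channel:", out_hub)   # expected: channel 0
```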

Relevance:

60.00%

Publisher:

Abstract:

The conceptual design phase is only partially supported by product lifecycle management/computer-aided design (PLM/CAD) systems, causing discontinuity in the design information flow: customer needs → functional requirements → key characteristics → design parameters (DPs) → geometric DPs. To address this issue, a knowledge-based approach is proposed to integrate quality function deployment, failure mode and effects analysis, and axiomatic design into a commercial PLM/CAD system. A case study, the main subject of this article, was carried out to validate the proposed process, to evaluate through a pilot development how the commercial PLM/CAD modules and application programming interface could support the information flow, and, based on the results of the pilot scheme, to propose a full development framework.
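
A hedged sketch of how such a design information flow might be kept traceable in code: a simple structure linking a customer need through functional requirement, key characteristic, design parameter, and geometric design parameter. Class names, attribute names, and the example chain are illustrative and do not come from the article's PLM/CAD pilot.

```python
# Illustrative traceability sketch (invented names, not the article's pilot):
# chains a customer need (CN) through functional requirement (FR), key
# characteristic (KC), design parameter (DP) and geometric DP (GDP), so each
# link of the conceptual-design information flow remains queryable.
from dataclasses import dataclass, field

@dataclass
class DesignItem:
    kind: str                     # "CN", "FR", "KC", "DP" or "GDP"
    name: str
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

cn = DesignItem("CN", "door closes quietly")
fr = cn.add(DesignItem("FR", "limit closing impact energy"))
kc = fr.add(DesignItem("KC", "closing velocity at latch"))
dp = kc.add(DesignItem("DP", "damper rate"))
gdp = dp.add(DesignItem("GDP", "damper piston diameter"))

def trace(item, depth=0):
    """Print the downstream information flow from a design item."""
    print("  " * depth + f"{item.kind}: {item.name}")
    for child in item.children:
        trace(child, depth + 1)

trace(cn)
```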

Relevance:

60.00%

Publisher:

Abstract:

Today's knowledge management (KM) systems seldom account for language management and, especially, multilingual information processing. Document management is one of the strongest components of KM systems. If these systems do not include a multilingual knowledge management policy, intranet searches, excessive document space occupancy and redundant information slow down processes that are highly effective in a single-language environment. In this paper, we model information flow from the sources of knowledge to the persons and systems searching for specific information. Within this framework, we focus on the importance of multilingual information processing, a hugely complex component of modern organizations.
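
To make the flow from knowledge sources to searchers concrete, the sketch below tags documents with a language code and answers a query first from documents in the searcher's language, then from documents that carry a translation. The data model and example records are invented for illustration and are not the paper's model.

```python
# Illustrative multilingual document index (invented data): documents carry
# a language tag; a query is served first from same-language documents and
# then from documents that have a translation available.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    lang: str
    keywords: set
    has_translation: bool = False

index = [
    Document("D1", "en", {"invoice", "workflow"}),
    Document("D2", "de", {"invoice", "archiv"}, has_translation=True),
    Document("D3", "es", {"workflow"}),
]

def search(query_terms, user_lang):
    hits = [d for d in index if query_terms & d.keywords]
    same_lang = [d for d in hits if d.lang == user_lang]
    translated = [d for d in hits if d.lang != user_lang and d.has_translation]
    return same_lang + translated

for doc in search({"invoice"}, "en"):
    print(doc.doc_id, doc.lang)
```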

Relevance:

60.00%

Publisher:

Abstract:

The verified security methodology is an emerging approach to building high-assurance proofs about security properties of computer systems. Computer systems are modeled as probabilistic programs, and rigorous program semantics techniques are used to prove that they comply with a given security goal. In particular, the methodology advocates the use of interactive or automated theorem provers to build fully formal, machine-checked versions of these security proofs. Verified security has proved successful in modeling and reasoning about several standard security notions in the area of cryptography. However, it has fallen short of covering an important class of approximate, quantitative security notions. The distinguishing characteristic of this class of security notions is that they are stated as a "similarity" condition between the output distributions of two probabilistic programs, and this similarity is quantified using some notion of distance between probability distributions. This class comprises prominent security notions from multiple areas such as private data analysis, information flow analysis and cryptography. These include, for instance, indifferentiability, which enables securely replacing an idealized component of a system with a concrete implementation, and differential privacy, a notion of privacy-preserving data mining that has received a great deal of attention in the last few years. The lack of rigorous techniques for verifying these properties is thus an important open problem that needs to be addressed. In this dissertation we introduce several quantitative program logics to reason about this class of security notions. Our main theoretical contribution is a quantitative variant of a full-fledged relational Hoare logic for probabilistic programs. The soundness of these logics is fully formalized in the Coq proof assistant, and tool support is available through an extension of CertiCrypt, a framework for verifying cryptographic proofs in Coq. We validate the applicability of our approach by building fully machine-checked proofs for several systems that were out of reach of the verified security methodology. These comprise, among others, a construction to build "safe" hash functions into elliptic curves and differentially private algorithms for several combinatorial optimization problems from the recent literature.
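
As a hedged, concrete instance of the approximate security notions mentioned above, the sketch below implements the standard Laplace mechanism for differential privacy on a counting query. It is an independent Python illustration of the textbook construction, not part of the thesis's Coq/CertiCrypt development, and the database and query are invented.

```python
# Minimal Laplace-mechanism sketch (standard construction, not the thesis's
# formalization): a counting query has sensitivity 1, so adding
# Laplace(1/epsilon) noise makes the output epsilon-differentially private,
# i.e. the output distributions on neighbouring databases remain close.
import numpy as np

def private_count(database, predicate, epsilon, rng):
    true_count = sum(1 for row in database if predicate(row))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)   # sensitivity 1
    return true_count + noise

def over_40(row):
    return row["age"] > 40

rng = np.random.default_rng(42)
db = [{"age": a} for a in (23, 35, 41, 29, 52, 60)]

# Smaller epsilon means stronger privacy and noisier answers.
for eps in (0.1, 1.0, 10.0):
    answer = private_count(db, over_40, eps, rng)
    print(f"epsilon={eps:4}: noisy count of age>40 = {answer:.2f}")
```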