948 results for Information Model


Relevance: 60.00%

Publisher:

Abstract:

We study the fundamental Byzantine leader election problem in dynamic networks where the topology can change from round to round and nodes can also experience heavy {\em churn} (i.e., nodes can join and leave the network continuously over time). We assume the full information model where the Byzantine nodes have complete knowledge about the entire state of the network at every round (including random choices made by all the nodes), have unbounded computational power and can deviate arbitrarily from the protocol. The churn is controlled by an adversary that has complete knowledge and control over which nodes join and leave and at what times, may rewire the topology in every round, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is an $O(\log^3 n)$ round algorithm that achieves Byzantine leader election under the presence of up to $O(n^{1/2 - \epsilon})$ Byzantine nodes (for a small constant $\epsilon > 0$) and a churn of up to $O(\sqrt{n}/\mathrm{poly}\log(n))$ nodes per round (where $n$ is the stable network size). The algorithm elects a leader with probability at least $1-n^{-\Omega(1)}$ and guarantees that it is an honest node with probability at least $1-n^{-\Omega(1)}$; assuming the algorithm succeeds, the leader's identity will be known to a $1-o(1)$ fraction of the honest nodes. Our algorithm is fully distributed, lightweight, and simple to implement. It is also scalable, as it runs in polylogarithmic (in $n$) time and requires nodes to send and receive messages of only polylogarithmic size per round. To the best of our knowledge, our algorithm is the first scalable solution for Byzantine leader election in a dynamic network with a high rate of churn; our protocol can also be used to solve Byzantine agreement in a straightforward way. We also show how to implement an (almost-everywhere) public coin with constant bias in a dynamic network with Byzantine nodes and provide a mechanism for enabling honest nodes to store information reliably in the network, which might be of independent interest.

Relevance: 60.00%

Publisher:

Abstract:

Internship report presented to the Escola Superior de Comunicação Social as part of the requirements for obtaining the master's degree in Journalism.

Relevance: 60.00%

Publisher:

Abstract:

In recent years, statistics have indicated exponential growth in the incidence of certain sexually transmitted infections among young adults. Several surveys also point to rather irresponsible sexual health behaviours in this population, even though the supply of information on the consequences of such behaviours is extensive and diverse. Moreover, the information behaviour of this population with regard to sexual health remains poorly documented. The present study focuses on the information behaviour of young Quebec adults with regard to sexual health. More specifically, it answers the following four research questions: (1) What problematic situations do young adults face in sexual health? (2) What information needs do young adults express in these problematic situations? (3) What processes and information sources support the resolution of these information needs? and (4) How is the information found used? This descriptive research used a qualitative approach. The setting chosen was the Université de Montréal, for two reasons: it is a cognitively rich environment and it provides on-site access to sexual health resources. The eight young adults aged 18 to 25 who took part in this study participated in an in-depth interview using the critical incident technique. Each of them described a problematic situation related to his or her sexual health, and the data collected were subjected to a content analysis based on grounded theory. The results indicate that young Quebec adults experience problematic situations concerning the physical aspect of their sexual health that can be triggered by three types of elements: a risky event, a subjective physical symptom, and passively acquired information. These problematic situations generate three categories of information needs: current health status, possible consequences, and remedies. To meet these needs, the participants mostly turned to professional, personal and verbal sources. Contextual, cognitive and affective factors shaped their information-seeking process by modifying the combinations of the four activities performed: starting, chaining, browsing and differentiating. Self-motivation and understanding of the problem represent the two main uses of the information. From a theoretical standpoint, the results indicate that Choo's (2006) general model of information behaviour, Taylor's (1986, 1991) information use environment model and Ellis's (1989a, 1989b, 2005) model of information-seeking activities can be used in the personal context of sexual health. From a practical standpoint, this study adds to the knowledge on the selection criteria for sexual health information sources.

Relevance: 60.00%

Publisher:

Abstract:

In the 21st century, libraries have sought ways to revitalize their mission of transmitting knowledge, responding to the changes affecting the university environment worldwide. In Europe, the establishment of the 'European Higher Education Area' by the Bologna Declaration (1999) led to the proposal of a new model of information unit: the Learning and Research Resource Centre (Centro de Recursos para el Aprendizaje e Investigación, CRAI). The model is based on the interaction of teachers and students with information resources, creating a virtual learning environment. The CRAI proposal may represent a viable alternative for the development of Brazilian university libraries. In this sense, some reflections are proposed based on contrasting this model with the reality of Brazilian universities.

Relevance: 60.00%

Publisher:

Abstract:

During the design process, decisions are made that can affect the manufacturability of the product. An expert designer considers manufacturing limitations, properties and cost in the embodiment or detail design phase. The problem arises when the designer is not an expert, or when there is not enough manufacturing information and knowledge available. Taking Axiomatic Design theory and DFM techniques as a reference, a methodology is proposed to identify, define and formalize the manufacturing information that should be available during design in order to design for manufacturing (DFM). A prototype information model is also proposed for developing a future software tool that would facilitate the application of this methodology and guide the designer during design. The methodology has been applied to the connecting rod of a reciprocating internal combustion engine (MCIA) and to the processes currently used to manufacture it: closed-die forging and powder-metal forging.
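Purely as an illustration of the kind of manufacturing information such a prototype model might formalize (the field names and numeric values below are hypothetical, not taken from the paper), a minimal sketch in Python:

from dataclasses import dataclass, field

@dataclass
class ManufacturingProcessInfo:
    # Hypothetical record of the manufacturing knowledge a designer could consult during DFM.
    process: str                     # e.g. "closed-die forging"
    min_wall_thickness_mm: float     # geometric limitation imposed by the process
    achievable_tolerance_mm: float   # typical dimensional tolerance of the process
    properties: dict = field(default_factory=dict)  # e.g. {"yield_strength_MPa": 550}
    relative_cost: float = 1.0       # rough relative manufacturing cost

# Two candidate processes named in the abstract, with invented illustrative numbers.
candidates = [
    ManufacturingProcessInfo("closed-die forging", 3.0, 0.5, {"yield_strength_MPa": 550}, 1.0),
    ManufacturingProcessInfo("powder-metal forging", 2.5, 0.3, {"yield_strength_MPa": 500}, 0.9),
]

# A simple manufacturability check a design tool could run against a proposed feature.
proposed_wall_mm = 2.8
feasible = [c.process for c in candidates if c.min_wall_thickness_mm <= proposed_wall_mm]
print(feasible)  # processes whose geometric limitation the proposed design respects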

Relevance: 60.00%

Publisher:

Abstract:

Much consideration is rightly given to the design of metadata models to describe data. At the other end of the data-delivery spectrum, much thought has also been given to the design of geospatial delivery interfaces such as the Open Geospatial Consortium standards Web Coverage Service (WCS), Web Map Service (WMS) and Web Feature Service (WFS). Our recent experience with the Climate Science Modelling Language shows that an implementation gap exists where many challenges remain unsolved. To bridge this gap requires transposing information and data from one world view of geospatial climate data to another. Some of the issues include: the loss of information in mapping to a common information model, the need to create ‘views’ onto file-based storage, and the need to map onto an appropriate delivery interface (as with the choice between WFS and WCS for feature types with coverage-valued properties). Here we summarise the approaches we have taken in facing up to these problems.
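To make the WFS-versus-WCS choice concrete, the sketch below (Python, with a hypothetical endpoint and placeholder type/coverage names, not an actual CSML deployment) contrasts a feature-oriented WFS GetFeature request with a coverage-oriented WCS GetCoverage request:

from urllib.parse import urlencode

# Hypothetical OGC endpoint; the feature type and coverage names are placeholders.
BASE = "https://example.org/ogc"

# Feature-oriented access: ask a WFS for discrete feature objects.
wfs_query = urlencode({
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "csml:PointSeriesFeature",   # hypothetical feature type name
    "count": 10,
})

# Coverage-oriented access: ask a WCS for the gridded values themselves.
wcs_query = urlencode({
    "service": "WCS",
    "version": "2.0.1",
    "request": "GetCoverage",
    "coverageId": "air_temperature",          # hypothetical coverage identifier
    "format": "application/netcdf",
})

print(f"{BASE}?{wfs_query}")
print(f"{BASE}?{wcs_query}")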

Relevance: 60.00%

Publisher:

Abstract:

Climate modelling is a complex process, requiring accurate and complete metadata in order to identify, assess and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of ever more complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce this data. There is a wide degree of variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described as separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid digital preservation of climate models as it will provide an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team's development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change's (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.
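As a minimal sketch only, and not the actual CIM schemas, the idea of pairing a generic structure with controlled vocabularies to restrict the range of valid instances can be illustrated as follows (the field names and vocabulary terms are hypothetical):

# Illustrative only: hypothetical fields and vocabularies, not taken from METAFOR.
CONTROLLED_VOCAB = {
    "model_component": {"atmosphere", "ocean", "sea_ice", "land_surface"},
    "calendar": {"gregorian", "360_day", "noleap"},
}

def validate(record: dict) -> list[str]:
    # Return a list of violations; an empty list means the record is a valid instance.
    errors = []
    for field_name, allowed in CONTROLLED_VOCAB.items():
        value = record.get(field_name)
        if value is not None and value not in allowed:
            errors.append(f"{field_name}={value!r} is not in the controlled vocabulary")
    return errors

# A generic record stays valid only if its constrained fields use vocabulary terms.
print(validate({"model_component": "ocean", "calendar": "360_day"}))     # []
print(validate({"model_component": "biosphere", "calendar": "julian"}))  # two violations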

Relevance: 60.00%

Publisher:

Abstract:

The Metafor project has developed a common information model (CIM) using the ISO19100 series formalism to describe numerical experiments carried out by the Earth system modelling community, the models they use, and the simulations that result. Here we describe the mechanism by which the CIM was developed, and its key properties. We introduce the conceptual and application versions and the controlled vocabularies developed in the context of supporting the fifth Coupled Model Intercomparison Project (CMIP5). We describe how the CIM has been used in experiments to describe model coupling properties and describe the near-term expected evolution of the CIM.

Relevance: 60.00%

Publisher:

Abstract:

Data quality is a difficult notion to define precisely, and different communities have different views and understandings of the subject. This causes confusion, a lack of harmonization of data across communities and omission of vital quality information. For some existing data infrastructures, data quality standards cannot address the problem adequately and cannot fulfil all user needs or cover all concepts of data quality. In this study, we discuss some philosophical issues on data quality. We identify actual user needs on data quality, review existing standards and specifications on data quality, and propose an integrated model for data quality in the field of Earth observation (EO). We also propose a practical mechanism for applying the integrated quality information model to a large number of datasets through metadata inheritance. While our data quality management approach is in the domain of EO, we believe that the ideas and methodologies for data quality management can be applied to wider domains and disciplines to facilitate quality-enabled scientific research.
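As a purely illustrative sketch of the metadata-inheritance mechanism (the field names and values are hypothetical, not the paper's model), collection-level quality information can be applied to many datasets, with an individual dataset overriding only what differs:

from collections import ChainMap

# Quality metadata declared once at the collection (parent) level.
collection_quality = {
    "lineage": "Level-2 products reprocessed in 2012",   # hypothetical
    "validation_status": "validated against in-situ network",
    "uncertainty_k": 0.4,                                # hypothetical expanded uncertainty
}

# A specific dataset records only what differs from its parent.
dataset_override = {
    "uncertainty_k": 0.6,   # this granule is noisier than the collection default
}

# Effective quality record for the dataset: overrides win, everything else is inherited.
effective_quality = ChainMap(dataset_override, collection_quality)
print(effective_quality["lineage"])        # inherited from the collection
print(effective_quality["uncertainty_k"])  # overridden at the dataset level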

Relevance: 60.00%

Publisher:

Abstract:

Health care provision is significantly impacted by the ability of health providers to engineer a viable healthcare space to support care stakeholders' needs. In this paper we discuss and propose the use of organisational semiotics as a set of methods to link stakeholders to systems, which allows us to capture clinician activity, information transfer, and building use; this in turn allows us to define the value of specific systems in the care environment to specific stakeholders and the dependence between systems in a care space. We suggest the use of a semantically enhanced building information model (BIM) to support the linking of clinician activity to physical resource objects and space, and to facilitate the capture of quantifiable data, over time, concerning resource use by key stakeholders. Finally, we argue for the inclusion of appropriate stakeholder feedback and persuasive mechanisms, to incentivise building-user behaviour to support organisational-level sustainability policy.
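A minimal sketch, under assumed (hypothetical) names rather than anything from the paper, of how clinician activity could be linked to BIM space or resource objects so that resource use can be quantified over time:

from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

@dataclass
class ActivityRecord:
    # Hypothetical link between a clinical activity and an object in the building model.
    stakeholder: str     # e.g. "nurse", "consultant"
    activity: str        # e.g. "patient observation"
    bim_object_id: str   # identifier of the room or equipment in the BIM
    start: datetime
    end: datetime

def minutes_of_use(records: list[ActivityRecord]) -> dict[tuple[str, str], float]:
    # Aggregate how long each BIM object is used, broken down by stakeholder role.
    usage: dict[tuple[str, str], float] = defaultdict(float)
    for r in records:
        usage[(r.bim_object_id, r.stakeholder)] += (r.end - r.start).total_seconds() / 60
    return dict(usage)

records = [
    ActivityRecord("nurse", "patient observation", "room-2.14",
                   datetime(2013, 5, 1, 9, 0), datetime(2013, 5, 1, 9, 30)),
    ActivityRecord("consultant", "ward round", "room-2.14",
                   datetime(2013, 5, 1, 10, 0), datetime(2013, 5, 1, 10, 20)),
]
print(minutes_of_use(records))  # minutes of use per (room, role)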

Relevance: 60.00%

Publisher:

Abstract:

The goal of this paper is to show the possibility of a non-monotone relation between coverage and risk, which has been considered in the literature on insurance models since the work of Rothschild and Stiglitz (1976). We present an insurance model where the insured agents have heterogeneity in risk aversion and in lenience (a prevention cost parameter). Risk aversion is described by a continuous parameter which is correlated with lenience and, for the sake of simplicity, we assume perfect correlation. In the case of positive correlation, the more risk averse agent has a higher cost of prevention, leading to a higher demand for coverage. Equivalently, the single crossing property (SCP) is valid and implies a positive correlation between coverage and risk in equilibrium. On the other hand, if the correlation between risk aversion and lenience is negative, not only may the SCP be broken, but also the monotonicity of contracts, i.e., the prediction that high (low) risk averse types choose full (partial) insurance. In both cases riskiness is monotonic in risk aversion, but in the last case there are some coverage levels associated with two different risks (low and high), which implies that the ex-ante (with respect to the risk aversion distribution) correlation between coverage and riskiness may have every sign (even though the ex-post correlation is always positive). Moreover, using another instrument (a proxy for riskiness), we give a testable implication to disentangle single crossing and non-single crossing under an ex-post zero correlation result: the monotonicity of coverage as a function of riskiness. Since, by controlling for risk aversion (no asymmetric information), coverage is a monotone function of riskiness, this also gives a test for asymmetric information. Finally, we relate these theoretical results to empirical tests in the recent literature, especially the Dionne, Gouriéroux and Vanasse (2001) work. In particular, they found empirical evidence that seems to be compatible with asymmetric information and non-single crossing in our framework. More generally, we build a hidden information model showing how omitted variables (asymmetric information) can bias the sign of the correlation of equilibrium variables conditioning on all observable variables. We show that this may be the case when the omitted variables have a non-monotonic relation with the observable ones. Moreover, because this non-monotonic relation is deeply related to the failure of the SCP in one-dimensional screening problems, the existing literature on asymmetric information does not capture this feature. Hence, our main result is to point out the importance of the SCP in testing predictions of hidden information models.
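For context, a standard formal statement of the single crossing property in this kind of screening model, written in generic notation of our own rather than the paper's: let $U(q,t;\theta)$ denote the utility of a type-$\theta$ insured who buys coverage $q$ at premium $t$. The SCP requires the marginal willingness to pay for coverage to be monotone in the type,

\[ \frac{\partial}{\partial \theta}\left( -\,\frac{\partial U/\partial q}{\partial U/\partial t} \right) > 0, \]

so that the indifference curves of any two types cross at most once in the $(q,t)$ plane and higher types demand weakly more coverage in equilibrium. The abstract's point is that negative correlation between risk aversion and lenience can break exactly this condition.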

Relevance: 60.00%

Publisher:

Abstract:

The goal of this paper is to show the possibility of a non-monotone relation between coverage and risk, which has been considered in the literature on insurance models since the work of Rothschild and Stiglitz (1976). We present an insurance model where the insured agents have heterogeneity in risk aversion and in lenience (a prevention cost parameter). Risk aversion is described by a continuous parameter which is correlated with lenience and, for the sake of simplicity, we assume perfect correlation. In the case of positive correlation, the more risk averse agent has a higher cost of prevention, leading to a higher demand for coverage. Equivalently, the single crossing property (SCP) is valid and implies a positive correlation between coverage and risk in equilibrium. On the other hand, if the correlation between risk aversion and lenience is negative, not only may the SCP be broken, but also the monotonicity of contracts, i.e., the prediction that high (low) risk averse types choose full (partial) insurance. In both cases riskiness is monotonic in risk aversion, but in the last case there are some coverage levels associated with two different risks (low and high), which implies that the ex-ante (with respect to the risk aversion distribution) correlation between coverage and riskiness may have every sign (even though the ex-post correlation is always positive). Moreover, using another instrument (a proxy for riskiness), we give a testable implication to disentangle single crossing and non-single crossing under an ex-post zero correlation result: the monotonicity of coverage as a function of riskiness. Since, by controlling for risk aversion (no asymmetric information), coverage is a monotone function of riskiness, this also gives a test for asymmetric information. Finally, we relate these theoretical results to empirical tests in the recent literature, especially the Dionne, Gouriéroux and Vanasse (2001) work. In particular, they found empirical evidence that seems to be compatible with asymmetric information and non-single crossing in our framework. More generally, we build a hidden information model showing how omitted variables (asymmetric information) can bias the sign of the correlation of equilibrium variables conditioning on all observable variables. We show that this may be the case when the omitted variables have a non-monotonic relation with the observable ones. Moreover, because this non-monotonic relation is deeply related to the failure of the SCP in one-dimensional screening problems, the existing literature on asymmetric information does not capture this feature. Hence, our main result is to point out the importance of the SCP in testing predictions of hidden information models.

Relevance: 60.00%

Publisher:

Abstract:

With the constant growth of enterprises, and as the need to share information across departments and business areas becomes more critical, companies are turning to integration to provide a method for interconnecting heterogeneous, distributed and autonomous systems. Whether the sales application needs to interface with the inventory application or the procurement application needs to connect to an auction site, it seems that any application can be made better by integrating it with other applications. Integration between applications can face several difficulties because applications may not have been designed and implemented with integration in mind. Regarding integration issues, two-tier software systems, composed of the database tier and the "front-end" tier (interface), have shown some limitations. As a solution to overcome the two-tier limitations, three-tier systems were proposed in the literature. Thus, by adding a middle tier (referred to as middleware) between the database tier and the "front-end" tier (or simply the application), three main benefits emerge. The first benefit is that the division of software systems into three tiers enables increased integration capabilities with other systems. The second benefit is that any modifications to the individual tiers may be carried out without necessarily affecting the other tiers and integrated systems, and the third benefit, a consequence of the others, is fewer maintenance tasks in the software system and in all integrated systems. Concerning software development in three tiers, this dissertation focuses on two emerging technologies, the Semantic Web and Service Oriented Architecture, combined with middleware. These two technologies blended with middleware, which resulted in the development of the Swoat framework (Service and Semantic Web Oriented ArchiTecture), lead to the following four synergic advantages: (1) they allow the creation of loosely coupled systems, decoupling the database from the "front-end" tiers, therefore reducing maintenance; (2) the database schema is transparent to the "front-end" tiers, which are aware of the information model (or domain model) that describes what data is accessible; (3) integration with other heterogeneous systems is enabled through services provided by the middleware; (4) the service request by the "front-end" tier focuses on 'what' data is needed and not on 'where' and 'how' related issues, reducing in this way the application development time for developers.
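A minimal sketch of the middleware idea, with hypothetical names rather than the actual Swoat API: the front-end tier requests domain-model data ('what'), while only the middleware knows 'where' and 'how' it is stored:

import sqlite3

# Hypothetical middleware tier: the only layer aware of the database schema.
class DomainService:
    def __init__(self, db_path: str = ":memory:"):
        self._conn = sqlite3.connect(db_path)
        self._conn.execute("CREATE TABLE IF NOT EXISTS tbl_cust (cust_id INTEGER, cust_name TEXT)")

    def get_customers(self) -> list[dict]:
        # Service exposed to front-end tiers: returns domain-model objects, not table rows.
        rows = self._conn.execute("SELECT cust_id, cust_name FROM tbl_cust").fetchall()
        return [{"id": r[0], "name": r[1]} for r in rows]

# A front-end tier depends only on the service and the domain model ('what'),
# so the underlying schema can change without touching this code.
service = DomainService()
print(service.get_customers())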

Relevance: 60.00%

Publisher:

Abstract:

Postgraduate programme in Research and Development (Medical Biotechnology) - FMB

Relevance: 60.00%

Publisher:

Abstract:

Post-Occupancy Evaluation of buildings considers user satisfaction, building pathologies and performance, as well as users' interventions in the built environment. The possibility of identifying interventions regardless of the user could provide a significant gain in the efficiency of Post-Occupancy Evaluations. We foresee the application of Augmented Reality (AR) to improve the identification of renovations by overlapping the construction information model with an image of the actual building. This article validates the use of AR on existing smartphone and tablet applications. This study proposes the incorporation of AR into the planning, execution and application of Post-Occupancy Evaluation. For the planning, this study proposes the development of a new research tool. With regard to the execution, this study examined the data collection conditions on site through the visualization of overlapping models. For the application, this study proposes displaying the results through the use of AR information layers. The transparency of the AR model was used to allow comparison between the virtual model and the real model. The development and adaptation of the virtual model and the solution developed for the experiment of the AR proposal are presented and discussed. The experiment points to shortcomings that still make the proposed technological solution unfeasible.