981 results for "information gap"
Abstract:
According to a recent Eurobarometer survey (2014), 68% of Europeans tend not to trust national governments. As the increasing alienation of citizens from politics endangers democracy and welfare, governments, practitioners and researchers look for innovative means to engage citizens in policy matters. One of the measures intended to overcome the so-called democratic deficit is the promotion of civic participation. The proliferation of digital media offers a set of novel characteristics related to interactivity, ubiquitous connectivity, social networking and inclusiveness that enable new forms of society-wide collaboration with a potential impact on leveraging participative democracy. Following this trend, e-Participation is an emerging research area that consists of the use of Information and Communication Technologies to mediate and transform the relations between citizens and governments, towards increasing citizens' participation in public decision-making. However, despite widespread efforts to implement e-Participation through research programs, new technologies and projects, exhaustive studies of the achieved outcomes reveal that it has not yet been successfully incorporated into institutional politics. Given the problems underlying e-Participation implementation, the present research suggested that, rather than project-oriented efforts, the cornerstone for successfully implementing e-Participation in public institutions as a sustainable, value-adding activity is systematic organisational planning, embodying the principles of open governance and open engagement. It further suggested that Business Process Management (BPM), as a management discipline, can act as a catalyst to enable the desired transformations towards value creation throughout the policy-making cycle, including political, organisational and, ultimately, citizen value.
Following these findings, the primary objective of this research was to provide an instrumental model to foster e-Participation sustainability across Government and Public Administration towards a participatory, inclusive, collaborative and deliberative democracy. The developed artefact, consisting of an e-Participation Organisational Semantic Model (ePOSM) underpinned by a BPM-steered approach, introduces this vision. This approach to e-Participation was modelled through a semi-formal lightweight ontology stack structured in four sub-ontologies, namely e-Participation Strategy, Organisational Units, Functions and Roles. The ePOSM facilitates e-Participation sustainability by: (1) promoting a common and cross-functional understanding of the concepts underlying e-Participation implementation, and of their articulation, that bridges the gap between technical and non-technical users; (2) providing an organisational model which allows a centralised and consistent roll-out of strategy-driven e-Participation initiatives, supported by operational units dedicated to the execution of transformation projects and participatory processes; (3) providing a standardised organisational structure, goals, functions and roles related to e-Participation processes that enhance process-level interoperability among government agencies; (4) providing a representation usable in software development for business process automation, which allows advanced querying using a reasoner or inference engine to retrieve concrete and specific information about the e-Participation processes in place. An evaluation of the achieved outcomes, as well as a comparative analysis with existing models, suggested that this innovative approach, tackling the organisational planning dimension, can constitute a stepping stone to harnessing e-Participation value.
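Point (4) of the abstract above describes retrieving process information from the ePOSM with a reasoner or inference engine. As a purely illustrative sketch (the actual ontology stack, vocabulary and query language are not reproduced here; all terms below are hypothetical), the querying idea can be mimicked with a toy triple store in Python:

```python
# Illustrative sketch only: a toy triple store standing in for the ePOSM
# ontology. The terms ("OrgUnit", "executes", ...) are hypothetical, not
# the model's actual vocabulary.
triples = set()

def assert_fact(subj, pred, obj):
    """Add one (subject, predicate, object) triple."""
    triples.add((subj, pred, obj))

def query(subj=None, pred=None, obj=None):
    """Return triples matching the non-None fields (a basic graph pattern)."""
    return [t for t in triples
            if (subj is None or t[0] == subj)
            and (pred is None or t[1] == pred)
            and (obj is None or t[2] == obj)]

# Hypothetical e-Participation facts
assert_fact("ConsultationUnit", "isA", "OrgUnit")
assert_fact("ConsultationUnit", "executes", "PublicConsultation")
assert_fact("Moderator", "worksIn", "ConsultationUnit")

# "Which processes does each organisational unit execute?"
print(query(pred="executes"))
```

A production setting would instead use an OWL/RDF toolchain with a real inference engine, which can also derive facts that were never explicitly asserted.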
Abstract:
INTRODUCTION: We present a review of injuries in humans caused by aquatic animals in Brazil using the Information System for Notifiable Diseases [Sistema de Informação de Agravos de Notificação (SINAN)] database. METHODS: A descriptive and retrospective epidemiological study was conducted from 2007 to 2013. RESULTS: A total of 4,118 accidents were recorded. Of these, 88.7% (3,651) were caused by venomous species, and 11.3% (467) by poisonous, traumatic or unidentified aquatic animals. Most of the events were injuries by stingrays (69%) and jellyfish (13.1%). The North region was responsible for the majority of reports (66.2%), with a significant emphasis on accidents caused by freshwater stingrays (92.2%, or 2,317 cases). In the South region, which had the second highest number of records (15.7%), jellyfish caused the majority of accidents (83.7%, or 452 cases). The Northeast region, with 12.5% of the records, was notable because almost all accidents were caused by toadfish (95.6%, or 174 cases). CONCLUSIONS: Although a comparison of different databases has not been performed, the data presented in this study, compared to local and regional surveys, raise the hypothesis of underreporting of accidents. As the SINAN is the official system for the notification of accidents caused by venomous animals in Brazil, it is imperative that its operation be reviewed and improved, given that effective measures to prevent such accidents depend on a reliable database and the ability to accurately report the true conditions.
Abstract:
With the growth of the internet and the semantic web, together with improvements in communication speed and the rapid development of storage capacity, the volume of data and information rises considerably every day. Because of this, in the last few years there has been a growing interest in structures for formal knowledge representation with suitable characteristics, such as the ability to organise data and information, as well as to reuse their contents for the generation of new knowledge. Controlled vocabularies, and specifically ontologies, stand out as one such representation structure with high potential. They not only allow for data representation but also for the reuse of that data for knowledge extraction, coupled with its subsequent storage through relatively simple formalisms. However, to ensure that the knowledge in an ontology is always up to date, ontologies need maintenance. Ontology Learning is the area that studies the update and maintenance of ontologies. Although the relevant literature already presents first results on automatic ontology maintenance, the field is still at a very early stage: updating and maintaining an ontology is still largely a human-driven, and therefore cumbersome, task. The generation of new knowledge for ontology growth can be based on Data Mining techniques, an area that studies methods for data processing, pattern discovery and knowledge extraction in IT systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources using Data Mining techniques, namely pattern discovery, focused on improving the precision of the concepts and semantic relations present in an ontology. To verify the applicability of the proposed method, a proof of concept was developed and its results, applied in the building and construction sector, are presented.
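The abstract above names pattern discovery over unstructured sources as the basis for ontology enrichment. A minimal sketch of one such pattern, sentence-level term co-occurrence, using a hypothetical construction-domain corpus and term list (not the thesis's actual method or data):

```python
# Illustrative sketch: propose candidate concept pairs for ontology
# enrichment by counting term co-occurrence within sentences. A real
# pipeline would add NLP preprocessing and statistical filtering.
from collections import Counter
from itertools import combinations

def candidate_pairs(sentences, terms, min_count=2):
    """Count how often two known terms co-occur in the same sentence."""
    counts = Counter()
    for s in sentences:
        present = sorted(t for t in terms if t in s.lower())
        counts.update(combinations(present, 2))
    return [(pair, n) for pair, n in counts.most_common() if n >= min_count]

# Hypothetical building-sector corpus and seed terms
corpus = [
    "Concrete curing affects the strength of the foundation.",
    "The foundation slab uses reinforced concrete.",
    "Steel rebar reinforces concrete in the foundation.",
]
terms = {"concrete", "foundation", "steel", "curing"}
print(candidate_pairs(corpus, terms))
```

Frequent pairs such as ("concrete", "foundation") would then be offered to a human curator as candidate semantic relations, matching the semi-automatic character of the proposed method.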
Abstract:
In the early nineties, Mark Weiser wrote a series of seminal papers that introduced the concept of Ubiquitous Computing. According to Weiser, computers require too much attention from the user, drawing their focus from the tasks at hand. Instead of being the centre of attention, computers should be so natural that they would vanish into the human environment, becoming not only truly pervasive but also effectively invisible and unobtrusive to the user. This requires not only smaller, cheaper and lower-power computers, but also equally convenient display solutions that can be harmoniously integrated into our surroundings. With the advent of Printed Electronics, new ways to link the physical and the digital worlds became available. By combining common printing techniques such as inkjet printing with electro-optical functional inks, it is becoming possible not only to mass-produce extremely thin, flexible and cost-effective electronic circuits but also to introduce electronic functionalities into products where they were previously unavailable. Indeed, Printed Electronics is enabling the creation of novel sensing and display elements for interactive devices, free of form factor. At the same time, the rise in the availability and affordability of digital fabrication technologies, namely 3D printers, to the average consumer is fostering a new industrial (digital) revolution and the democratisation of innovation. Nowadays, end-users are already able to custom-design and manufacture their own physical products on demand, according to their own needs. In the future, they will be able to fabricate interactive digital devices with user-specific form and functionality from the comfort of their homes.
This thesis explores how task-specific, low-computation, interactive devices capable of presenting dynamic visual information can be created using Printed Electronics technologies, whilst following an approach based on the ideals behind Personal Fabrication. Focus is given to the use of printed electrochromic displays as a medium for delivering dynamic digital information. According to the architecture of the displays, several approaches are highlighted and categorised. Furthermore, a pictorial computation model based on extended cellular automata principles is used to programme dynamic simulation models into matrix-based electrochromic displays. Envisaged applications include the modelling of physical, chemical, biological and environmental phenomena.
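To illustrate the general idea of programming a simulation into a matrix display, below is a minimal sketch of one binary cellular-automaton update step over a pixel grid. The majority rule used here is a generic textbook example, not the pictorial model developed in the thesis:

```python
# Illustrative sketch (not the thesis's actual model): one update step of a
# simple binary cellular automaton over a pixel matrix, where each cell
# would drive one electrochromic pixel (1 = coloured, 0 = bleached).
def step(grid):
    """Majority rule over cell + 4-neighbourhood; off-grid cells count as 0."""
    rows, cols = len(grid), len(grid[0])
    def cell(r, c):
        return grid[r][c] if 0 <= r < rows and 0 <= c < cols else 0
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            votes = (cell(r, c) + cell(r - 1, c) + cell(r + 1, c)
                     + cell(r, c - 1) + cell(r, c + 1))
            new[r][c] = 1 if votes >= 3 else 0  # majority of the 5 cells
    return new

frame = [[0, 1, 0],
         [1, 1, 1],
         [0, 1, 0]]
print(step(frame))
```

Under this rule the plus-shaped pattern erodes to its centre pixel; repeatedly applying `step` and writing each frame to the display matrix is the basic loop such a device would run.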
Abstract:
Equity research report
Abstract:
The flow of new information is what produces price changes, so understanding whether the market is unbalanced is fundamental to knowing how much inventory market makers should keep during an important economic release. After identifying which economic indicators impact the S&P and 10-year Treasuries, the Volume Synchronized Probability of Information-Based Trading (VPIN) is used as a predictability measure. The results point to some power of the VPIN metric to predict economic surprises, mainly when calculated using the S&P. This finding appears to be supported when analysing depth imbalance before economic releases. Inferior results were achieved when using Treasuries. The final aim of this study is to fill the gap between microstructural changes and macroeconomic events.
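For reference, once trades have been aggregated into equal-volume buckets and classified as buyer- or seller-initiated, the VPIN metric itself reduces to an average of absolute order-flow imbalances. A sketch with hypothetical bucket data (the volume bucketing and bulk volume classification steps are omitted):

```python
# Illustrative sketch of the VPIN order-flow imbalance metric: given
# equal-volume buckets already split into buy and sell volume, VPIN is
# the average absolute imbalance divided by the bucket volume.
def vpin(buy_volumes, sell_volumes):
    """VPIN over n buckets; each bucket holds the same total volume V."""
    n = len(buy_volumes)
    bucket_volume = buy_volumes[0] + sell_volumes[0]  # constant by construction
    imbalance = sum(abs(b - s) for b, s in zip(buy_volumes, sell_volumes))
    return imbalance / (n * bucket_volume)

# Four hypothetical buckets of 100 contracts each, increasingly one-sided
buys = [50, 60, 75, 90]
sells = [50, 40, 25, 10]
print(vpin(buys, sells))  # rises toward 1 as flow becomes one-sided
```

A rising VPIN ahead of a release is the kind of signal the study relates to subsequent economic surprises.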
Abstract:
PURPOSE: Patients preparing to undergo surgery should not suffer needless anxiety. This study aimed to evaluate anxiety levels on the day before surgery as related to the information known by the patient regarding the diagnosis, surgical procedure, or anesthesia. METHOD: Patients reported their knowledge of diagnosis, surgery, and anesthesia. The Spielberger State-Trait Anxiety Inventory (STAI) was used to measure patient anxiety levels. RESULTS: One hundred and forty-nine patients were selected; 29 were excluded due to illiteracy, and 82 females and 38 males were interviewed. The state-anxiety levels were similar for males and females (36.10 ± 11.94 vs. 37.61 ± 8.76) (mean ± SD). Trait-anxiety levels were higher for women (42.55 ± 10.39 vs. 38.08 ± 12.25, P = 0.041). Patient education level did not influence the state-anxiety level but was inversely related to the trait-anxiety level. Knowledge of the diagnosis was clear for 91.7% of patients, of the surgery for 75.0%, and of anesthesia for 37.5%. Unfamiliarity with the surgical procedure raised state-anxiety levels (P = 0.021). A lower state-anxiety level was found among patients who did not know the diagnosis but knew about the surgery (P = 0.038). CONCLUSIONS: Increased knowledge of patients regarding the surgery they are about to undergo may reduce their state-anxiety levels.
Abstract:
This work presents research conducted to understand the role of indicators in decisions of technology innovation. A gap was detected in the innovation and technology assessment literature about the use and influence of indicators in this type of decision. It was important to address this gap because indicators are frequent elements of innovation and technology assessment studies. The research was designed to determine the extent of the use and influence of indicators in decisions of technology innovation, to characterize the role of indicators in these decisions, and to understand how indicators are used in them. The latter involved the test of four possible explanatory factors: the type and phase of the decision, and the context and process of construction of evidence. Furthermore, it focused on three Portuguese innovation groups: public researchers, business R&D&I leaders and policymakers. The research used a combination of methods to collect quantitative and qualitative information, such as surveys, case studies and social network analysis. It concluded that the use of indicators is different from their influence in decisions of technology innovation. In fact, there is a high use of indicators in these decisions, but their influence is lower and differs across the innovation groups. This suggests that political-behavioural methods are also involved in the decisions, to different degrees. The main social influences on the decisions came mostly from hierarchies, knowledge-based contacts and users. Furthermore, the research established that indicators played mostly symbolic roles in the decisions of policymakers and business R&D&I leaders, although their role with researchers was more differentiated.
Indicators were also described as helpful instruments for conducting a reasonable interpretation of data and for balancing options in innovation and technology assessment studies, in particular when contextualised, described in detail and accompanied by discussion of the options made. Results suggest that there are four main explanatory factors for the role of indicators in these decisions. First, the type of decision appears to be a factor to consider when explaining the role of indicators. In fact, each type of decision had different influences on the way indicators are used, and each type of decision used different types of indicators. Results for policy-making were particularly different from those for decisions on the acquisition and development of products/technology. Second, the phase of the decision can help to understand the role indicators play in these decisions. Results distinguished between two phases detected in all decisions – before and after the decision – as well as two other phases that can complement the decision process and in which indicators can be involved. Third, the context of the decision is an important factor to consider when explaining the way indicators are taken into consideration in policy decisions. In fact, the role of indicators can be influenced by the particular context of the decision maker, in which all types of evidence can be selected or downplayed. More importantly, the use of persuasive analytical evidence appears to be related to the dispute existing in the policy context. Fourth and last, the process of construction of evidence is a factor to consider when explaining the way indicators are involved in these decisions. In fact, indicators and other evidence were brought into the decision processes according to their availability and their capacity to support the different arguments and interests of the actors and stakeholders.
In one case, an indicator lost much of its persuasive strength owing to the controversies it went through during the decision process. Therefore, it can be argued that the use of indicators is high but not very influential; their role is mostly symbolic in the decisions of policymakers and businesses, but varies among researchers. The role of indicators in these decisions depends on the type and phase of the decision and on the context and process of construction of evidence. The latter two are related to the particular context of each decision maker and to the existence of elements of dispute and controversy that influence the way indicators are introduced into the decision-making process.
Abstract:
This thesis examines the effects of macroeconomic factors on inflation level and volatility in the Euro Area to improve the accuracy of inflation forecasts with econometric modelling. Inflation aggregates for the EU as well as inflation levels of selected countries are analysed, and the differences between these inflation estimates and forecasts are documented. The research proposes alternative models depending on the focus and scope of the inflation forecasts. I find that models with a Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) in-mean process have better explanatory power for inflation variance than regular GARCH models. The significant coefficients differ across EU countries in comparison to the aggregate EU-wide forecast of inflation. The presence of more pronounced GARCH components in certain countries with more stressed economies indicates that inflation volatility in these countries is likely to occur as a result of the stressed economy. In addition, other economies in the Euro Area are found to exhibit a relatively stable variance of inflation over time. Therefore, when analysing EU inflation one has to take into consideration the large differences at the country level and examine the countries one by one.
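The GARCH-in-mean specification referred to above lets the conditional variance feed back into the mean equation. A minimal sketch of the filtering recursion with fixed, hypothetical parameter values (in practice the parameters are estimated by maximum likelihood, e.g. with a package such as `arch`):

```python
# Illustrative GARCH(1,1)-in-mean filter with hypothetical parameters:
#   r_t     = mu + lam * var_t + eps_t          (variance enters the mean)
#   var_t+1 = omega + alpha * eps_t**2 + beta * var_t
def garch_in_mean_filter(returns, mu=0.0, lam=0.1,
                         omega=0.05, alpha=0.1, beta=0.85):
    """Recursively compute conditional variances and residuals."""
    var = omega / (1 - alpha - beta)   # start at the unconditional variance
    variances, residuals = [], []
    for r in returns:
        eps = r - (mu + lam * var)     # "in mean": lam * var shifts the mean
        residuals.append(eps)
        variances.append(var)
        var = omega + alpha * eps**2 + beta * var
    return variances, residuals

# Hypothetical monthly inflation shocks
variances, _ = garch_in_mean_filter([0.2, -0.5, 1.1, 0.3, -0.8])
print(variances)
```

A significant `lam` is what distinguishes the in-mean variant: volatility itself then helps explain the level of inflation, which is the comparison the thesis draws against regular GARCH models.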
Abstract:
Information systems are widespread and used by anyone with computing devices, as well as by corporations and governments. It is often the case that security leaks are introduced during the development of an application. The reasons for these security bugs are multiple, but among them one can easily identify that it is very hard to define and enforce relevant security policies in modern software. This is because modern applications often rely on container sharing and multi-tenancy where, for instance, data can be stored in the same physical space but is logically mapped into different security compartments or data structures. In turn, these security compartments, into which data is classified by security policies, can also be dynamic and depend on runtime data. In this thesis we introduce and develop the novel notion of dependent information flow types, and focus on the problem of ensuring data confidentiality in data-centric software. Dependent information flow types fit within the standard framework of dependent type theory but, unlike usual dependent types, crucially allow the security level of a type, rather than just the structural data type itself, to depend on runtime values. Our dependent function and dependent sum information flow types provide a direct, natural and elegant way to express and enforce fine-grained security policies on programs, namely programs that manipulate structured data types in which the security level of a structure field may depend on values dynamically stored in other fields. The main contribution of this work is an efficient analysis that allows programmers to verify, during the development phase, whether programs have information leaks, that is, whether they protect the confidentiality of the information they manipulate. As such, we also implemented a prototype typechecker that can be found at http://ctp.di.fct.unl.pt/DIFTprototype/.
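Dependent information flow types are enforced statically, but the core idea, a field's security level depending on another field's runtime value, can be illustrated with a dynamic check. The record layout and level names below are hypothetical, not the thesis's formal system:

```python
# Illustrative runtime analogue only: the thesis enforces these policies
# statically at type-checking time. Here, the confidentiality level of
# `content` depends on the runtime value of the `level` field.
LEVELS = {"public": 0, "staff": 1, "admin": 2}  # hypothetical lattice

def make_record(owner_level, content):
    """The security level of `content` is carried by a sibling field."""
    return {"level": owner_level, "content": content}

def read(record, reader_level):
    """Release the content only to readers at or above the record's level."""
    if LEVELS[reader_level] >= LEVELS[record["level"]]:
        return record["content"]
    raise PermissionError("information flow violation")

doc = make_record("staff", "internal memo")
print(read(doc, "admin"))   # allowed: admin is at least staff
```

The static analysis described in the abstract rejects, at compile time, exactly the programs in which a flow like the failing `read` could occur, so no such runtime check is needed.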
Abstract:
Geochemical and geochronological analyses of samples of surficial Acre Basin sediments and fossils indicate that an extensive fluvial-lacustrine system occupying this region desiccated slowly during the last glacial cycle (LGC). This research documents direct evidence for aridity in western Amazonia during the LGC and is important in establishing boundary conditions for LGC climate models, as well as in correlating marine and continental LGC climate conditions.