911 results for "Government information".
Abstract:
The strategic management of information plays a fundamental role in organizational management, since decision-making is driven by the need to survive in a highly competitive market. Companies are constantly concerned with information transparency and good practices of corporate governance (CG), which in turn direct relations between the controlling power of the company and its investors. In this context, this article examines the relationship between the disclosure of information by joint-stock companies through XBRL, the open data model adopted by the Brazilian government, whose adoption was boosted by the publication of the Information Access Law (Lei de Acesso à Informação), No. 12,527 of 18 November 2011. Information access should be permeated by a mediation policy in order to support investors' knowledge construction and decision-making. XBRL is the main model for publishing financial information. Using XBRL together with the new semantic standards created for Linked Data strengthens information dissemination and creates mechanisms for analysing and cross-referencing data with the various open databases available on the Internet, adding value to the data and information accessed by civil society.
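The cross-referencing described above depends on financial facts being exposed as machine-readable triples. A minimal sketch of the idea, assuming invented example URIs and predicates (nothing here reflects the actual vocabularies of the Brazilian open data model):

```python
# Hypothetical sketch: publishing one XBRL-style financial fact as Linked
# Data (N-Triples). All URIs, predicates, and values are invented for
# illustration only.

def fact_to_ntriples(company_uri, concept_uri, value, period):
    """Serialize one financial fact as three N-Triples statements."""
    fact = f"{company_uri}/fact/{period}"
    return "\n".join([
        f'<{fact}> <{concept_uri}> "{value}" .',
        f'<{fact}> <http://example.org/xbrl#period> "{period}" .',
        f'<{fact}> <http://example.org/xbrl#reportedBy> <{company_uri}> .',
    ])

triples = fact_to_ntriples(
    "http://example.org/company/ACME",
    "http://example.org/xbrl#netRevenue",
    "1500000.00",
    "2011-FY",
)
print(triples)
```

Once facts share this shape, they can be joined against any other open dataset that uses compatible URIs, which is the cross-referencing the abstract points to.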
Abstract:
The aim of this article is to discuss whether public procurement policy can promote innovation by firms located in developing countries. The literature on technological learning is used to create a typology for assessing the impact of public procurement in developing countries from the standpoint of innovation. Petrobras, a Brazilian state-owned enterprise, was chosen as a case study. Petrobras is a global leader in deepwater oil production technology and so offers an interesting opportunity to investigate whether government procurement in developing countries is used to promote the capability of domestic firms to develop innovations. The article presents the findings of a field survey on P-51, a platform ordered by the Brazilian state-owned enterprise that began production in 2009. The case study is based on information collected from interviews with managers of Petrobras, EPC contractors, and some of the firms subcontracted to work on P-51.
Abstract:
The term “user study” covers information use patterns, information needs, and information-seeking behaviour. Information-seeking behaviour and information access patterns are areas of active interest among librarians and information scientists. This article reports on a study of the information requirements, the usefulness of library resources and services, and the problems encountered by faculty members of two arts and science colleges, Government Arts & Science College and Sri Raghavendra Arts & Science College, Chidambaram.
Abstract:
The focus of this study was the teacher-librarian and the organization of private secondary school libraries in Ondo West Local Government Area of Ondo State. A structured questionnaire was the instrument used for data collection. Copies of the questionnaire were administered to staff of the six school libraries surveyed. The study revealed that none of the staff were professionally qualified, which resulted in poor and haphazard organization of the resources in all the schools surveyed. Recommendations were made to improve library services, including pr
Abstract:
Interoperability is a crucial issue for electronic government because agencies' information systems must be fully integrated and able to exchange data seamlessly. One way to achieve this is to establish a government interoperability framework (GIF). However, this is a difficult task, owing not only to technological issues but also to other factors. This research is expected to contribute to identifying the barriers to the adoption of interoperability standards for electronic government. The article presents preliminary findings from a case study of the Brazilian government framework (e-PING), based on document analysis and face-to-face interviews. It points out some aspects that may influence the establishment of these standards and become barriers to their adoption.
Abstract:
With the increasing production of information from e-government initiatives, there is also a need to transform a large volume of unstructured data into useful information for society. All this information should be easily accessible and made available in a meaningful and effective way in order to achieve semantic interoperability in electronic government services, a challenge for governments around the world. Our aim is to discuss the context of e-government Big Data and to present a framework to promote semantic interoperability through the automatic generation of ontologies from unstructured information found on the Internet. We propose the use of fuzzy mechanisms to deal with natural language terms and review related work in this area. The results of this study comprise the architectural definition, major components, and requirements of the proposed framework. With it, the large volume of information generated by e-government initiatives can be put to use for the benefit of society.
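The fuzzy handling of natural-language terms mentioned above can be illustrated with a minimal sketch: free-text terms from documents are mapped onto candidate ontology concept labels by string similarity with a threshold. The labels and threshold are invented examples, and stdlib `difflib` stands in for whatever fuzzy mechanism the framework actually uses.

```python
# Minimal sketch of fuzzy term-to-concept matching for ontology generation.
# Concept labels and the 0.75 threshold are illustrative assumptions.
from difflib import SequenceMatcher

def fuzzy_match(term, concept_labels, threshold=0.75):
    """Return (label, score) pairs whose similarity to `term` passes the threshold."""
    matches = []
    for label in concept_labels:
        score = SequenceMatcher(None, term.lower(), label.lower()).ratio()
        if score >= threshold:
            matches.append((label, round(score, 2)))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# an exact label, a misspelled variant, and an unrelated concept
labels = ["Public Procurement", "Public Procurament", "Tax Revenue"]
result = fuzzy_match("public procurement", labels)
print(result)
```

The misspelled variant still matches with a high score while the unrelated concept is filtered out, which is the behaviour needed when harvesting noisy unstructured text.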
Abstract:
In January 2012, Poland witnessed massive protests, both in the streets and on the Internet, opposing ratification of the Anti-Counterfeiting Trade Agreement, which triggered a wave of strong anti-ACTA movements across Europe. In Poland, these protests had further far-reaching consequences, as they not only changed the initial position of the government on the controversial treaty but also actually started a public debate on the role of copyright law in the information society. Moreover, as a result of these events the Polish Ministry for Administration and Digitisation launched a round table, gathering various stakeholders to negotiate a potential compromise with regard to copyright law that would satisfy conflicting interests of various actors. This contribution will focus on a description of this massive resentment towards ACTA and a discussion of its potential reasons. Furthermore, the mechanisms that led to the extraordinary influence of the anti-ACTA movement on the governmental decisions in Poland will be analysed through the application of models and theories stemming from the social sciences. The importance of procedural justice in the copyright legislation process, especially its influence on the image of copyright law and obedience of its norms, will also be emphasised.
Abstract:
In the last two decades, trade liberalization under GATT/WTO has been partly offset by an increase in antidumping protection. Economists have argued convincingly that this is partly due to the inclusion of sales below cost in the definition of dumping during the GATT Tokyo Round. The introduction of the cost-based dumping definition gives regulating authorities greater latitude to choose protection to their liking. This paper investigates the domestic government's antidumping duty choice in an asymmetric information framework in which the foreign firm's cost is observed by the domestic firm but not by the government. To induce truthful revelation, the government can design a tariff schedule contingent on the firms' cost reports, accompanied by a threat to collect additional information to verify the reports (i.e., auditing) and, in case misreporting is detected, to set penalty duties. We show that, depending on the specific assumptions, the domestic government may not only be able to extract the true cost information but may also succeed in implementing the full-information, governmental welfare-maximizing duty. In this case, the antidumping framework within GATT/WTO not only offers the means to pursue strategic trade policy disguised as fair trade policy, but also helps overcome the informational problems involved in correctly determining the optimal strategic trade policy.
Abstract:
Purpose. To examine the association between living in proximity to Toxics Release Inventory (TRI) facilities and the incidence of childhood cancer in the State of Texas.
Design. This is a secondary data analysis using the publicly available Toxics Release Inventory (TRI), maintained by the U.S. Environmental Protection Agency, which lists the facilities that release any of the 650 TRI chemicals. Total childhood cancer cases and the childhood cancer rate (ages 0-14 years) by county for the years 1995-2003 were taken from the Texas Cancer Registry, available on the Texas Department of State Health Services website.
Setting. This study was limited to the child population of the State of Texas.
Method. Analysis was done using Stata version 9 and SPSS version 15.0. SaTScan was used for geographical spatial clustering of childhood cancer cases based on county centroids, using the Poisson clustering algorithm, which adjusts for population density. Pictorial maps were created using MapInfo Professional version 8.0.
Results. One hundred and twenty-five counties had no TRI facilities in their region, while 129 counties had at least one TRI facility. An increasing trend in the number of facilities and in total disposal was observed across cancer rate quartiles, except for the highest category. Linear regression using log transformations of the number of facilities and total disposal to predict cancer rates found neither variable to be a significant predictor. Seven significant geographical spatial clusters of counties with high childhood cancer rates (p<0.05) were identified. Binomial logistic regression, categorizing the cancer rate into two groups (<=150 and >150), indicated an odds ratio of 1.58 (CI 1.127, 2.222) for the natural log of the number of facilities.
Conclusion. We used a unique methodology combining GIS and spatial clustering techniques with existing statistical approaches to examine the association between living in proximity to TRI facilities and the incidence of childhood cancer in the State of Texas. Although no concrete association was indicated, further studies examining specific TRI chemicals are required. This information can enable researchers and the public to identify potential concerns, gain a better understanding of potential risks, and work with industry and government to reduce toxic chemical use, disposal, and other releases, along with the risks associated with them. TRI data, in conjunction with other information, can serve as a starting point in evaluating exposures and risks.
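The reported odds ratio of 1.58 for the natural log of the facility count can be unpacked with a short worked check. The regression coefficient below is back-derived from the published odds ratio for illustration; it is not taken from the study's data.

```python
# Worked interpretation of an odds ratio on a log-transformed predictor.
# beta is back-derived from the abstract's OR of 1.58 (illustrative only).
import math

odds_ratio = 1.58                 # reported OR for ln(number of facilities)
beta = math.log(odds_ratio)       # implied logistic regression coefficient

# Because the predictor is ln(facilities), doubling the facility count adds
# ln(2) to the predictor, so the odds of a county rate > 150 are multiplied by:
effect_of_doubling = math.exp(beta * math.log(2))
print(round(beta, 3), round(effect_of_doubling, 2))
```

This makes the units concrete: the OR of 1.58 applies per unit of the log predictor (an e-fold increase in facilities), while a mere doubling of facilities corresponds to a smaller multiplier on the odds.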
Abstract:
The federal government is currently developing the Nationwide Health Information Network (NHIN). Described as a “network of networks,” the NHIN seeks to provide a nationwide, interoperable health information infrastructure that will securely connect consumers with those involved in health care. As part of the national health information technology (HIT) agenda, the NHIN aims to improve individual and population health by enabling health information to follow the consumer, be available for clinical decision-making, and support important public health measures such as biosurveillance. While the NHIN promises to improve clinical care for individuals and to reduce U.S. health care system costs overall, this electronic environment presents novel challenges for protecting individually identifiable health information. A major barrier to achieving public trust in the NHIN is the development of, and adherence to, a consistent and coordinated approach to the privacy and security of health information. This paper analyzes the policy framework for electronic health information exchange with the NHIN and demonstrates that the current policy is an effective framework for achieving biosurveillance with the NHIN.
Abstract:
The Internet, and specifically Web 2.0 social media applications, offers an innovative method for communicating child health information to low-income parents. The main objective of this study was to use qualitative data to determine the value of using social media to reach low-income parents with child health information. A qualitative formative evaluation employing focus groups was used. Inclusion criteria were: (1) being a parent of a child attending a school in a designated Central Texas school district; and (2) speaking English. The students who attend these schools are generally economically disadvantaged and predominantly Hispanic. The classic analysis strategy was used for data analysis. Focus group participants (n=19) were female (95%); White (53%), Hispanic (42%), or African American (5%); and received government assistance (63%). Most had access to the Internet (74%) and were likely to have low health literacy (53%). The most preferred source of child health information was the family pediatrician or general practitioner. Many participants were familiar with social media applications and had profiles on popular social networking sites, but used them infrequently. Objections to social media sites as sources of child health information included a lack of credibility and demands on parents' time. Social media has excellent potential for reaching low-income parents when used as part of a multi-channel communication campaign. Further research should focus on the most effective types and formats of messages, such as story-telling, for promoting behavior change in this population.
Abstract:
At present, many countries allow citizens or entities to interact with the government outside the telematic environment through a legal representative who is granted powers of representation. However, if the interaction takes place through the Internet, only primitive mechanisms of representation are available, and these are mainly based on non-dynamic offline processes that do not enable quick and easy identity delegation. This paper proposes a system of dynamic delegation of identity between two generic entities that can solve the problem of delegated access to the telematic services provided by public authorities. The solution herein is based on the generation of a delegation token created from a proxy certificate that allows the delegating entity to delegate identity to another on the basis of a subset of its attributes as delegator, while also establishing in the delegation token itself restrictions on the services accessible to the delegated entity and the validity period of delegation. Further, the paper presents the mechanisms needed to either revoke a delegation token or to check whether a delegation token has been revoked. Implications for theory and practice and suggestions for future research are discussed.
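The delegation token described above can be sketched as a small data structure: it names delegator and delegate, carries a subset of the delegator's attributes, restricts the accessible services, and bounds the validity period, with verification also consulting a revocation list. The field names and in-memory revocation set below are illustrative assumptions; the paper's actual tokens are derived from proxy certificates.

```python
# Minimal sketch of a delegation token with service restrictions, a validity
# window, and revocation checking. All names are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class DelegationToken:
    token_id: str
    delegator: str
    delegate: str
    attributes: frozenset        # subset of the delegator's attributes
    allowed_services: frozenset  # services the delegate may access
    not_before: float            # start of validity period (epoch seconds)
    not_after: float             # end of validity period

revoked = set()                  # stand-in for a token revocation service

def revoke(token):
    revoked.add(token.token_id)

def is_valid(token, service, now):
    """Check the validity window, the service restriction, and revocation."""
    return (token.not_before <= now <= token.not_after
            and service in token.allowed_services
            and token.token_id not in revoked)

tok = DelegationToken("tok-1", "citizen-A", "representative-B",
                      frozenset({"tax-id"}), frozenset({"tax-filing"}),
                      not_before=0.0, not_after=100.0)
ok_before = is_valid(tok, "tax-filing", now=50.0)  # in window, not revoked
revoke(tok)
ok_after = is_valid(tok, "tax-filing", now=50.0)   # same token after revocation
```

Keeping the restrictions inside the token itself, as the paper proposes, means a service can enforce them locally and only the revocation status requires an external check.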
Abstract:
The United States was founded on the principles of freedom. Events in recent history have threatened the freedoms we as individuals enjoy. Notably, changes to government legislation and policies regarding access to environmentally sensitive information following September 11, 2001, are troubling. The government has struggled with a difficult balancing act: the public has a right of access to information, yet information that some view as sensitive or dangerous must be kept out of the hands of terrorists. This project examines and discusses the information access debate within the United States and how best to provide the public with environmentally sensitive information.
Abstract:
In this paper we explore the use of semantic classes in an existing information retrieval system in order to improve its results. We use two different ontologies of semantic classes (WordNet Domains and Basic Level Concepts) to re-rank the retrieved documents and obtain better recall and precision. Finally, we implement a new method for weighting the expanded terms that takes into account the weights of the original query terms and their WordNet relations to the new terms, which has been shown to improve results. These approaches were evaluated in the CLEF Robust-WSD Task, yielding an improvement of 1.8% in GMAP for the semantic-classes approach and 10% in MAP for the WordNet term-weighting approach.
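The expanded-term weighting idea can be sketched briefly: each expansion term inherits the original query term's weight, scaled by the strength of the WordNet relation linking them. The relation factors and example terms below are invented placeholders, not the values used in the paper.

```python
# Minimal sketch of relation-aware weighting for query expansion terms.
# The relation factors are illustrative assumptions (closer relation ->
# higher factor), not the paper's actual scheme.
RELATION_FACTOR = {"synonym": 0.9, "hypernym": 0.6, "hyponym": 0.5}

def expansion_weight(original_weight, relation):
    """Weight an expanded term relative to its original query term."""
    return original_weight * RELATION_FACTOR.get(relation, 0.3)

# query term "government" with weight 1.0, expanded via WordNet relations
expanded = {
    "authorities": expansion_weight(1.0, "synonym"),
    "polity": expansion_weight(1.0, "hypernym"),
}
print(expanded)
```

Scaling by relation strength keeps loosely related expansion terms from dominating the reformulated query, which is the intuition behind weighting expansions at all.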
Abstract:
Nowadays there is a large amount of biomedical literature that uses complex nouns and acronyms for biological entities, complicating the task of retrieving specific information. The TREC Genomics Track addresses this goal, and this paper describes the approach we used to take part in this track at TREC 2007. As this was our first participation in the track, we configured a new system consisting of the following differentiated parts: preprocessing, passage generation, document retrieval, and passage (with the answer) extraction. We call special attention to the textual retrieval system used, which was developed by the University of Alicante. Adapting the resources for this purpose, our system obtained precision results above the mean and median of the 66 official runs for Document, Aspect, and Passage2 MAP; for Passage MAP we obtained nearly the median and mean values. We emphasize that we obtained these results without incorporating information specific to the track's domain. In the future, we would like to develop our system further in this direction.