774 results for Information privacy Framework
Abstract:
Alison Macrina is the founder and director of the Library Freedom Project, an initiative that aims to make real the promise of intellectual freedom in libraries. The Library Freedom Project trains librarians on the state of global surveillance, privacy rights, and privacy-protecting technology, so that librarians may in turn teach their communities about safeguarding privacy. In 2015, Alison was named one of Library Journal's Movers and Shakers. Read more about the Library Freedom Project at libraryfreedomproject.org.
Abstract:
Each year, search engines like Google, Bing, and Yahoo complete trillions of search queries online. Students are especially dependent on these search tools because of their popularity, convenience, and accessibility. What students are unaware of, however, whether by choice or naiveté, is the amount of personal information collected during each search session, how that data is used, and who is interested in their online behavior profile. Privacy policies are frequently updated in favor of the search companies, but they are lengthy and often skimmed briefly or ignored entirely, with little thought about how personal web habits are being exploited for analytics and marketing. As an Information Literacy instructor and a member of the Electronic Frontier Foundation, I believe in the importance of educating college students, and web users in general, that they have a right to privacy online. Class discussions on the topic of web privacy have yielded an interesting perspective on internet search usage. Students are unaware of how their online behavior is recorded, and they have consistently expressed hesitancy to use tools that disguise or delete their IP address because of the stigma that doing so may imply they have something to hide or are engaging in illegal activity. Additionally, students fear they will have to surrender the convenience of uber-connectivity in their applications to maintain their privacy. The purpose of this lightning presentation is to provide educators with a lesson plan highlighting and simplifying the privacy terms of the three major search engines: Google, Bing, and Yahoo. The presentation focuses on what data these search engines collect about users, how that data is used, and alternative search solutions, such as DuckDuckGo, that offer increased privacy. Students will benefit directly from this lesson because informed internet users can protect their data, feel safer online, and become more effective web searchers.
Abstract:
In the past few years, libraries have started to design public programs that educate patrons about tools and techniques for protecting personal privacy. But do end-user solutions provide adequate safeguards against surveillance by corporate and government actors? What does a comprehensive plan for privacy entail if libraries are to live up to their privacy values? In this paper, the authors discuss the complexity of the surveillance architecture that the library institution may confront when seeking to defend the privacy rights of patrons. This architecture consists of three main parts: the physical or material aspects, the logical characteristics, and the social factors of information and communication flows in the library setting. For each category, the authors present short case studies culled from practitioner experience, research, and public discourse. The case studies probe the challenges faced by the library, not only in making hardware and software choices but also in choices related to staffing and program design. The paper shows that privacy choices intersect not only with free speech and chilling effects, but also with questions of intellectual property, organizational development, civic engagement, technological innovation, public infrastructure, and more. The paper ends with a discussion of what libraries will require in order to sustain and improve their efforts to serve as stewards of privacy in the 21st century.
Abstract:
In recent decades, library associations have advocated the adoption of privacy and confidentiality policies as practical support for the Library Code of Ethics, with a threefold purpose: (1) to define and uphold privacy practices within the library, (2) to convey privacy practices to patrons, and (3) to protect against potential liability and public relations problems. The adoption of such policies has been instrumental in providing libraries with effective responses to surveillance initiatives such as warrantless requests and the USA PATRIOT Act. Nevertheless, as reflected in recent news stories, the rapid emergence of data brokerage relationships and technologies, and the increasing need for libraries to use third-party vendor services, have increased the opportunities for data surveillers to access patrons' personal information and reading habits, which are funneled and made available through multiple online library service platforms. Additionally, the advice that libraries should "contract for the same level of privacy reflected in their privacy policies" is no longer realistic, given that multiple vendor contracts negotiated at arm's length are likely to produce varying privacy terms and even varying definitions of what constitutes personally identifiable information (PII). These conditions sharply threaten the effectiveness and relevance of library privacy policies and privacy initiatives: such policies increasingly offer false comfort by failing to reflect the privacy weaknesses of the data-sharing landscape when library-vendor contracts do not keep up with vendor data-sharing capabilities. While some argue that library privacy ethics are antiquated and rendered obsolete in the current online sharing economy, Pew studies point to pronounced public discomfort with increasing privacy erosion. At the same time, new directions in FTC enforcement raise the possibility that public institutions' privacy policies may serve as swords against unfair or deceptive commercial trade practices, offering the potential of renewed relevance for library privacy and confidentiality policies. This dual coin of public concern and the potential for enhanced FTC enforcement suggests that, when crafting privacy policies, libraries must now walk a knife's edge: offering patrons realistic notice about the limits of the protections the library can ensure, while at the same time publicly holding vendors accountable to library privacy ethics and expectations. Potential solutions for how to walk this edge are developed and offered as a subject for further discussion, to assist the modification of model policies for public and academic libraries alike.
Abstract:
New business and technology platforms are required to sustainably manage urban water resources [1,2]. However, any proposed solution must be cognisant of security, privacy, and other factors that may inhibit adoption and hence impact. The FP7 WISDOM project (funded by the European Commission - GA 619795) aims to achieve a step change in water and energy savings via the integration of innovative Information and Communication Technologies (ICT) frameworks to optimize water distribution networks and to enable change in consumer behavior through innovative demand management and adaptive pricing schemes [1,2,3]. The WISDOM concept centres on the integration of water distribution, sensor monitoring, and communication systems, coupled with semantic modelling (using ontologies, potentially connected to BIM, to serve as intelligent linkages throughout the entire framework) and control capabilities, to provide near real-time management of urban water resources. Fundamental to this framework are the needs and operational requirements of users and stakeholders at domestic, corporate, and city levels, which require the interoperability of a number of demand and operational models fed with data from diverse sources such as sensor networks and crowdsourced information. This has implications for the provenance and trustworthiness of such data and how it can be used, not only in understanding system and user behaviours but, more importantly, in the real-time control of such systems. Adaptive and intelligent analytics will be used to produce decision support systems that will drive the ability to increase the variability of both supply and consumption [3]. This in turn paves the way for adaptive pricing incentives and a greater understanding of the water-energy nexus. This integration is complex and uncertain, yet typical of a cyber-physical system, and its relevance transcends the water resource management domain. The WISDOM framework will be modelled and simulated, with initial testing at an experimental facility in France (AQUASIM - a full-scale test-bed facility for studying sustainable water management), then deployed and evaluated in two pilots in Cardiff (UK) and La Spezia (Italy). These demonstrators will evaluate the integrated concept, providing insight for wider adoption.
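As a minimal sketch of the provenance-gating idea the abstract raises (trusting sensor and crowdsourced data differently before it feeds real-time control), the following illustration might apply; all class names, fields, and thresholds are assumptions, not part of the WISDOM deliverables.

```python
# Hypothetical sketch of provenance-gated control: observations carry a
# trust score, and only sufficiently trusted data feeds the controller.
from dataclasses import dataclass

@dataclass
class Observation:
    sensor_id: str
    flow_l_per_s: float
    source: str        # "sensor_network" or "crowdsourced"
    trust: float       # 0.0 - 1.0 provenance/trust score (assumed scale)

TRUST_THRESHOLD = 0.7  # assumed cutoff for illustration

def control_setpoint(observations, default=10.0):
    """Average flow over trusted observations; fall back to a default."""
    trusted = [o.flow_l_per_s for o in observations if o.trust >= TRUST_THRESHOLD]
    if not trusted:
        return default          # no trustworthy data: conservative fallback
    return sum(trusted) / len(trusted)

readings = [
    Observation("s1", 12.0, "sensor_network", 0.95),
    Observation("u7", 30.0, "crowdsourced", 0.40),  # ignored: low trust
]
print(control_setpoint(readings))  # 12.0
```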
Abstract:
In this paper we analyze, within an optimal taxation framework, the optimality of allowing firms to observe signals of workers' characteristics. We show that it is always optimal to prohibit signals that disclose information about differences in the intrinsic productivities of workers, such as mandatory health exams and IQ tests. On the other hand, it is never optimal to forbid signals that reveal information about the comparative advantages of workers, such as their specialization and profession. When signals are mixed (they disclose both types of information), there is a trade-off between efficiency and equity: it is optimal to prohibit signals with sufficiently low comparative-advantage content.
Abstract:
We extend the macroeconomic literature on Ss-type rules by introducing infrequent information into a kinked adjustment cost model. We first show that optimal individual decision rules are both state- and time-dependent. We then develop an aggregation framework to study the macroeconomic implications of such optimal individual decision rules. In our model, a vast number of agents act together, and more so when uncertainty is large. The average effect of an aggregate shock is inversely related to its size and to aggregate uncertainty. These results contrast with those obtained from full-information adjustment cost models.
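To make the combination of state- and time-dependence concrete, here is a minimal illustrative simulation of an Ss band with infrequent information arrival: agents adjust only when they observe their state (time-dependence) and find it outside the band (state-dependence). The band width, observation probability, and shock volatility are assumptions for illustration, not the paper's calibration.

```python
# Illustrative sketch (not the paper's model): Ss adjustment under
# infrequent information. Agents see their true gap only at random
# observation dates; adjustment requires observation AND a band breach.
import numpy as np

rng = np.random.default_rng(1)
n_agents, T = 10_000, 200
band = 1.0          # adjust when |perceived gap| exceeds this threshold
p_obs = 0.3         # per-period probability of observing the true state
sigma = 0.25        # idiosyncratic shock volatility

gap = rng.uniform(-band, band, n_agents)   # true gap from target
perceived = gap.copy()

adjust_share = np.empty(T)
for t in range(T):
    gap += rng.normal(0.0, sigma, n_agents)      # shocks accumulate unseen
    observe = rng.random(n_agents) < p_obs       # infrequent information
    perceived[observe] = gap[observe]
    trigger = observe & (np.abs(perceived) > band)
    gap[trigger] = 0.0                           # reset to target
    perceived[trigger] = 0.0
    adjust_share[t] = trigger.mean()

print("average share adjusting per period:", adjust_share.mean())
```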
Abstract:
The work described in this thesis aims to support the distributed design of integrated systems, with specific attention to the need for collaborative interaction among designers. Particular emphasis was given to issues only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers, and support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two homonymic technologies: CAD frameworks and object-oriented frameworks. The former concept was coined in the late 1980s within the electronic design automation community and comprises a layered software environment that supports CAD tool developers, CAD administrators/integrators, and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we propose an object-oriented framework that includes extensible sets of design data primitives and design tool building blocks. This object-oriented framework is included within a CAD framework, where it plays important roles in typical CAD framework services such as design data representation and management, versioning, user interfaces, design management, and tool integration. The implemented CAD framework - named Cave2 - follows the classical layered architecture presented by Barnes, Harrison, Newton, and Spickelmier, but the object-oriented framework foundations allowed a series of improvements not available in previous approaches:
- Object-oriented frameworks are extensible by design, so the same holds for the implemented sets of design data primitives and design tool building blocks. Both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and such extensions and adaptations still inherit the architectural and functional aspects implemented in the object-oriented framework foundation.
- The design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows different visualization strategies for a given design data set, which gives collaborating parties the flexibility to choose individual visualization settings.
- The control of consistency between semantics and visualization - a particularly important issue in a design environment with multiple views of a single design - is also included in the foundations of the object-oriented framework. The mechanism is generic enough to be used by further extensions of the design data model, as it is based on an inversion of control between view and semantics: the view receives the user input and propagates the event to the semantic model, which evaluates whether a state change is possible and, if so, triggers the change of state of both semantics and view. Our approach took advantage of this inversion of control and included a layer between semantics and view to account for multi-view consistency.
- To optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her design views. The information about each interaction is encapsulated in an event object, which may be propagated to the design semantics - and thus to other possible views - according to the consistency policy in use. Furthermore, the use of event pools allows late synchronization between view and semantics when no network connection is available between them (see the sketch after this list).
- The use of proxy objects significantly raised the abstraction of the integration of design automation resources, as both remote and local tools and services are accessed through method calls on a local object. Connecting to remote tools and services through a look-up protocol also completely abstracts the network location of such resources, allowing resources to be added and removed at runtime.
- The implemented CAD framework is entirely based on Java technology, relying on the Java Virtual Machine as the layer that grants independence between the CAD framework and the operating system.
All these improvements contributed to a higher abstraction of the distribution of design automation resources and introduced a new paradigm for remote interaction between designers. The resulting CAD framework supports fine-grained, event-based collaboration, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment. This can increase group awareness and allow a richer transfer of experience among designers, significantly improving the collaboration potential compared to previously proposed file-based or record-based approaches. Three case studies were conducted to validate the proposed approach, each one focusing on a subset of the contributions of this thesis. The first uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second extends the foundations of the implemented object-oriented framework to support interface-based design; these extensions - design representation primitives and tool blocks - are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study concerns the integration of multimedia metadata into the design data model, a possibility explored in the frame of an online educational and training platform.
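A schematic sketch of the event-based view/semantics mechanism described above: views never mutate state directly; they emit event objects that the semantic model validates before both sides update, and an event pool buffers events when no connection is available. The thesis implementation is Java-based; this Python sketch and its class names are illustrative assumptions, not the Cave2 API.

```python
# Inversion of control between view and semantics, with an event pool
# for late synchronization. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class DesignEvent:
    kind: str          # e.g. "rename", "connect"
    payload: dict

class SemanticModel:
    def __init__(self):
        self.state = {}
        self.views = []

    def propose(self, event: DesignEvent) -> bool:
        """Validate the event; on success commit and notify every view."""
        if event.kind == "rename" and not event.payload.get("name"):
            return False                      # reject invalid state change
        self.state.update(event.payload)
        for view in self.views:
            view.refresh(self.state)          # multi-view consistency
        return True

class View:
    def __init__(self, model, pool):
        self.model, self.pool = model, pool
        model.views.append(self)

    def on_user_input(self, event, connected=True):
        if connected:
            self.model.propose(event)         # immediate propagation
        else:
            self.pool.append(event)           # buffer for later

    def flush_pool(self):
        """Late synchronization once the connection is back."""
        while self.pool:
            self.model.propose(self.pool.pop(0))

    def refresh(self, state):
        print("view updated:", state)

model = SemanticModel()
view = View(model, pool=[])
view.on_user_input(DesignEvent("rename", {"name": "adder_v2"}), connected=False)
view.flush_pool()   # propagates the buffered event to all views
```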
Abstract:
Large-scale data collection and storage, combined with the capacity to process data that do not necessarily bear any relation to one another in order to generate new data and information, is a widely used technology today, known generally as Big Data. While it enables the creation of innovative new products and services, which meet demands and solve problems across many sectors of society, Big Data raises a series of questions related to the rights to privacy and to the protection of personal data. This article aims to foster a debate on the reach of the current legal protection of the rights to privacy and to personal data in this context, and consequently to encourage new studies on reconciling those rights with the freedom to innovate. To that end, it first addresses positive and negative aspects of Big Data, identifying how it affects society and the economy at large, including, but not limited to, questions of consumption, health, social organization, government administration, etc. Next, it identifies the effects of this technology on the rights to privacy and to the protection of personal data, given that Big Data brings major changes to the storage and processing of data. Finally, it maps the current Brazilian regulatory framework protecting those rights, assessing whether it actually answers the current challenge of reconciling innovation and privacy.
Abstract:
There is substantial empirical evidence that parental bequests to children are typically equal in the US – a regularity inconsistent with the predictions of standard optimizing bequest models. The prior explanation for this puzzle is parents' desire to signal equal affection, given children's incomplete information about parental preferences. However, parents also have incomplete information regarding their children, and the implications of this side of the information set have not previously been considered. Using a strategic bequest framework, we show that when parents face sufficient uncertainty regarding children's returns to relocation, a separating equilibrium in which parents reward attentive heirs with larger bequests is precluded. We argue that such uncertainty is consistent with conditions in the contemporary US.
Abstract:
The real exchange rate is an important macroeconomic price that affects economic activity, interest rates, domestic prices, and trade and investment flows, among other variables. Methodologies have been developed in the empirical exchange rate misalignment literature to evaluate whether a real effective exchange rate is overvalued or undervalued. There is a vast body of literature on the determinants of long-term real exchange rates and on empirical strategies to implement the equilibrium norms obtained from theoretical models. This study contributes to that literature by showing that it is possible to calculate misalignment from a mixed-frequency cointegrated vector error correction framework. An empirical exercise using United States real exchange rate data is performed. The results suggest that the model with mixed-frequency data is preferred to the models with same-frequency variables.
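As a rough illustration of reading misalignment off a cointegrated error correction model, the sketch below fits a standard same-frequency VECM with statsmodels on synthetic data and treats the equilibrium error as the misalignment measure. It does not reproduce the paper's mixed-frequency estimator; the variables and data-generating process are assumptions.

```python
# Baseline (same-frequency) illustration: misalignment as the deviation
# from an estimated long-run cointegrating relation.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
T = 300
# Synthetic "fundamental" (random walk) and a real exchange rate that is
# cointegrated with it plus a stationary deviation (the misalignment).
fundamental = np.cumsum(rng.normal(size=T))
deviation = np.zeros(T)
for t in range(1, T):
    deviation[t] = 0.8 * deviation[t - 1] + rng.normal(scale=0.3)
rer = fundamental + deviation

data = np.column_stack([rer, fundamental])
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="n").fit()

# Equilibrium error beta' y_t: its deviation from zero is read as
# over-/under-valuation of the real exchange rate.
ect = data @ res.beta
print("estimated cointegrating vector:", res.beta.ravel())
print("current misalignment (equilibrium error):", float(ect[-1]))
```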
Abstract:
In the modern Knowledge Economy, in the age of Big Data, correctly understanding the use and management of Information and Communication Technology (ICT), grounded in the academic field of Information Systems (IS), becomes ever more relevant and strategic for organizations that intend to remain in business, be able to meet new (internal and external) demands, and face complex changes in market competition. This research draws on stages-of-growth theory, founded on Richard L. Nolan's studies in the 1970s. The academic literature on stages-of-growth models and the context of the IS field provide the conceptual foundations of this study. The research identifies a model, with constructs related to the growth stages of organizational ICT/IS initiatives, starting from Nolan's second-level benchmark variables, and proposes its operationalization through the creation and development of a scale. Exploratory and descriptive in character, the research makes a theoretical contribution to the stages-of-growth paradigm by adding a new growth process to its conceptual structure. As a result, it delivers a bilingual (Portuguese and English) scale instrument, together with recommendations and rules for applying a survey research instrument in the continuation of this study. As a general implication, it is expected that using the instrument to measure the ICT/IS stage of organizations can help two audiences: academics who study the subject, and practitioners seeking answers for their practical actions in the organizations where they work.
Abstract:
Advisor: António Jorge Cardoso
Abstract:
Developing software is still a risky business. After 60 years of experience, the community is still not able to consistently build Information Systems (IS) for organizations with predictable quality, within previously agreed budget and time constraints. Although software is changeable, we are still unable to cope with the amount and complexity of change that organizations demand of their IS. To improve results, developers have followed two alternatives: frameworks, which increase productivity but constrain the flexibility of possible solutions; and agile ways of developing software, which keep flexibility with fewer upfront commitments. With strict frameworks, specific hacks have to be put in place to get around the framework's construction options. In time this leads to inconsistent architectures that are harder to maintain due to incomplete documentation and human resources turnover. The main goal of this work is to create a new way to develop flexible IS for organizations, using web technologies, in a faster, better, and cheaper way that is more suited to handling organizational change. To do so, we propose an adaptive object model that uses a new ontology for data and action with strict normalizing rules. These rules should bound the effects of changes, which can then be better tested and corrected. Interfaces are built from templates of resources that can be reused and extended in a flexible way. The "state of the world" for each IS is determined by all production and coordination acts that agents have performed over time, even those performed by external systems. When bugs are found during maintenance, their past cascading effects can be checked through simulation, by re-running the log of transaction acts over time and checking the results against previous records. This work implements a prototype with part of the proposed system in order to make a preliminary assessment of its feasibility and limitations.
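The event-replay idea in this abstract lends itself to a compact sketch: rebuild the "state of the world" by folding the log of acts, then check a bug fix by replaying the same log under the corrected rule and comparing the outcomes. All names below are hypothetical, not the prototype's actual API.

```python
# Minimal event-sourcing sketch: state is derived by replaying the log of
# production and coordination acts, so a fix can be validated by replay.
from dataclasses import dataclass

@dataclass
class Act:
    agent: str
    kind: str      # "production" or "coordination"
    payload: dict

def replay(log, apply_act):
    """Fold the act log into a state dictionary."""
    state = {}
    for act in log:
        apply_act(state, act)
    return state

def apply_v1(state, act):          # buggy rule: overwrites quantities
    state[act.payload["item"]] = act.payload["qty"]

def apply_v2(state, act):          # fixed rule: accumulates quantities
    item = act.payload["item"]
    state[item] = state.get(item, 0) + act.payload["qty"]

log = [Act("a1", "production", {"item": "order-42", "qty": 2}),
       Act("a2", "production", {"item": "order-42", "qty": 3})]

# Simulate the cascading effect of the fix by replaying the same history.
print("before fix:", replay(log, apply_v1))   # {'order-42': 3}
print("after fix: ", replay(log, apply_v2))   # {'order-42': 5}
```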
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)