892 results for data privacy laws
Abstract:
The secretive 2011 Anti-Counterfeiting Trade Agreement – known in short by the catchy acronym ACTA – is a controversial trade pact designed to provide for stronger enforcement of intellectual property rights. The preamble to the treaty reads like pulp fiction – it raises moral panics about piracy, counterfeiting, organised crime, and border security. The agreement contains provisions on civil remedies and criminal offences; copyright law and trademark law; the regulation of the digital environment; and border measures. Memorably, Susan Sell called the international treaty a TRIPS Double-Plus Agreement, because its obligations far exceed those of the World Trade Organization's TRIPS Agreement 1994, and TRIPS-Plus Agreements, such as the Australia-United States Free Trade Agreement 2004. ACTA lacks the language of other international intellectual property agreements, which emphasise the need to balance the protection of intellectual property owners with the wider public interest in access to medicines, human development, and transfer of knowledge and technology. In Australia, there was much controversy both about the form and the substance of ACTA. While the Department of Foreign Affairs and Trade was a partisan supporter of the agreement, a wide range of stakeholders were openly critical. After holding hearings and taking note of the position of the European Parliament and the controversy in the United States, the Joint Standing Committee on Treaties in the Australian Parliament recommended the deferral of ratification of ACTA. This was striking as representatives of all the main parties agreed on the recommendation. The committee was concerned about the lack of transparency, due process, public participation, and substantive analysis of the treaty. 
There were also reservations about the ambiguity of the treaty text, and its potential implications for the digital economy, innovation and competition, plain packaging of tobacco products, and access to essential medicines. The treaty has provoked much soul-searching as to whether the Trick or Treaty reforms on the international treaty-making process in Australia have been compromised or undermined. Although ACTA stalled in the Australian Parliament, the debate over it is yet to conclude. There have been concerns in Australia and elsewhere that ACTA will be revived as a ‘zombie agreement’. Indeed, in March 2013, the Canadian government introduced a bill to ensure compliance with ACTA. Will it also be resurrected in Australia? Has it already been revived? There are three possibilities. First, the Australian government passed enhanced remedies with respect to piracy, counterfeiting and border measures in a separate piece of legislation – the Intellectual Property Laws Amendment (Raising the Bar) Act 2012 (Cth). Second, the Department of Foreign Affairs and Trade remains supportive of ACTA. It is possible, after further analysis, that the next Australian Parliament – to be elected in September 2013 – will ratify the treaty. Third, Australia is involved in the Trans-Pacific Partnership negotiations. The government has argued that ACTA should be a template for the Intellectual Property Chapter in the Trans-Pacific Partnership. The United States Trade Representative would prefer a regime even stronger than ACTA. This chapter provides a portrait of the Australian debate over ACTA. It is the account of an interested participant in the policy proceedings. This chapter will first consider the deliberations and recommendations of the Joint Standing Committee on Treaties on ACTA. Second, there was a concern that ACTA had failed to provide appropriate safeguards with respect to civil liberties, human rights, consumer protection and privacy laws. 
Third, there was a concern about the lack of balance in the treaty’s copyright measures; the definition of piracy is overbroad; the suite of civil remedies, criminal offences and border measures is excessive; and there is a lack of suitable protection for copyright exceptions, limitations and remedies. Fourth, there was a worry that the provisions on trademark law, intermediary liability and counterfeiting could have an adverse impact upon consumer interests, competition policy and innovation in the digital economy. Fifth, there was significant debate about the impact of ACTA on pharmaceutical drugs, access to essential medicines and health-care. Sixth, there was concern over the lobbying by tobacco industries for ACTA – particularly given Australia’s leadership on tobacco control and the plain packaging of tobacco products. Seventh, there were concerns about the operation of border measures in ACTA. Eighth, the Joint Standing Committee on Treaties was concerned about the jurisdiction of the ACTA Committee, and the treaty’s protean nature. Finally, the chapter raises fundamental issues about the relationship between the executive and the Australian Parliament with respect to treaty-making. There is a need to reconsider the efficacy of the Trick or Treaty reforms passed by the Australian Parliament in the 1990s.
Abstract:
“If Hollywood could order intellectual property laws for Christmas, what would they look like? This is pretty close.” David Fewer “While European and American IP maximalists have pushed for TRIPS-Plus provisions in FTAs and bilateral agreements, they are now pushing for TRIPS-Plus-Plus protections in these various forums.” Susan Sell “ACTA is a threat to the future of a free and open Internet.” Alexander Furnas “Implementing the agreement could open a Pandora's box of potential human rights violations.” Amnesty International. “I will not take part in this masquerade.” Kader Arif, Rapporteur for the Anti-Counterfeiting Trade Agreement 2011 in the European Parliament Executive Summary As an independent scholar and expert in intellectual property, I am of the view that the Australian Parliament should reject the adoption of the Anti-Counterfeiting Trade Agreement 2011. I would take issue with the Department of Foreign Affairs and Trade’s rather partisan account of the negotiations, the consultations, and the outcomes associated with the Anti-Counterfeiting Trade Agreement 2011. In my view, the negotiations were secretive and biased; the local consultations were sometimes farcical because of the lack of information about the draft texts of the agreement; and the final text of the Anti-Counterfeiting Trade Agreement 2011 is not in the best interests of Australia, particularly given that it is a net importer of copyright works and trade mark goods and services. I would also express grave reservations about the quality of the rather pitiful National Interest Analysis – and the lack of any regulatory impact statement – associated with the Anti-Counterfeiting Trade Agreement 2011. 
The assertion that the Anti-Counterfeiting Trade Agreement 2011 does not require legislative measures is questionable – especially given the United States Trade Representative has called the agreement ‘the highest-standard plurilateral agreement ever achieved concerning the enforcement of intellectual property rights.’ It is worthwhile reiterating that there has been much criticism of the secretive and partisan nature of the negotiations surrounding the Anti-Counterfeiting Trade Agreement 2011. Sean Flynn summarizes these concerns: "The negotiation process for ACTA has been a case study in establishing the conditions for effective industry capture of a lawmaking process. Instead of using the relatively transparent and inclusive multilateral processes, ACTA was launched through a closed and secretive “‘club approach’ in which like-minded jurisdictions define enforcement ‘membership’ rules and then invite other countries to join, presumably via other trade agreements.” The most influential developing countries, including Brazil, India, China and Russia, were excluded. Likewise, a series of manoeuvres ensured that public knowledge about the specifics of the agreement and opportunities for input into the process were severely limited. Negotiations were held with mere hours' notice to the public as to when and where they would be convened, often in countries halfway around the world from where public interest groups are housed. Once there, all negotiation processes were closed to the public. Draft texts were not released before or after most negotiating rounds, and meetings with stakeholders took place only behind closed doors and off the record. A public release of draft text, in April 2010, was followed by no public or on-the-record meetings with negotiators." 
Moreover, it is disturbing that the Anti-Counterfeiting Trade Agreement 2011 has been driven by ideology and faith, rather than by any evidence-based policy making. Professor Duncan Matthews has raised significant questions about the quality of empirical evidence used to support the proposal of the Anti-Counterfeiting Trade Agreement 2011: ‘There are concerns that statements about levels of counterfeiting and piracy are based either on customs seizures, with the actual quantities of infringing goods in free circulation in any particular market largely unknown, or on estimated losses derived from industry surveys.’ It is particularly disturbing that, in spite of past criticism, the Department of Foreign Affairs and Trade has supported the Anti-Counterfeiting Trade Agreement 2011, without engaging the Productivity Commission or the Treasury to do a proper economic analysis of the proposed treaty. Kader Arif, Rapporteur for the Anti-Counterfeiting Trade Agreement 2011 in the European Parliament, quit his position, and said of the process: "I want to denounce in the strongest possible manner the entire process that led to the signature of this agreement: no inclusion of civil society organisations, a lack of transparency from the start of the negotiations, repeated postponing of the signature of the text without an explanation being ever given, exclusion of the EU Parliament's demands that were expressed on several occasions in our assembly. 
As rapporteur of this text, I have faced never-before-seen manoeuvres from the right wing of this Parliament to impose a rushed calendar before public opinion could be alerted, thus depriving the Parliament of its right to expression and of the tools at its disposal to convey citizens' legitimate demands. Everyone knows the ACTA agreement is problematic, whether it is its impact on civil liberties, the way it makes Internet access providers liable, its consequences on generic drugs manufacturing, or how little protection it gives to our geographical indications. This agreement might have major consequences on citizens' lives, and still, everything is being done to prevent the European Parliament from having its say in this matter. That is why today, as I release this report for which I was in charge, I want to send a strong signal and alert the public opinion about this unacceptable situation. I will not take part in this masquerade." There have been parallel concerns about the process and substance of the Anti-Counterfeiting Trade Agreement 2011 in the context of Australia. I have a number of concerns about the substance of the Anti-Counterfeiting Trade Agreement 2011. First, I am concerned that the Anti-Counterfeiting Trade Agreement 2011 fails to provide appropriate safeguards in respect of human rights, consumer protection, competition, and privacy laws. It is recommended that the new Joint Parliamentary Committee on Human Rights investigate this treaty. Second, I argue that there is a lack of balance to the copyright measures in the Anti-Counterfeiting Trade Agreement 2011 – the definition of piracy is overbroad; the suite of civil remedies, criminal offences, and border measures is excessive; and there is a lack of suitable protection for copyright exceptions, limitations, and remedies. Third, I discuss trade mark law, intermediary liability, and counterfeiting. 
I express my concerns, in this context, that the Anti-Counterfeiting Trade Agreement 2011 could have an adverse impact upon consumer interests, competition policy, and innovation in the digital economy. I also note, with concern, the lobbying by tobacco industries for the Anti-Counterfeiting Trade Agreement 2011 – and the lack of any recognition in the treaty for the capacity of countries to take measures of tobacco control under the World Health Organization Framework Convention on Tobacco Control. Fourth, I note that the Anti-Counterfeiting Trade Agreement 2011 provides no positive obligations to promote access to essential medicines. It is particularly lamentable that Australia and the United States of America have failed to implement the Doha Declaration on the TRIPS Agreement and Public Health 2001 and the WTO General Council Decision 2003. Fifth, I express concerns about the border measures in the Anti-Counterfeiting Trade Agreement 2011. Such measures lack balance – and unduly favour the interests of intellectual property owners over consumers, importers, and exporters. Moreover, such measures will be costly, as they involve shifting the burden of intellectual property enforcement to customs and border authorities. Interdicting, seizing, and destroying goods may also raise significant trade issues. Finally, I express concern that the Anti-Counterfeiting Trade Agreement 2011 undermines the role of existing international organisations, such as the United Nations, the World Intellectual Property Organization and the World Trade Organization, and subverts international initiatives such as the WIPO Development Agenda 2007. I also question the raison d'être, independence, transparency, and accountability of the proposed new ‘ACTA Committee’. In this context, I am concerned by the shift in the position of the Labor Party in its approach to international treaty-making in relation to intellectual property. 
The Australian Parliament adopted the Australia-United States Free Trade Agreement 2004, which included a large Chapter on intellectual property. The treaty was a ‘TRIPS-Plus’ agreement, because the obligations were much more extensive and prescriptive than those required under the multilateral framework established by the TRIPS Agreement 1994. During the debate over the Australia-United States Free Trade Agreement 2004, the Labor Party expressed the view that it would seek to mitigate the effects of the TRIPS-Plus Agreement when it gained power. Far from seeking to ameliorate the effects of the Australia-United States Free Trade Agreement 2004, the Labor Government would seek to lock Australia into a TRIPS-Double Plus Agreement – the Anti-Counterfeiting Trade Agreement 2011. There has not been a clear political explanation for this change in approach to international intellectual property. For both reasons of process and substance, I conclude that the Australian Parliament and the Australian Government should reject the Anti-Counterfeiting Trade Agreement 2011. The Australian Government would do better to endorse the Washington Declaration on Intellectual Property and the Public Interest 2011, and implement its outstanding obligations in respect of access to knowledge, access to essential medicines, and the WIPO Development Agenda 2007. The case study of the Anti-Counterfeiting Trade Agreement 2011 highlights the need for further reforms to the process by which Australia engages in international treaty-making.
Abstract:
Credit scores are the most widely used instruments to assess whether or not a person is a financial risk. Credit scoring has been so successful that it has expanded beyond lending and into our everyday lives, even to inform how insurers evaluate our health. The pervasive application of credit scoring has outpaced knowledge about why credit scores are such useful indicators of individual behavior. Here we test if the same factors that lead to poor credit scores also lead to poor health. Following the Dunedin (New Zealand) Longitudinal Study cohort of 1,037 study members, we examined the association between credit scores and cardiovascular disease risk and the underlying factors that account for this association. We find that credit scores are negatively correlated with cardiovascular disease risk. Variation in household income was not sufficient to account for this association. Rather, individual differences in human capital factors—educational attainment, cognitive ability, and self-control—predicted both credit scores and cardiovascular disease risk and accounted for ∼45% of the correlation between credit scores and cardiovascular disease risk. Tracing human capital factors back to their childhood antecedents revealed that the characteristic attitudes, behaviors, and competencies children develop in their first decade of life account for a significant portion (∼22%) of the link between credit scores and cardiovascular disease risk at midlife. We discuss the implications of these findings for policy debates about data privacy, financial literacy, and early childhood interventions.
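The mediation logic of the study, asking how much of the credit score and disease risk correlation disappears once human capital factors are held constant, can be sketched on synthetic data. Everything below is illustrative (invented effect sizes, not the Dunedin cohort); with a single simulated mediator that is the sole common cause, the accounted-for share comes out near 100%, unlike the study's ~45% for the human capital bundle.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: a latent "human capital" factor drives both
# credit score and cardiovascular risk (illustrative only).
human_capital = rng.normal(size=n)
credit_score = 0.6 * human_capital + rng.normal(size=n)
cvd_risk = -0.5 * human_capital + rng.normal(size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

raw = corr(credit_score, cvd_risk)  # negative, as in the study

def residualize(y, x):
    """Remove the linear effect of x from y via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Correlation remaining after the mediator is partialled out.
partial = corr(residualize(credit_score, human_capital),
               residualize(cvd_risk, human_capital))

share_accounted = 1 - partial / raw
print(f"raw r = {raw:.3f}, partial r = {partial:.3f}, "
      f"share accounted for = {share_accounted:.0%}")
```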
Abstract:
Cloud computing technology has rapidly evolved over the last decade, offering an alternative way to store and work with large amounts of data. However, data security remains an important issue, particularly when using a public cloud service provider. The recent area of homomorphic cryptography allows computation on encrypted data, which would allow users to ensure data privacy on the cloud and increase the potential market for cloud computing. A significant amount of research on homomorphic cryptography has appeared in the literature over the last few years; yet the performance of existing implementations of encryption schemes remains unsuitable for real-time applications. One way this limitation is being addressed is through the use of graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) for implementations of homomorphic encryption schemes. This review presents the current state of the art in this promising new area of research and highlights the interesting open problems that remain.
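The computation-on-encrypted-data property described above can be demonstrated with a toy additively homomorphic scheme. The sketch below implements the textbook Paillier cryptosystem with deliberately tiny fixed primes; it is a didactic illustration only, not one of the GPU/FPGA implementations the review surveys, and real deployments would use vetted libraries and primes of 1024 bits or more.

```python
import math
import random

# Toy Paillier: public key (n, g), private key (lam, mu).
p, q = 2357, 2551          # tiny fixed primes -- insecure, for illustration
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
a, b = 123, 456
c_sum = (encrypt(a) * encrypt(b)) % n2
print(decrypt(c_sum))  # -> 579, computed without ever decrypting a or b
```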
Abstract:
Data management consists of collecting, storing, and processing data into a format that provides value-adding information for the decision-making process. The development of data management has enabled the design of increasingly effective database management systems to support business needs. Consequently, not only are advanced systems designed for reporting purposes, but operational systems also allow reporting and data analysis. The research method used in the theoretical part is qualitative research, and the research type in the empirical part is a case study. The objective of this paper is to examine database management system requirements from the reporting management and data management perspectives. In the theoretical part these requirements are identified and the appropriateness of the relational data model is evaluated. In addition, key performance indicators applied to the operational monitoring of production are studied. The study has revealed that appropriate operational key performance indicators of production take into account time, quality, flexibility and cost aspects; manufacturing efficiency in particular has been highlighted. In this paper, reporting management is defined as the continuous monitoring of given performance measures. According to the literature review, a data management tool should cover performance, usability, reliability, scalability, and data privacy aspects in order to fulfil reporting management's demands. A framework is created for the system development phase based on these requirements, and is used in the empirical part of the thesis, where such a system is designed and created for reporting management purposes for a company operating in the manufacturing industry. Relational data modelling and database architectures are utilized when the system is built on a relational database platform.
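As a concrete example of an operational production KPI that combines the time and quality aspects mentioned above, Overall Equipment Effectiveness (OEE) is a common industry metric. The thesis's actual indicators are not spelled out here, so the shift figures below are hypothetical.

```python
# OEE = availability x performance x quality, a standard production KPI.

def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    run_time = planned_time - downtime
    availability = run_time / planned_time                     # time aspect
    performance = (ideal_cycle_time * total_count) / run_time  # speed aspect
    quality = good_count / total_count                         # quality aspect
    return availability * performance * quality

# One 480-minute shift: 60 min downtime, 1.0 ideal min/part,
# 378 parts produced, 360 of them good (hypothetical numbers).
value = oee(planned_time=480, downtime=60, ideal_cycle_time=1.0,
            total_count=378, good_count=360)
print(f"OEE = {value:.1%}")  # -> OEE = 75.0%
```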
Abstract:
"Thesis presented to the Faculté des études supérieures in fulfilment of the requirements for the degree of Master of Laws (LL.M.) – Research, option Law, Biotechnologies and Society"
Abstract:
Privacy policies define how online services collect, use and share user data. Although they are the main means of informing users about the use of their personal data, privacy policies are generally ignored by users, who find them too long and too vague; they use vocabulary that is often difficult and have no standard format. Privacy policies also confront users with a dilemma: either accept all of the content in order to use the service, or refuse it and be denied access. No other option is given to the user. The data collected from users allows online services to provide a service, but also to exploit that data for economic ends (targeted advertising, resale, etc.). According to various studies, allowing users to benefit from this privacy economy could restore their trust and facilitate continued exchanges on the Internet. In this thesis, we propose a privacy policy model inspired by P3P (a recommendation of the W3C, the World Wide Web Consortium), extending its functionality while reducing its complexity. This model follows a well-defined format that allows users and online services to state their preferences and needs. Users can decide on the specific use and the sharing conditions of each piece of their private data. A negotiation phase analyses the needs of the online service and the preferences of the user in order to establish a privacy contract. The value of personal data is an important aspect of our study. Whereas companies have means of assessing this value, in this thesis we apply a hierarchical multi-criteria method. This method also allows each user to assign a value to their personal data according to the importance they attach to it. The model also incorporates a regulatory authority in charge of conducting the negotiations between users and online services, and of generating recommendations to users based on their profile and on observed trends.
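The hierarchical multi-criteria valuation described above resembles the Analytic Hierarchy Process (AHP), in which a user ranks data categories through pairwise comparisons. A minimal sketch, with hypothetical data categories and pairwise judgements (the thesis's actual criteria hierarchy is not reproduced here):

```python
import numpy as np

categories = ["location", "browsing history", "email address"]

# Pairwise comparison matrix: entry [i][j] says how much more valuable the
# user judges category i relative to category j (Saaty's 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],   # location judged most sensitive
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Geometric-mean approximation of the principal eigenvector gives the
# normalized value each category carries for this user.
weights = np.prod(A, axis=1) ** (1 / A.shape[1])
weights /= weights.sum()

for cat, w in zip(categories, weights):
    print(f"{cat}: {w:.2f}")
```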
Abstract:
Since the advent of the internet in everyday life in the 1990s, the barriers to producing, distributing and consuming multimedia data such as videos, music, ebooks, etc. have steadily been lowered for most computer users, so that almost everyone with internet access can join the online communities that produce, consume and, of course, also share media artefacts. Along with this trend, the violation of personal data privacy and copyright has increased, with illegal file sharing being rampant across many online communities, particularly for certain music genres and amongst younger age groups. This has had a devastating effect on the traditional media distribution market, in most cases leaving the distribution companies and the content owners with huge financial losses. To prove that a copyright violation has occurred, one can deploy fingerprinting mechanisms to uniquely identify the property; however, these are currently based only on uni-modal approaches. In this paper we describe some of the design challenges and architectural approaches to multi-modal fingerprinting currently being examined for evaluation studies within a PhD research programme on the optimisation of multi-modal fingerprinting architectures. Accordingly, we outline the available modalities being integrated through this research programme, which aims to establish the optimal architecture for multi-modal media security protection over the internet as the online distribution environment for both legal and illegal distribution of media products.
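The core idea of combining evidence from several modalities can be illustrated with a deliberately simple sketch: fingerprint each modality separately, then fuse the per-modality similarity scores. The modalities, shingle size and fusion weights below are all hypothetical; the architectures actually under study in the programme are not public and real systems use perceptual (not exact) fingerprints.

```python
import hashlib
from itertools import product  # noqa: F401  (not needed; kept minimal below)

def shingle_fingerprint(data: bytes, k: int = 4) -> set:
    """Fingerprint a byte stream as the set of hashed k-byte shingles."""
    return {hashlib.sha1(data[i:i + k]).hexdigest()[:8]
            for i in range(len(data) - k + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def multimodal_score(query, reference, weights):
    """Weighted fusion of per-modality fingerprint similarities."""
    return sum(weights[m] * jaccard(shingle_fingerprint(query[m]),
                                    shingle_fingerprint(reference[m]))
               for m in weights)

reference = {"audio": b"la-la-la-la-la-tune", "video": b"frame0frame1frame2"}
pirated   = {"audio": b"la-la-la-la-la-tune", "video": b"frame0frameXframe2"}
unrelated = {"audio": b"completely-different", "video": b"other-footage-here"}

w = {"audio": 0.6, "video": 0.4}   # hypothetical modality weights
print(multimodal_score(pirated, reference, w) >
      multimodal_score(unrelated, reference, w))  # -> True
```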
Abstract:
In this article we explore the computational power of NVIDIA graphics processing units (GPUs) for cryptography using CUDA (Compute Unified Device Architecture) technology. CUDA makes general-purpose computing easy by exploiting the parallel processing present in GPUs. To this end, the NVIDIA GPU architectures and CUDA are presented, along with cryptography concepts. Furthermore, we compare the CPU versions of the cryptographic algorithms Advanced Encryption Standard (AES) and Message-Digest Algorithm 5 (MD5) with their parallel versions written in CUDA. © 2011 AISTI.
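The reason AES and MD5 map well to GPUs is that independent blocks or messages can be processed concurrently: CUDA assigns each message to its own thread. The original CUDA kernels are not reproduced here; as a stand-in, this sketch applies the same decomposition on the CPU with a thread pool and checks that the parallel digests match the serial ones.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

# 1000 independent messages: each can be hashed in parallel, which is the
# same data decomposition a CUDA kernel would use (one message per thread).
messages = [f"packet-{i}".encode() for i in range(1000)]

def md5_hex(msg: bytes) -> str:
    return hashlib.md5(msg).hexdigest()

serial = [md5_hex(m) for m in messages]

with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(md5_hex, messages))

print(parallel == serial)  # -> True: same digests, computed concurrently
```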
Abstract:
As distributed collaborative applications and architectures adopt policy-based management for tasks such as access control, network security and data privacy, the management and consolidation of a large number of policies is becoming a crucial component of such policy-based systems. In large-scale distributed collaborative applications such as web services, there is a need to analyze policy interactions and to integrate policies. In this thesis, we propose and implement EXAM-S, a comprehensive environment for policy analysis and management, which can be used to perform a variety of functions such as policy property analysis, policy similarity analysis, and policy integration. As part of this environment, we have proposed and implemented new techniques for the analysis of policies that build on a deep study of state-of-the-art techniques. Moreover, we propose an approach for solving the heterogeneity problems that usually arise when analyzing policies belonging to different domains. Our work focuses on the analysis of access control policies written in the dialect of XACML (Extensible Access Control Markup Language). We consider XACML policies because XACML is a rich language that can represent many policies of interest to real-world applications and is gaining widespread adoption in industry.
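One of the functions named above, policy similarity analysis, can be illustrated in miniature: compare two access-control policies by the fraction of requests they decide identically. This is a drastic simplification of XACML and of EXAM-S's actual techniques; the attribute domains and rules below are hypothetical.

```python
from itertools import product

# Tiny attribute domains standing in for XACML subject/resource/action.
subjects = ["doctor", "nurse", "admin"]
resources = ["record", "schedule"]
actions = ["read", "write"]

def policy_a(s, r, a):
    if s == "doctor":
        return "Permit"
    if s == "nurse" and a == "read":
        return "Permit"
    return "Deny"

def policy_b(s, r, a):
    if s in ("doctor", "nurse") and a == "read":
        return "Permit"
    if s == "doctor" and r == "record":
        return "Permit"
    return "Deny"

# Similarity = share of the request space decided the same way.
requests = list(product(subjects, resources, actions))
agree = sum(policy_a(*q) == policy_b(*q) for q in requests)
similarity = agree / len(requests)
print(f"decision agreement: {similarity:.0%}")  # -> decision agreement: 92%
```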
Abstract:
Facebook requires all members to use their real names and email addresses when joining the social network. Not only does the policy seem to be difficult to enforce (as the prevalence of accounts with people’s pets or fake names suggests), but it may also interfere with European (and, in particular, German) data protection laws. A German Data Protection Commissioner recently took action and ordered that Facebook permit pseudonymous accounts as its current anti-pseudonymous policy violates § 13 VI of the German Telemedia Act. This provision requires telemedia providers to allow for an anonymous or pseudonymous use of services insofar as this is reasonable and technically feasible. Irrespective of whether the pseudonymous use of Facebook is reasonable, the case can be narrowed down to one single question: Does German data protection law apply to Facebook? In that respect, this paper analyses the current Facebook dispute, in particular in relation to who controls the processing of personal data of Facebook users in Germany. It also briefly discusses whether a real name policy really presents a fix for anti-normative and anti-social behaviour on the Internet.
Abstract:
Under the brand name “sciebo – the Campuscloud” (derived from “science box”) a consortium of more than 20 research and applied science universities started a large scale cloud service for about 500,000 students and researchers in North Rhine-Westphalia, Germany’s most populous state. Starting with the much anticipated data privacy compliant sync & share functionality, sciebo offers the potential to become a more general cloud platform for collaboration and research data management which will be actively pursued in upcoming scientific and infrastructural projects. This project report describes the formation of the venture, its targets and the technical and the legal solution as well as the current status and the next steps.
Abstract:
Protecting different kinds of information has become an important area of research. One aspect is to provide effective means of preventing secrets from being deduced from the answers to legitimate queries. In the context of atomic propositional databases, several methods have been developed to achieve this goal. However, those databases do not allow structural information to be formalized, and they are quite restrictive with respect to the specification of secrets. In this paper we extend those methods to match the much greater expressive power of Boolean description logics. In addition to the formal framework, we provide a discussion of various kinds of censors and establish the different levels of security they can provide.
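A censor of the kind discussed above can be sketched for the atomic propositional starting point: the system refuses a truthful answer whenever that answer, combined with the user's assumed background knowledge, would entail the secret. The facts, rules and secret below are hypothetical, and the description-logic generalization is not attempted here.

```python
# Toy controlled query evaluation over an atomic propositional database.
facts = {"visits_clinic", "has_insurance"}
secret = "has_condition"

# Background knowledge the querying user is assumed to possess: body -> head.
user_rules = [({"visits_clinic"}, "has_condition")]

def entails(known: set, rules) -> set:
    """Forward-chain the user's rules to their closure."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return known

def censored_answer(query: str, user_knowledge: set):
    truthful = query in facts
    # Simulate what the user could deduce from a truthful "yes".
    if truthful and secret in entails(user_knowledge | {query}, user_rules):
        return "refused"          # the censor protects the secret
    return truthful

print(censored_answer("has_insurance", set()))  # -> True (harmless)
print(censored_answer("visits_clinic", set()))  # -> refused (would leak secret)
```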
Abstract:
ABSTRACT
This Thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, efficiency is improved by smoothing the aggregated consumption curve. This consumption-smoothing objective entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure, and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this Thesis faces a new energy paradigm in which distributed generation, in particular Photovoltaic (PV) generation, is widespread across electrical grids. This kind of energy source affects the operation of the grid by increasing its variability, which implies that a high penetration rate of photovoltaic electricity is detrimental to grid stability. This Thesis seeks to smooth the aggregated consumption while taking this energy source into account. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social, and environmental fields.
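The consumption-smoothing objective above can be made concrete with a small sketch. The hourly values and the naive peak-shifting rule below are invented assumptions, not the thesis' method; they only show how moving flexible load from peaks to valleys reduces the peak-to-valley range of an aggregated curve while preserving the total energy.

```python
# Illustrative sketch (not the thesis' algorithm): smoothing an
# aggregated consumption curve by shifting flexible load.

def peak_to_valley(curve):
    """Range between the maximum and minimum of a consumption curve."""
    return max(curve) - min(curve)

def shift_flexible_load(curve, flexible):
    """Naive smoothing: repeatedly move flexible load from the peak
    hour to the valley hour until the budget runs out."""
    curve = list(curve)
    step = flexible
    while step > 0:
        hi, lo = curve.index(max(curve)), curve.index(min(curve))
        moved = min(step, (curve[hi] - curve[lo]) / 2)
        if moved <= 0:
            break
        curve[hi] -= moved
        curve[lo] += moved
        step -= moved
    return curve

hourly = [30, 28, 27, 35, 50, 62, 58, 40]   # kW, assumed aggregated demand
smoothed = shift_flexible_load(hourly, flexible=20)
assert peak_to_valley(smoothed) < peak_to_valley(hourly)
assert abs(sum(smoothed) - sum(hourly)) < 1e-9  # energy is only shifted
```

The second assertion captures the key constraint: DSM shifts consumption in time rather than reducing it.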
The actions that influence the way consumers use electricity in order to achieve energy savings or higher efficiency are called Demand-Side Management (DSM). This Thesis proposes two different DSM algorithms to meet the aggregated consumption smoothing objective. The difference between the two DSM algorithms lies in the framework in which they operate: the local framework and the grid framework. Depending on the DSM framework, the energy goal and the procedure to reach it are different. In the local framework, the DSM algorithm uses only local information. It does not take into account other consumers or the aggregated consumption of the electrical grid. Although this may differ from the general definition of DSM, it makes sense in local facilities equipped with Distributed Energy Resources (DERs). In this case, DSM focuses on maximizing the use of local energy, reducing dependence on the grid. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption is an important energy management strategy, reducing electricity transport and encouraging users to control their energy behavior. However, despite all the advantages of increased self-consumption, it does not contribute to smoothing the aggregated consumption. The effects of local facilities on the electrical grid are studied when the DSM algorithm focuses on self-consumption maximization. This approach may have undesirable effects, increasing the variability of the aggregated consumption instead of reducing it. This effect occurs because, in the local framework, the algorithm considers only local variables. The results suggest that coordination between facilities is required. Through this coordination, consumption should be modified taking into account other elements of the grid and seeking to smooth the aggregated consumption.
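As a rough illustration of the local-framework idea, the sketch below schedules a hypothetical deferrable load into the hours with the largest PV surplus, which raises self-consumption using only local information. All profiles, the load size, and the function names are invented for illustration; the thesis' algorithm is more sophisticated.

```python
# Hypothetical sketch of local-framework DSM: maximize PV
# self-consumption without any grid-level information.

def self_consumption(pv, demand):
    """Fraction of PV generation consumed on-site."""
    used = sum(min(p, d) for p, d in zip(pv, demand))
    total = sum(pv)
    return used / total if total else 0.0

def schedule_deferrable(pv, base_demand, load_kw, hours_needed):
    """Greedy local rule: place the deferrable load in the hours where
    the PV surplus over the base demand is largest."""
    surplus = [p - d for p, d in zip(pv, base_demand)]
    ranked = sorted(range(len(pv)), key=lambda h: surplus[h], reverse=True)
    demand = list(base_demand)
    for h in ranked[:hours_needed]:
        demand[h] += load_kw
    return demand

pv = [0, 0, 2, 5, 7, 6, 3, 0]       # kW, assumed PV profile
base = [2, 2, 2, 2, 2, 2, 2, 2]     # kW, assumed base demand
demand = schedule_deferrable(pv, base, load_kw=3, hours_needed=2)
assert self_consumption(pv, demand) > self_consumption(pv, base)
```

Note that this rule looks only at the local surplus; as the abstract points out, many facilities acting this way can still increase the variability of the aggregated curve, which motivates the grid framework below.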
In the grid framework, the DSM algorithm takes into account both local and grid information. This Thesis develops a self-organized algorithm to manage the consumption of an electrical grid in a distributed way. The goal of this algorithm is the smoothing of the aggregated consumption, as in classical DSM implementations. The distributed approach means that DSM is performed from the consumers' side without following direct commands issued by a central entity. Therefore, this Thesis proposes a parallel management structure rather than a hierarchical one, as in classical electrical grids. This implies that a coordination mechanism between facilities is required. This Thesis seeks to minimize the amount of information necessary for this coordination. To achieve this objective, two collective coordination techniques are used: coupled oscillators and swarm intelligence. The combination of these techniques to coordinate a system with the characteristics of the electrical grid is in itself a novel approach. Therefore, this coordination objective is a contribution not only to the energy management field but also to the field of collective systems. Results show that the proposed DSM algorithm reduces the difference between the maxima and minima of the electrical grid in proportion to the amount of energy controlled by the algorithm. Thus, the greater the amount of energy controlled by the algorithm, the greater the improvement in the efficiency of the electrical grid. In addition to the advantages resulting from the smoothing of the aggregated consumption, other advantages arise from the distributed approach followed in this Thesis. These advantages are summarized in the following features of the proposed DSM algorithm: • Robustness: in a centralized system, a failure or breakage of the central node causes a malfunction of the whole system. Managing a grid from a distributed point of view implies that there is no central control node.
A failure in any facility does not affect the overall operation of the grid. • Data privacy: the use of a distributed topology means that there is no central node holding sensitive information about all consumers. This Thesis goes a step further: the proposed DSM algorithm does not use specific information about consumer behavior, making the coordination between facilities completely anonymous. • Scalability: the proposed DSM algorithm operates with any number of facilities, so new facilities can be incorporated without affecting its operation. • Low cost: the proposed DSM algorithm adapts to current grids without any topological requirements. In addition, every facility computes its own management with low computational requirements, so a central node with high computational power is not needed. • Quick deployment: the scalability and low-cost features of the proposed DSM algorithm allow a quick deployment. No complex planning is required to deploy this system.
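The anonymous, aggregate-only coordination described above can be hinted at with a minimal Kuramoto-style model of coupled oscillators: each facility keeps only a phase and updates it from an aggregate "mean field", so no individual consumer data is exchanged. The coupling constant, step size, and initial phases are assumptions; the thesis' actual combination of coupled oscillators and swarm intelligence is considerably more elaborate.

```python
import math

# Minimal Kuramoto-style sketch (an assumption, not the thesis'
# algorithm): oscillators coordinate using only an anonymous aggregate.

def kuramoto_step(phases, coupling, dt=0.05):
    """One update: each oscillator moves toward the mean field, which is
    computable from an anonymous aggregate (the order parameter)."""
    n = len(phases)
    # Anonymous aggregate: mean-field components, no individual identities.
    cx = sum(math.cos(p) for p in phases) / n
    sy = sum(math.sin(p) for p in phases) / n
    r = math.hypot(cx, sy)              # order parameter in [0, 1]
    psi = math.atan2(sy, cx)            # mean phase
    return [p + dt * coupling * r * math.sin(psi - p) for p in phases], r

phases = [0.1, 2.0, 4.0, 5.5]           # arbitrary initial phases
for _ in range(400):
    phases, r = kuramoto_step(phases, coupling=2.0)
# After many steps the order parameter r approaches 1: the facilities
# have coordinated without revealing anything about themselves.
```

In a DSM setting, each facility's phase could determine when it runs its flexible loads, so that phase relationships spread consumption over time; here only the convergence mechanism is shown.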
Abstract:
The cloud computing paradigm has risen in popularity within both industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, the desire to host a company's data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement or even fully substitute public offerings.
Unfortunately, a lack of standardization has prevented private infrastructure management solutions from developing to a sufficient level, and a myriad of different options has induced a fear of lock-in in customers. One of the causes of this problem is the misalignment between academic research and industry offerings: the former focuses on studying idealized scenarios dissimilar from real-world situations, while the latter develops solutions without considering how they fit with common standards, or without disseminating its results at all. With the aim of solving this problem, I propose a modular management system for private cloud infrastructures that focuses on the applications instead of just the hardware resources. This management system follows the autonomic computing paradigm and is designed around a simple information model developed to be compatible with common standards. This model splits the environment into two views that separate the concerns of the stakeholders while at the same time enabling traceability between the physical environment and the virtual machines deployed onto it. In this model, cloud applications are classified into three broad types (Services, Big Data Jobs, and Instance Reservations), so that the management system can take advantage of each type's features. The information model is paired with a set of atomic, reversible, and independent management actions which determine the operations that can be performed over the environment and which are used to realize the cloud environment's scalability. I also describe a management engine that, from the environment's state and using the aforementioned set of actions, is tasked with resource placement. It is divided into two tiers: the Application Managers layer, concerned only with applications; and the Infrastructure Manager layer, responsible for the actual physical resources. This management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure.
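A minimal sketch of the two-view information model and of an atomic, reversible management action might look as follows. All class, field, and function names are invented for illustration; the thesis' model is richer and designed for compatibility with common standards.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch: an infrastructure view (hosts) and an application
# view (VMs typed as Service / BigDataJob / InstanceReservation), linked
# so that virtual machines are traceable to physical resources.

@dataclass
class PhysicalHost:                      # infrastructure view
    name: str
    vms: list = field(default_factory=list)

@dataclass
class VirtualMachine:                    # application view
    name: str
    app_type: str                        # "Service" | "BigDataJob" | "InstanceReservation"
    host: Optional[PhysicalHost] = None  # traceability link

def place(vm, host):
    """Atomic management action: attach a VM to a host."""
    vm.host = host
    host.vms.append(vm)

def unplace(vm):
    """The inverse action, making `place` reversible."""
    vm.host.vms.remove(vm)
    vm.host = None

h = PhysicalHost("h1")
vm = VirtualMachine("web-1", "Service")
place(vm, h)
assert vm.host is h and vm in h.vms      # both views stay consistent
unplace(vm)
assert vm.host is None and h.vms == []   # the action is fully undone
```

Keeping every action atomic and reversible in this way is what lets a management engine compose and roll back placement decisions safely.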
The placement problem is tackled during one phase (consolidation) by an integer programming solver, and during the other (online) by a custom heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is paired with monitoring and actuator architectures: the former collects the necessary information from the environment, while the latter is modular in design and capable of interfacing with several technologies and offering several access interfaces.
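As a toy illustration of the two-phase placement idea, the sketch below pairs a brute-force consolidation pass (a tiny stand-in for the integer programming solver) with a greedy first-fit rule for the online phase. Host capacities, VM sizes, and all names are invented assumptions.

```python
from itertools import product

# Hedged sketch of two-phase VM placement: exhaustive consolidation
# (stand-in for an ILP solver) plus a cheap online first-fit heuristic.

def first_fit(hosts, vm_cpu):
    """Online phase: place the new VM on the first host with spare CPU."""
    for name, free in hosts.items():
        if free >= vm_cpu:
            hosts[name] = free - vm_cpu
            return name
    return None  # no capacity: the request is rejected or queued

def consolidate(vms, capacity, n_hosts):
    """Consolidation phase: search for an assignment of VM CPU demands
    to hosts that keeps the fewest hosts active."""
    best = None
    for assign in product(range(n_hosts), repeat=len(vms)):
        used = [0] * n_hosts
        for vm, h in zip(vms, assign):
            used[h] += vm
        if all(u <= capacity for u in used):
            active = sum(1 for u in used if u)
            if best is None or active < best[0]:
                best = (active, assign)
    return best

hosts = {"h1": 8, "h2": 8}
assert first_fit(hosts, 6) == "h1"
assert first_fit(hosts, 4) == "h2"       # h1 only has 2 CPUs left
active, assign = consolidate([4, 3, 2, 5], capacity=8, n_hosts=3)
assert active == 2                        # e.g. pack 4+3 and 2+5
```

The split mirrors the abstract's design choice: the online rule must be fast enough to answer arriving requests, while the periodic consolidation pass can afford an exact (and expensive) optimization.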