919 results for information control
Abstract:
Modern power networks incorporate communications and information technology infrastructure into the electrical power system to create a smart grid in terms of control and operation. The smart grid enables real-time communication and control between consumers and utility companies, allowing suppliers to optimize energy usage based on price preferences and system technical issues. The smart grid design aims to provide overall power system monitoring and to create protection and control strategies that maintain system performance, stability and security. This dissertation contributed to the development of a unique and novel smart grid test-bed laboratory with integrated monitoring, protection and control systems. This test-bed was used as a platform to test the smart grid operational ideas developed here. The implementation of this system in real-time software creates an environment for studying, implementing and verifying the novel control and protection schemes developed in this dissertation. Phasor measurement techniques were developed using the available Data Acquisition (DAQ) devices in order to monitor all points in the power system in real time. This provides a practical view of system parameter changes and abnormal conditions, together with information about system stability and security. These developments provide valuable measurements for power system operators in energy control centers. Phasor measurement technology is an excellent solution for improving system planning, operation and energy trading, in addition to enabling advanced applications in Wide Area Monitoring, Protection and Control (WAMPAC). Moreover, a virtual protection system was developed and implemented in the smart grid laboratory with integrated functionality for wide area applications. Experiments and procedures were developed on the system in order to detect abnormal conditions and apply proper remedies to heal the system. A DC microgrid was designed and integrated into the AC system with appropriate control capability. This system represents realistic hybrid AC/DC microgrid connectivity to the AC side, and it was used to study how such an architecture can help remedy abnormal system conditions during operation. In addition, this dissertation explored the challenges and feasibility of implementing real-time system analysis features to monitor system security and stability measures. These indices were measured experimentally during the operation of the developed hybrid AC/DC microgrids. Furthermore, a real-time optimal power flow system was implemented to optimally manage power sharing between AC generators and DC-side resources. A study of a real-time energy management algorithm in hybrid microgrids was performed to evaluate the effects of using energy storage resources and their use in mitigating the impact of heavy loads on system stability and operational security.
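As a concrete illustration of the phasor measurement idea described above, the sketch below estimates a voltage phasor (RMS magnitude and phase angle) from one cycle of DAQ samples using a single-bin DFT. This is a minimal, generic example; the sampling rate, test signal, and the `estimate_phasor` function are illustrative assumptions and are not taken from the dissertation's implementation.

```python
import numpy as np

def estimate_phasor(samples: np.ndarray, fs: float, f0: float = 60.0) -> complex:
    """Estimate the fundamental-frequency phasor from the most recent cycle
    of real-valued DAQ samples. Returns a complex phasor whose magnitude is
    the RMS value and whose angle is the phase of the fundamental."""
    n = int(round(fs / f0))              # samples per nominal cycle
    window = samples[-n:]                # most recent cycle
    k = np.arange(n)
    dft = np.sum(window * np.exp(-2j * np.pi * k / n)) * 2 / n  # single-bin DFT
    return dft / np.sqrt(2)              # convert peak estimate to RMS phasor

# Example: a 60 Hz, 170 V-peak signal sampled at 3.84 kHz with a 30-degree shift
fs = 3840.0
t = np.arange(0, 0.1, 1 / fs)
v = 170 * np.cos(2 * np.pi * 60 * t + np.pi / 6)
ph = estimate_phasor(v, fs)
print(abs(ph), np.degrees(np.angle(ph)))  # approximately 120.2 V RMS at 30 degrees
```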
Abstract:
In his study - File Control: The Heart Of Business Computer Management - William G. O'Brien, Assistant Professor, The School of Hospitality Management at Florida International University, initially informs you: “Even though computers are an everyday part of the hospitality industry, many managers lack the knowledge and experience to control and protect the files in these systems. The author offers guidelines which can minimize or prevent damage to the business as a whole.” Our author opens this study with some anecdotal instances illustrating the failure of hospitality managers to exercise due caution with regard to computer-supported information systems inside their restaurants and hotels. “Of the three components that make up any business computer system (data files, programs, and hardware), it is files that are most important, perhaps irreplaceable, to the business,” O’Brien informs you. O’Brien breaks files down into two distinct categories: files of extrinsic value and, as their counterpart, files of intrinsic value. An example of an extrinsic-value file would be a restaurant’s wine inventory. “As sales are made and new shipments are received, the computer updates the file,” says O’Brien. “This information might come directly from a point-of-sale terminal or might be entered manually by an employee,” he further explains. On the intrinsic side of the equation, O’Brien wants you to know that the information itself is the valuable part of this type of file. Its value is over and above the file’s informational purpose as a pragmatic business tool, as it is in inventory control. “The information is money in the legal sense. For instance, figures moved about in banking system computers do not represent dollars; they are dollars,” O’Brien explains. “If the record of a dollar amount is erased from all computer files, then that money ceases to exist,” he warns. This type of information can also be bought and sold, as customer lists are to advertisers. Files must be protected, O’Brien stresses. “File security requires a systematic approach,” he discloses. O’Brien goes on to explain important elements to consider when evaluating file information. File back-up is also an important factor to think about, along with file storage and safety concerns. “Sooner or later, every property will have its fire, flood, careless mistake, or disgruntled employee,” O’Brien closes. “…good file control can minimize or prevent damage to the business as a whole.”
Abstract:
Protecting confidential information from improper disclosure is a fundamental security goal. While encryption and access control are important tools for ensuring confidentiality, they cannot prevent an authorized system from leaking confidential information to its publicly observable outputs, whether inadvertently or maliciously. Hence, secure information flow aims to provide end-to-end control of information flow. Unfortunately, the traditionally-adopted policy of noninterference, which forbids all improper leakage, is often too restrictive. Theories of quantitative information flow address this issue by quantifying the amount of confidential information leaked by a system, with the goal of showing that it is intuitively "small" enough to be tolerated. Given such a theory, it is crucial to develop automated techniques for calculating the leakage in a system. This dissertation is concerned with program analysis for calculating the maximum leakage, or capacity, of confidential information in the context of deterministic systems and under three proposed entropy measures of information leakage: Shannon entropy leakage, min-entropy leakage, and g-leakage. In this context, it turns out that calculating the maximum leakage of a program reduces to counting the number of possible outputs that it can produce. The new approach introduced in this dissertation is to determine two-bit patterns, the relationships among pairs of bits in the output; for instance we might determine that two bits must be unequal. By counting the number of solutions to the two-bit patterns, we obtain an upper bound on the number of possible outputs. Hence, the maximum leakage can be bounded. We first describe a straightforward computation of the two-bit patterns using an automated prover. We then show a more efficient implementation that uses an implication graph to represent the two-bit patterns. It efficiently constructs the graph through the use of an automated prover, random executions, STP counterexamples, and deductive closure. The effectiveness of our techniques, both in terms of efficiency and accuracy, is shown through a number of case studies found in recent literature.
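The counting reduction mentioned above can be illustrated with a toy deterministic program: the maximum leakage, under any of the three measures, is bounded by the logarithm of the number of distinct outputs. The brute-force enumeration below stands in for the dissertation's two-bit-pattern and implication-graph machinery, and the program `leak_check` is purely hypothetical.

```python
import math

def leak_check(secret: int) -> int:
    """Toy deterministic program over an 8-bit secret: reveals whether the
    secret exceeds a threshold and exposes its two lowest bits."""
    return ((1 if secret > 200 else 0) << 2) | (secret & 0b11)

# For deterministic systems, maximum leakage (channel capacity) is
# log2 of the number of feasible outputs.
outputs = {leak_check(s) for s in range(256)}      # enumerate all 8-bit secrets
print(len(outputs), "feasible outputs")            # 8
print("capacity bound:", math.log2(len(outputs)), "bits")  # 3.0
```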
Abstract:
The future power grid will effectively utilize renewable energy resources and distributed generation to respond to energy demand while incorporating information technology and communication infrastructure for their optimum operation. This dissertation contributes to the development of real-time techniques for wide-area monitoring and secure real-time control and operation of hybrid power systems. To handle the increased level of real-time data exchange, this dissertation develops a supervisory control and data acquisition (SCADA) system that is equipped with a state estimation scheme driven by the real-time data. This system is verified on a specially developed laboratory-based test bed facility, serving as a hardware and software platform, to emulate the actual scenarios of a real hybrid power system with the highest level of similarity and capability to practical utility systems. It includes phasor measurements at hundreds of measurement points on the system. These measurements were obtained from an especially developed laboratory-based Phasor Measurement Unit (PMU) that is utilized in addition to existing commercial PMUs. The developed PMU was used in conjunction with the interconnected system along with the commercial PMUs. The tested studies included a new technique for detecting partially islanded microgrids in addition to several real-time techniques for synchronization and parameter identification of hybrid systems. Moreover, owing to the numerous integrations of renewable energy resources through DC microgrids, this dissertation examines several practical cases for improving the interoperability of such systems. Furthermore, the increasing number of small and dispersed generating stations, and their need to connect quickly and properly to AC grids, motivated this work to explore the challenges that arise in synchronizing generators to the grid and to introduce a Dynamic Brake system to improve the process of connecting distributed generators to the power grid. Real-time operation and control require data communication security. A research effort in this dissertation was developed based on a Trusted Sensing Base (TSB) process for data communication security. The innovative TSB approach improves the security of the power grid as a cyber-physical system. It is based on available GPS synchronization technology and provides protection against confidentiality attacks in critical power system infrastructures.
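To make the state-estimation component more concrete, here is a minimal weighted-least-squares (WLS) estimator of the kind typically embedded in SCADA systems, sketched on a two-state linear (DC) model. The measurement matrix, covariances, and values are illustrative assumptions, not figures from the developed test bed.

```python
import numpy as np

# Linear measurement model z = H x + e, with x the bus voltage angles.
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0]])            # two angle measurements plus one flow-like measurement
R = np.diag([0.01, 0.01, 0.02])        # measurement error covariance (illustrative)
z = np.array([0.10, 0.04, 0.058])      # noisy real-time measurements (illustrative)

W = np.linalg.inv(R)                   # weights = inverse covariance
G = H.T @ W @ H                        # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)
print("estimated states:", x_hat)
print("residuals:", z - H @ x_hat)     # large residuals would flag bad data
```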
Abstract:
The theoretical construct of control has been defined as necessary (Etzioni, 1965), ubiquitous (Vickers, 1967), and on-going (E. Langer, 1983). Empirical measures, however, have not adequately given meaning to this potent construct, especially within complex organizations such as schools. Four stages of theory development and empirical testing of school building managerial control, using principals and teachers working within the nation's fourth largest district, are presented in this dissertation as follows: (1) a review and synthesis of social science theories of control across the literatures of organizational theory, political science, sociology, psychology, and philosophy; (2) a systematic analysis of school managerial activities performed at the building level within the context of curricular and instructional tasks; (3) the development of a survey questionnaire to measure school building managerial control; and (4) initial tests of construct validity including inter-item reliability statistics, principal components analyses, and multivariate tests of significance. The social science synthesis provided support for four managerial control processes: standards, information, assessment, and incentives. The systematic analysis of school managerial activities led to a further categorization between the structural frequency of behaviors and the discretionary qualities of behaviors across each of the control processes and the curricular and instructional tasks. Teacher survey responses (N=486) showed a significant difference between these two dimensions of control, structural frequency and discretionary qualities, for standards, information, and assessments, but not for incentives. The descriptive model of school managerial control suggests that (1) teachers perceive structural and discretionary managerial behaviors under information and incentives more clearly than activities representing standards or assessments, (2) standards are primarily structural while assessments are primarily qualitative, (3) teacher satisfaction is most closely related to the equitable distribution of incentives, (4) each of the structural managerial behaviors has a qualitative effect on teachers, and (5) certain qualities of managerial behaviors are perceived by teachers as distinctly discretionary, apart from school structure. The variables of teacher tenure and school effectiveness showed significant effects on school managerial control processes, while instructional level (elementary, junior, and senior) and individual school differences were not found to be significant for the construct of school managerial control.
Abstract:
Collaborative sharing of information is an increasingly necessary technique for achieving complex goals in today's fast-paced, technology-dominated world. The Personal Health Record (PHR) system has become a popular research area for sharing patients' information quickly among health professionals. PHR systems store and process sensitive information, so they should have proper security mechanisms to protect patients' private data. Thus, the access control mechanisms of the PHR should be well defined. Secondly, PHRs should be stored in encrypted form. Cryptographic schemes offering a more suitable solution for enforcing access policies based on user attributes are needed for this purpose. Since attribute-based encryption can resolve these problems, we propose a patient-centric framework that protects PHRs against untrusted service providers and malicious users. In this framework, we use the Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme as an efficient cryptographic technique, enhancing the security and privacy of the system as well as enabling access revocation. Patients can encrypt their PHRs and store them on untrusted storage servers. They also maintain full control over access to their PHR data by assigning attribute-based access control to selected data users and revoking unauthorized users instantly. In order to evaluate our system, we implemented a CP-ABE library and web services as part of our framework. We also developed an Android application based on the framework that allows users to register with the system, encrypt their PHR data and upload it to the server, while authorized users can download PHR data and decrypt it. Finally, we present experimental results and performance analysis, which show that the proposed system would be practical to deploy and can be applied in practice.
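The patient-centric flow can be sketched with an off-the-shelf CP-ABE implementation. The example below assumes the open-source charm-crypto library's BSW07 scheme; the attribute names and policy string are hypothetical, and the CP-ABE library and web services built for the framework described above may expose a different interface. In practice the random group element would serve as a symmetric key wrapping the actual PHR bytes (hybrid encryption).

```python
from charm.toolbox.pairinggroup import PairingGroup, GT
from charm.schemes.abenc.abenc_bsw07 import CPabe_BSW07  # assumed library

group = PairingGroup('SS512')
cpabe = CPabe_BSW07(group)

(pk, mk) = cpabe.setup()                                  # trusted setup
sk = cpabe.keygen(pk, mk, ['DOCTOR', 'CARDIOLOGY'])       # key for a data user's attributes

session_key = group.random(GT)                            # would wrap the PHR payload
ct = cpabe.encrypt(pk, session_key, '(DOCTOR and CARDIOLOGY)')  # patient-chosen policy

recovered = cpabe.decrypt(pk, sk, ct)                     # succeeds only if attributes satisfy the policy
assert recovered == session_key
```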
Abstract:
Erasure control coding has been exploited in communication networks with the aim of improving the end-to-end performance of data delivery across the network. To address the concerns over the strengths and constraints of erasure coding schemes in this application, we examine the performance limits of two erasure control coding strategies, forward erasure recovery and adaptive erasure recovery. Our investigation shows that the throughput of a network using an (n, k) forward erasure control code is capped by r = k/n when the packet loss rate p ≤ (te/n) and by k(1-p)/(n-te) when p > (te/n), where te is the erasure control capability of the code. It also shows that the lower bound of the residual loss rate of such a network is (np-te)/(n-te) for (te/n) < p ≤ 1. In particular, if the code used is maximum distance separable, the Shannon capacity of the erasure channel, i.e. 1-p, can be achieved and the residual loss rate is lower bounded by (p+r-1)/r for (1-r) < p ≤ 1. To address the requirements of real-time applications, we also investigate the service completion time of different schemes. It is revealed that the latency of the forward erasure recovery scheme is fractionally higher than that of the scheme without erasure control coding or retransmission mechanisms (using UDP), but much lower than that of the adaptive erasure scheme when the packet loss rate is high. Results on comparisons between the two erasure control schemes exhibit their advantages as well as disadvantages in the role of delivering end-to-end services. To show the impact of the derived bounds on the end-to-end performance of a TCP/IP network, a case study is provided to demonstrate how erasure control coding could be used to maximize the performance of practical systems. © 2010 IEEE.
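The bounds quoted above translate directly into code. The helpers below implement the throughput cap and the residual-loss lower bounds exactly as stated; the (255, 223) code with te = 32 used in the example is an illustrative choice, not a configuration examined in the paper.

```python
def throughput_cap(n: int, k: int, te: int, p: float) -> float:
    """Throughput cap of an (n, k) forward erasure control code with
    erasure control capability te at packet loss rate p."""
    r = k / n
    return r if p <= te / n else k * (1 - p) / (n - te)

def residual_loss_lower_bound(n: int, te: int, p: float) -> float:
    """Lower bound on the residual loss rate, valid for te/n < p <= 1."""
    return (n * p - te) / (n - te)

def mds_residual_loss_lower_bound(r: float, p: float) -> float:
    """Residual-loss lower bound for a maximum distance separable code,
    valid for (1 - r) < p <= 1."""
    return (p + r - 1) / r

# Illustrative (255, 223) code that can recover up to te = 32 erasures
print(throughput_cap(255, 223, 32, 0.05))              # 0.8745... (= k/n, since p <= te/n)
print(throughput_cap(255, 223, 32, 0.20))              # 0.8 (= k(1-p)/(n-te))
print(residual_loss_lower_bound(255, 32, 0.20))        # ~0.0852
print(mds_residual_loss_lower_bound(223 / 255, 0.20))  # ~0.0852, matching the MDS case
```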
Abstract:
Extracting wave energy from the seas has proven to be very difficult, although various technologies have been developed since the 1970s. Among the proposed technologies, only a few have actually progressed to advanced stages such as sea trials or pre-commercial sea trials and engineering. One critical question is how to design an efficient wave energy converter, or how the efficiency of a wave energy converter can be improved using optimal and control technologies, because higher energy conversion efficiency for a wave energy converter is always pursued and largely decides the cost of wave energy production. In this first part of the investigation, some conventional optimal and control technologies for improving wave energy conversion are examined in terms of their physical meaning, rather than through purely complex mathematical expressions, in the hope of clarifying some confusion in the development and terminology of these technologies and of helping to explain the physics behind them. As a result of this understanding of the physics and the principles of the optima, a new latching technology is proposed in which the latching duration is simply calculated from the wave period, rather than from future information or prediction; hence the technology could remove one of the technical barriers to implementing this control technology. From the examples given in the context, this new latching control technology can achieve a phase optimum in regular waves and hence significantly improve wave energy conversion. Further development of this latching control technology can be found in the second part of the investigation.
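For orientation only, the classic phase-control heuristic derives a latching duration from the wave period and the device's natural period. The sketch below implements that textbook rule; it is an assumption offered for illustration and is not necessarily the exact wave-period-based formula proposed in this work.

```python
def latching_duration(wave_period: float, natural_period: float) -> float:
    """Classic phase-matching heuristic: hold the body at zero velocity for
    the time by which the wave period exceeds the device's natural period,
    split over the two latching events per wave cycle."""
    if wave_period <= natural_period:
        return 0.0                      # device already responds fast enough; no latching
    return (wave_period - natural_period) / 2.0

# Example: 9 s regular waves acting on a converter with a 5 s natural period
print(latching_duration(9.0, 5.0))      # 2.0 s of latching per half-cycle
```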
Abstract:
While a great amount of attention is being given to the development of nanodevices, both through academic research and private industry, the field is still on the verge. Progress hinges upon the development of tools and components that can precisely control the interaction between light and matter, and that can be efficiently integrated into nano-devices. Nanofibers are one of the most promising candidates for such purposes. However, in order to fully exploit their potential, a more intimate knowledge of how nanofibers interact with single neutral atoms must be gained. As we learn more about the properties of nanofiber modes and the way they interface with atoms, and as the technology develops that allows them to be prepared with more precisely known properties, they become more and more adaptable and effective. The work presented in this thesis touches on many topics, which is testament to the broad range of applications and high degree of promise that nanofibers hold. For immediate use, we need to fully grasp how they can be best implemented as sensors, filters, detectors, and switches in existing nano-technologies. Areas of interest also include how they might be best exploited for probing atom-surface interactions, single-atom detection and single-photon generation. Nanofiber research is also motivated by their potential integration into fundamental cold-atom quantum experiments, and the role they can play there. Combining nanofibers with existing optical and quantum technologies is a powerful strategy for advancing areas like quantum computation, quantum information processing, and quantum communication. In this thesis I present a variety of theoretical work, which explores a range of the applications listed above. The first work presented concerns the use of the evanescent fields around a nanofiber to manipulate an existing trapping geometry and therefore influence the centre-of-mass dynamics of the atom. The second work presented explores interesting trapping geometries that can be achieved in the vicinity of a fiber in which just four modes are allowed to propagate. In a third study I explore the use of a nanofiber as a detector of small numbers of photons by calculating the rate of emission into the fiber modes when the fiber is moved along next to a regularly separated array of atoms. Also included are some results from a work in progress, where I consider the scattered field that appears along the nanofiber axis when a small number of atoms trapped along that axis are illuminated orthogonally; some interesting preliminary results are outlined. Finally, in contrast with the rest of the thesis, I consider some interesting physics that can be done in one of the trapping geometries that can be created around the fiber: here I explore the ground states of a phase-separated two-component superfluid Bose-Einstein condensate trapped in a toroidal potential.
Abstract:
Veterinary medicines (VMs) from the agricultural industry can enter the environment in a number of ways. These include direct exposure through aquaculture, accidental spillage and disposal, and indirect entry by leaching from manure or runoff after treatment. Many compounds used in animal treatments have ecotoxic properties that may have chronic or sometimes lethal effects when they come into contact with non-target organisms. VMs enter the environment in mixtures, potentially having additive effects. Traditional ecotoxicology tests are used to determine the lethal and sometimes reproductive effects on freshwater and terrestrial organisms. However, organisms used in ecotoxicology tests can be unrepresentative of the populations that are likely to be exposed to the compound in the environment. Most often the tests address single-compound toxicity, but mixture effects may be significant and should be included in ecotoxicology testing. This work investigates the use, measured environmental concentrations (MECs) and potential impact of sea lice treatments on salmon farms in Scotland. Alternative methods for ecotoxicology testing, including mixture toxicity, and the use of in silico techniques to predict the chronic impact of VMs on different species of aquatic organisms were also investigated. The Scottish Environment Protection Agency (SEPA) provided information on the use of five sea lice treatments from 2008-2011 on Scottish salmon farms. This information was combined with the recently available data on sediment MECs for the years 2009-2012 provided by SEPA using ArcGIS 10.1. In-depth analysis of this data showed that, of a total of 55 sites, 30 sites had a MEC higher than the maximum allowable concentration (MAC) set out by SEPA for emamectin benzoate and 7 sites had a MEC higher than the MAC for teflubenzuron. A number of sites up to 16 km away from the nearest salmon farm reported as using either emamectin benzoate or teflubenzuron measured positive for the two treatments. There was no relationship between current direction and the distribution of the sea lice treatments, nor was there any evidence for alternative sources of the compounds, e.g. land treatments. The sites with MECs higher than the MAC could pose a risk to non-target organisms and disrupt the species dynamics of the area. There was evidence that some marine protected sites might be at risk of exposure to these compounds. To complement this work, the acute mixture toxicity of the five sea lice treatments, plus one major metabolite, 3-phenoxybenzoic acid (3PBA), was measured using an assay based on the bioluminescent bacterium Aliivibrio fischeri. When exposed to the five sea lice treatments and 3PBA, A. fischeri showed a response to 3PBA, emamectin benzoate and azamethiphos, as well as to combinations of the three. In order to establish any additive effect of the sea lice treatments, the efficacy of two mixture prediction equations, concentration addition (CA) and independent action (IA), was tested using the results from single-compound dose-response curves. In this instance IA was the more effective prediction method, with a linear regression confidence interval of 82.6% compared with 22.6% for CA. In silico molecular docking was carried out to predict the chronic effects of 15 VMs (including the five used for sea lice control). Molecular docking has been proposed as an alternative screening method for the chronic effects of large animal treatments on non-target organisms.
Oestrogen receptor alpha (ERα) models of 7 non-target bony fish and the African clawed frog Xenopus laevis were built using SwissModel. These models were then ‘docked’ to oestradiol, the synthetic oestrogen ethinylestradiol, two known xenoestrogens, dichlorodiphenyltrichloroethane (DDT) and bisphenol A (BPA), the anti-oestrogen breast cancer treatment tamoxifen, and the 15 VMs using AutoDock 4. Based on the results of this work, four VMs were identified as possible xenoestrogens or anti-oestrogens: cypermethrin, deltamethrin, fenbendazole and teflubenzuron. Further investigation of these four VMs using in vitro assays has been suggested as future work. A modified recombinant yeast oestrogen screen (YES) was attempted using the cDNA of the ERα of the zebrafish Danio rerio and the rainbow trout Oncorhynchus mykiss. Due to time constraints and difficulties with the cloning protocols, this work could not be completed. Use of such in vitro assays would allow further investigation into the oestrogenic potential of the highlighted VMs. In conclusion, VMs used as sea lice treatments, such as teflubenzuron and emamectin benzoate, may be more persistent and have a wider range in the environment than previously thought. Mixtures of sea lice treatments have been found to persist together in the environment, and the effects of these mixtures on the bacterium A. fischeri can be predicted using the IA equation. Finally, molecular docking may be a suitable tool to predict chronic endocrine-disrupting effects and identify varying degrees of impact on the ERα of nine species of aquatic organisms.
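For reference, the two mixture-prediction models compared in this work are commonly written as 1/EC50_mix = Σ p_i/EC50_i for concentration addition and E_mix = 1 − Π(1 − E_i) for independent action. The short helpers below implement these standard formulations; the numerical example is illustrative and does not use the thesis data.

```python
import numpy as np

def concentration_addition_ec50(fractions, ec50s):
    """Mixture EC50 predicted by concentration addition (CA):
    1 / EC50_mix = sum_i (p_i / EC50_i), with p_i the fraction of
    compound i in the total mixture concentration."""
    fractions = np.asarray(fractions, dtype=float)
    ec50s = np.asarray(ec50s, dtype=float)
    return 1.0 / np.sum(fractions / ec50s)

def independent_action_effect(single_effects):
    """Mixture effect predicted by independent action (IA):
    E_mix = 1 - prod_i (1 - E_i), with E_i the fractional effect (0 to 1)
    of compound i at its concentration in the mixture."""
    single_effects = np.asarray(single_effects, dtype=float)
    return 1.0 - np.prod(1.0 - single_effects)

# Illustrative equimolar binary mixture (not the thesis data)
print(concentration_addition_ec50([0.5, 0.5], [2.0, 8.0]))  # 3.2
print(independent_action_effect([0.30, 0.20]))              # 0.44
```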
Abstract:
The rise of computing and the internet has brought about an ethical field of studies that some term information ethics, computer ethics, digital media ethics, or internet ethics. The aim of this contribution is to discuss information ethics' foundations in the context of the internet's political economy. The chapter first looks to ground the analysis in a comparison of two information ethics approaches, namely those outlined by Rafael Capurro and Luciano Floridi. It then develops, based on these foundations, analyses of the information-ethical dimensions of two important areas of social media: one concerns the framing of social media by a surveillance-industrial complex in the context of Edward Snowden's revelations, and the other deals with issues of digital labour processes and issues of class that arise in this context. The contribution asks ethical questions about these two phenomena that bring up issues of power, exploitation, and control in the information age. It asks if, and if so how, the approaches of Capurro and Floridi can help us to understand ethico-political aspects of the surveillance-industrial complex and digital labour.
Abstract:
Oncological patients undergo invasive exams in order to obtain an accurate diagnosis; these procedures may cause maladaptive reactions (fear, anxiety and pain). In breast cancer in particular, the most common diagnostic technique is the incisional biopsy. Most patients are unaware of what the procedure involves, and for that reason they may focus their thoughts on possible events such as pain, bleeding, the anesthesia, or the later surgical wound care. Anxiety and pain may provoke physiological, behavioral and emotional complications, and for this reason the Behavioral Medicine trained psychologist takes an active role before and after the biopsy. The aim of this study was to evaluate the effect of a cognitive-behavioral program to reduce anxiety in women undergoing incisional biopsy for the first time. There were 10 participants from the Oncology service of the Hospital Juárez de México; all of them were treated as outpatients. The intervention program focused on psycho-education and passive relaxation training using videos, tape-recorded instructions and pamphlets. Anxiety was measured using the IDARE-State inventory and a visual-analogue scale of anxiety (EEF-A), together with measurements of blood pressure and heart rate. Data were analyzed both intrasubject and intersubject using the Wilcoxon test (p≤0.05). The results show a reduction in anxiety (both in scores and in ranges), as well as a reduction in the EEF-A.
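A paired Wilcoxon signed-rank comparison of pre- and post-intervention anxiety scores takes only a few lines. The sketch below uses SciPy with made-up scores for ten participants, purely to illustrate the intrasubject test reported in the study.

```python
from scipy.stats import wilcoxon

# Illustrative pre/post anxiety scores for 10 participants (not the study's data)
pre  = [52, 48, 61, 55, 47, 58, 50, 63, 49, 56]
post = [41, 45, 50, 44, 46, 47, 43, 52, 42, 45]

stat, p = wilcoxon(pre, post)   # paired (intrasubject) signed-rank test
print(f"W = {stat}, p = {p:.4f}")
if p <= 0.05:
    print("Anxiety scores decreased significantly after the intervention.")
```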
Abstract:
This paper aims to analyse a sample of Galician co-ops to verify whether or not it is possible to deduce different financial behaviours among co-op partners from the amount of net-surplus. To this end, our study adds the net-surplus to the variation registered in certain account entries so that other residual incomes yielded by the co-op may be considered. The distribution of these revenues shows that partners do not usually choose to fully anticipate residual incomes. This reveals that some firms follow a positive net-surplus strategy, which differs from the null net-surplus strategy asserted by classical financial theory. Furthermore, the results show that the differences between the two strategies are statistically significant. This opens a path for future research on the determinants explaining why co-op partners voluntarily renounce anticipating these residual incomes. Such behaviour only arises when the yearly accounts render a positive result, thereby making the accounting net-surplus a useful tool for analysing financial information in co-op societies.