900 results for pacs: data security
Abstract:
Disasters are complex events characterized by damage to key infrastructure and population displacement into disaster shelters. Assessing the living environment in shelters during disasters is a crucial health security concern. Until now, jurisdictional knowledge of and preparedness for those assessment methods, and of deficiencies found in shelters, has been limited. A cross-sectional survey (STUSA survey) ascertained knowledge and preparedness for those assessments in all 50 states, DC, and 5 US territories. Descriptive analysis of overall knowledge and preparedness was performed. Fisher's exact tests analyzed differences between two groups: jurisdiction type and population size. Two logistic regression models analyzed earthquake and hurricane risks as predictors of knowledge and preparedness. A convenience sample of state shelter assessment records (n = 116) was analyzed to describe environmental health deficiencies found during selected events. Overall, 55 (98%) of jurisdictions (states and territories) responded and appeared to be knowledgeable about these assessments (states 92%, territories 100%, p = 1.000) and engaged in disaster planning with shelter partners (states 96%, territories 83%, p = 0.564). Few had shelter assessment procedures (states 53%, territories 50%, p = 1.000) or training in disaster shelter assessments (states 41%, territories 60%, p = 0.638). Neither knowledge nor preparedness was predicted by disaster risks, population size, or jurisdiction type in either model. Knowledge model: hurricane (adjusted OR 0.69, 95% CI 0.06-7.88); earthquake (OR 0.82, 95% CI 0.17-4.06); both risks (OR 1.44, 95% CI 0.24-8.63). Preparedness model: hurricane (OR 1.91, 95% CI 0.06-20.69); earthquake (OR 0.47, 95% CI 0.7-3.17); both risks (OR 0.50, 95% CI 0.06-3.94). Environmental health deficiencies documented in shelter assessments occurred mostly in sanitation (30%), facility (17%), food (15%), and sleeping areas (12%), and during ice storms and tornadoes.
More research is needed on environmental health assessments of disaster shelters, particularly in areas that may provide better insight into the living environment of all shelter occupants and the potential effects on disaster morbidity and mortality. Research is also needed to evaluate the effectiveness and usefulness of these assessment methods, and of the available data on environmental health deficiencies, in risk management to protect those at greater risk in shelter facilities during disasters.
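The odds ratios and confidence intervals reported above can be illustrated with a minimal sketch: computing an unadjusted odds ratio and its 95% Wald confidence interval from a 2x2 table. The counts below are invented for illustration and are not taken from the survey.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald confidence interval from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for the Wald interval
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: knowledgeable vs. not, by hurricane risk
print(odds_ratio_ci(10, 20, 5, 40))
```

Note that the abstract reports *adjusted* ORs from multivariable logistic regression; this sketch only shows where an unadjusted OR and its interval come from.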
Abstract:
Until recently, the use of biometrics was restricted to high-security environments and criminal identification applications, for economic and technological reasons. In recent years, however, biometric authentication has become part of people's daily lives. The large-scale use of biometrics has shown that users within a system may have different degrees of accuracy. Some people may have trouble authenticating, while others may be particularly vulnerable to imitation. Recent studies have investigated and identified these types of users, giving them the names of animals: Sheep, Goats, Lambs, Wolves, Doves, Chameleons, Worms, and Phantoms. The aim of this study is to evaluate the existence of these user types in a fingerprint database and to propose a new way of investigating them, based on the performance of verification between subjects' samples. After introducing some basic concepts in biometrics and fingerprints, we present the biometric menagerie and how to evaluate it.
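As a rough illustration of the menagerie idea (not the study's own method), users can be labeled from their match scores: a low average genuine score suggests a Goat, while a high average impostor score against a user suggests a Lamb. The score data, thresholds, and labeling rule below are invented assumptions.

```python
def menagerie_labels(genuine, impostor, g_thr=0.5, i_thr=0.5):
    """genuine:  {user: scores of the user against their own samples}
       impostor: {user: scores others achieve against the user}
    Low genuine scores -> Goat (trouble authenticating);
    high impostor scores against a user -> Lamb (easy to imitate);
    otherwise -> Sheep (the well-behaved majority)."""
    labels = {}
    for user in genuine:
        g = sum(genuine[user]) / len(genuine[user])
        i = sum(impostor[user]) / len(impostor[user])
        if g < g_thr:
            labels[user] = "Goat"
        elif i >= i_thr:
            labels[user] = "Lamb"
        else:
            labels[user] = "Sheep"
    return labels

# Invented example scores for three users
print(menagerie_labels(
    {"u1": [0.9, 0.8], "u2": [0.2, 0.3], "u3": [0.9]},
    {"u1": [0.1], "u2": [0.1], "u3": [0.8]},
))  # {'u1': 'Sheep', 'u2': 'Goat', 'u3': 'Lamb'}
```

Wolves, Doves, and the other types require looking at a user's scores *as an impostor* against others, which this one-sided sketch omits.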
Abstract:
With the advent of the Internet, the number of users with effective network access and the ability to share information with the whole world has grown continuously over the years. With the introduction of social media, moreover, users are led to transfer a great deal of personal information onto the web, making it available to various companies. Furthermore, the world of the Internet of Things, in which sensors and machines act as agents on the network, gives each user a growing number of devices, directly connected to one another and to the global network. In proportion to these factors, the volume of data generated and stored is also increasing dramatically, giving rise to a new concept: Big Data. Consequently, there arises the need for new tools that can exploit the computing power offered today by more complex architectures, which bring together, under a single system, a set of hosts useful for analysis. Indeed, such a vast quantity of data, routine where Big Data is concerned, combined with equally high transmission and transfer speeds, makes data storage awkward, all the more so if the storage techniques are traditional DBMSs. A classical relational solution would in fact allow data to be processed only on request, producing delays, significant latencies, and the inevitable loss of fractions of the dataset. It is therefore necessary to turn to new technologies and tools suited to needs other than classical batch analysis. In particular, this thesis considers Data Stream Processing, designing and prototyping a system based on Apache Storm, with cyber security chosen as the field of application.
Abstract:
Acknowledgements The authors would like to thank Jonathan Dick, Josie Geris, Jason Lessels, and Claire Tunaley for data collection and Audrey Innes for lab sample preparation. We also thank Christian Birkel for discussions about the model structure and comments on an earlier draft of the paper. Climatic data were provided by Iain Malcolm and Marine Scotland Fisheries at the Freshwater Lab, Pitlochry. Additional precipitation data were provided by the UK Meteorological Office and the British Atmospheric Data Centre (BADC). We thank the European Research Council ERC (project GA 335910 VEWA) for funding the VeWa project.
Abstract:
We thank Dr. R. Yang (formerly at ASU), Dr. R.-Q. Su (formerly at ASU), and Mr. Zhesi Shen for their contributions to a number of original papers on which this Review is partly based. This work was supported by ARO under Grant No. W911NF-14-1-0504. W.-X. Wang was also supported by NSFC under Grants No. 61573064 and No. 61074116, as well as by the Fundamental Research Funds for the Central Universities, Beijing Nova Programme.
Abstract:
To explore the feasibility of processing Compact Muon Solenoid (CMS) analysis jobs across the wide area network, the FIU CMS Tier-3 center and the Florida CMS Tier-2 center designed a remote data access strategy. A Kerberized Lustre test bed was installed at the Tier-2, designed to provide storage resources to private-facing worker nodes at the Tier-3. However, the Kerberos security layer is not capable of authenticating resources behind a private network. As a remedy, an xrootd server was installed on a public-facing node at the Tier-3 to export the file system to the private-facing worker nodes. We report the performance of CMS analysis jobs processed by the Tier-3 worker nodes accessing data from a Kerberized Lustre file system. The processing performance of this configuration is benchmarked against a direct connection to the Lustre file system, and separately, against a configuration where the xrootd server is near the Lustre file system.
Abstract:
This article discusses the challenges irregular migration poses for the security of the EU. They are analyzed starting with the European Security Strategy 2003 and the Report on its Implementation 2008, and the article notes many failures: EU Members did not follow the directives adopted in Brussels, migration and asylum policies were mismanaged, and numerous actions can be characterized as improvised, scattered, or irresponsible. The 2016 Global Strategy recognizes these failures and calls on European leaders to reconsider how the EU functions and operates, suggesting the need for greater unity and cooperation to achieve a more effective migration policy. However, the article points out that practically all of the sections of the new Strategy dealing with migration were already embodied in previous Strategies, and stresses that, in parallel with the publication of the 2016 Global Strategy, actions were already being undertaken, such as the EU readmission agreements signed with several important third countries of origin.
Abstract:
Data mining can be defined as the extraction of implicit, previously unknown, and potentially useful information from data. Numerous researchers have been developing security technology and exploring new methods to detect cyber-attacks with the DARPA 1998 dataset for Intrusion Detection and the modified versions of this dataset, KDDCup99 and NSL-KDD, but until now no one has examined the performance of the Top 10 data mining algorithms selected by experts in data mining. The classification learning algorithms compared in this thesis are C4.5, CART, k-NN, and Naïve Bayes. The performance of these algorithms is compared by accuracy, error rate, and average cost on modified versions of the NSL-KDD train and test datasets, where the instances are classified into normal and four cyber-attack categories: DoS, Probing, R2L, and U2R. Additionally, the most important features for detecting cyber-attacks, overall and in each category, are evaluated with Weka's Attribute Evaluator and ranked according to Information Gain. The results show that the classification algorithm with the best performance on the dataset is the k-NN algorithm. The most important features for detecting cyber-attacks are basic features such as the number of seconds of a network connection, the protocol used for the connection, the network service used, the normal or error status of the connection, and the number of data bytes sent. The most important features for detecting DoS, Probing, and R2L attacks are basic features, and the least important are content features; for U2R attacks, by contrast, content features are the most important.
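The Information Gain ranking mentioned above can be sketched in a few lines of Python: for a categorical feature, the gain is the entropy of the class labels minus the entropy remaining after partitioning the records by that feature's value. The toy connection records below are invented, not NSL-KDD data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Entropy of the labels minus the weighted entropy left after
    partitioning the records by the feature's value."""
    n = len(labels)
    by_value = {}
    for v, y in zip(feature_values, labels):
        by_value.setdefault(v, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    return entropy(labels) - remainder

# Toy records: the "protocol" feature perfectly separates normal from DoS,
# while a constant feature carries no information.
labels = ["normal", "dos", "normal", "dos"]
print(information_gain(["tcp", "udp", "tcp", "udp"], labels))  # 1.0
print(information_gain(["a", "a", "a", "a"], labels))          # 0.0
```

Continuous features such as connection duration or bytes sent would first need discretization before this categorical formulation applies.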
Abstract:
After years of deliberation, the EU Commission sped up the reform process of a common EU digital policy considerably in 2015 by launching the EU digital single market strategy. In particular, two core initiatives of the strategy were agreed upon: the General Data Protection Regulation and the Network and Information Security (NIS) Directive law texts. A new initiative was additionally launched addressing the role of online platforms. This paper focuses on the platform privacy rationale behind the data protection legislation, primarily based on the proposal for a new EU-wide General Data Protection Regulation. We analyse the rationale of the legislation from an Information Systems perspective to understand the role user data plays in creating platforms that we identify as "processing silos". Generative digital infrastructure theories are used to explain the innovative mechanisms thought to govern the notion of digitalization and the successful business models affected by it. We foresee continued judicial data protection challenges with the now proposed Regulation as the adoption of the "Internet of Things" continues. The findings of this paper illustrate that many of the existing issues can be addressed through legislation from a platform perspective. We conclude by proposing three modifications to the governing rationale, which would improve not only platform privacy for the data subject, but also entrepreneurial efforts in developing intelligent service platforms. The first modification aims to improve service differentiation on platforms by lessening the ability of incumbent global actors to lock the user base in to their service or platform. The second modification posits limiting syndicates' current unwanted tracking ability by separating authentication and data store services from any processing entity. Thirdly, we propose a change in how security and data protection policies are reviewed, suggesting a third-party auditing procedure.
Abstract:
B-1 Medicaid Reports -- The monthly Medicaid series of eight reports provides summaries of Medicaid eligibles, recipients served, and total payments by county, category of service, and aid category. These reports may also be known as the B-1 Reports. Each report is available as a PDF for printing or as a CSV file for data analysis. The reports are: IAMM1800-R001--Medically Needy by County - No Spenddown and With Spenddown; IAMM1800-R002--Total Medically Needy, All Other Medicaid, and Grand Total by County; IAMM2200-R002--Monthly Expenditures by Category of Service; IAMM2200-R003--Fiscal YTD Expenditures by Category of Service; IAMM3800-R001--ICF & ICF-MR Vendor Payments by County; IAMM4400-R001--Monthly Expenditures by Eligibility Program; IAMM4400-R002--Monthly Expenditures by Category of Service by Program; IAMM4600-R002--Elderly Waiver Summary by County.
B-1 Monthly Report of Medical Services Provided under Title XIX of the Social Security Act, May 2016
Abstract:
The generation of heterogeneous big data sources with ever-increasing volumes, velocities, and veracities over the last few years has inspired the data science and research community to address the challenge of extracting knowledge from big data. Such a wealth of generated data can be intelligently exploited to advance our knowledge of our environment, public health, critical infrastructure, and security. In recent years we have developed generic approaches to process such big data at multiple levels to advance decision support, specifically data processing with semantic harmonisation, low-level fusion, analytics, and knowledge modelling with high-level fusion and reasoning. These approaches will be introduced and presented in the context of the TRIDEC project results on critical oil and gas industry drilling operations, and of the ongoing large eVacuate project on critical crowd behaviour detection in confined spaces.