866 results for Information Security, Safe Behavior, Users’ behavior, Brazilian users, threats
Abstract:
Limited in motivation and cognitive ability to process the increasing amount of information on their Newsfeed, users apply heuristic processing to form their attitudes. Rather than extensively analysing the content, they increasingly rely on heuristic cues – such as the number of comments and likes as well as the level of relationship with the “poster” – to process the incoming information. In this paper we explore the impact these heuristic cues have on users’ affective and cognitive attitudes towards the posts on their Newsfeed. We conduct a survey based on a Facebook application that allows users to evaluate Newsfeed posts in real time. Applying two distinct panel-regression methods, we report robust results indicating a certain relationship primacy effect when users process information: only if the level of relationship with the “poster” is low is the impact of comments and likes on attitude taken into account, whereby likes trigger positive evaluations, whereas comments trigger negative ones.
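The relationship-primacy finding above is essentially an interaction effect in a panel regression. As a rough illustration of how such an effect can be estimated, the sketch below fits a user fixed-effects regression with cue-by-relationship interactions; the survey file, column names, weak-tie cut-off and single specification are assumptions, not the two estimators actually used in the paper.

```python
# Minimal sketch: user fixed-effects regression with cue x relationship interactions.
# Column names (attitude, likes, comments, relationship, user_id) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

posts = pd.read_csv("newsfeed_evaluations.csv")  # one row per evaluated post

# Flag posts from weakly related "posters"; the 3-point cut-off is an assumption.
posts["weak_tie"] = (posts["relationship"] <= 3).astype(int)

# User dummies absorb stable individual differences (fixed effects);
# the interaction terms test whether likes/comments only matter for weak ties.
model = smf.ols(
    "attitude ~ likes * weak_tie + comments * weak_tie + C(user_id)",
    data=posts,
).fit(cov_type="cluster", cov_kwds={"groups": posts["user_id"]})
print(model.summary())
```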
Abstract:
The problem of information overload on Facebook is worsening as users expand their networks. The growing quantity and increasingly poor quality of information on the Newsfeed may interfere with users’ hedonic experience, resulting in frustration and dissatisfaction. In the long run, such developments threaten to undermine the sustainability of the platform. To address these issues, our study adopts a grounded theory approach to explore the phenomenon of information overload on Facebook. We investigate the main sources of information overload and identify the strategies users adopt to deal with it, as well as its possible consequences. In-depth analysis of the phenomenon allows us to uncover individual peculiarities in how users identify relevant information. Based on these, we provide valuable recommendations for network providers.
Abstract:
This article proposes an innovative biometric technique based on the idea of authenticating a person on a mobile device by gesture recognition. To accomplish this aim, a user is prompted to be recognized by a gesture he/she performs by moving his/her hand while holding a mobile device with an embedded accelerometer. As users are not able to repeat a gesture exactly in the air, an algorithm based on sequence alignment is developed to correct slight differences between repetitions of the same gesture. The robustness of this biometric technique has been studied in two different tests analyzing a database of 100 users with real falsifications. Equal Error Rates of 2.01% and 4.82% have been obtained in a zero-effort and an active impostor attack, respectively. A permanence evaluation is also presented, based on the analysis of the gestures of 25 users repeated in 10 sessions over a month. Furthermore, two different gesture databases have been developed: one made up of 100 genuine identifying 3-D hand gestures, with 3 impostors trying to falsify each of them, and another with 25 volunteers repeating their identifying 3-D hand gesture in 10 sessions over a month. To the best of our knowledge, these databases are the most extensive in published studies.
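The sequence-alignment step can be illustrated with dynamic time warping (DTW), a common way to align two recordings of different lengths sample by sample; the article's own algorithm may differ, and the threshold and array shapes below are purely illustrative.

```python
# Minimal sketch: dynamic time warping distance between two accelerometer
# recordings (each an array of (ax, ay, az) samples). DTW is one common
# sequence-alignment technique; it is not necessarily the exact algorithm
# used in the article.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Align two gesture recordings of possibly different lengths."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # distance between samples
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return float(cost[n, m])

# Toy usage: accept the attempt if it is close enough to the enrolled template.
template = np.random.randn(120, 3)                         # enrolled gesture (hypothetical)
attempt = template[::2] + 0.05 * np.random.randn(60, 3)    # faster, noisy repetition
THRESHOLD = 50.0                                           # would be tuned on real data
print("accepted" if dtw_distance(template, attempt) < THRESHOLD else "rejected")
```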
Abstract:
Over the last two years, online training has experienced a significant boom thanks to the training paradigm known as the MOOC (Massive Open Online Course). A MOOC simplifies distance learning thanks to its open, collaborative, massive and free nature. Unfortunately, the use of this kind of learning resource in the Spanish language is still a minority. This research describes an educational innovation experience highlighting the design, implementation and dissemination of the first Spanish-language MOOC devoted to information security, created by the Universidad Politécnica de Madrid. The experience addresses the advantages and disadvantages of this kind of learning resource, discussing issues that are vital to sustaining MOOC-type courses over time, such as interactivity and feedback with the course's users or the most appropriate way of representing the teaching content. All these questions are answered in the MOOC Crypt4you which, after more than 10 months of existence, suggests that this type of course can be very successful in training Spanish speakers, at least in the Spanish language.
Abstract:
This dissertation introduces an approach to generating tests for the fail-safe behavior of web applications, and we apply the approach to a commercial web application. We build models for both behavioral and mitigation requirements. We create mitigation tests from an existing functional black-box test suite by determining failure types and points of failure in the test suite and weaving in the required mitigation, based on weaving rules, to generate a test suite that tests proper mitigation of failures. A genetic algorithm (GA) is used to determine the points of failure and the types of failure that need to be tested. Mitigation test paths are woven into the behavioral test at the point of failure based on failure-specific weaving rules. A simulator was developed to evaluate the choice of parameters for the genetic algorithm. We show how to tune the fitness function and performed tuning experiments for the GA to determine what values to use for the exploration weight and the prospecting weight. We found that higher defect densities make prospecting and mining more successful, while lower mitigation defect densities need more exploration. We also compare the efficiency and effectiveness of the approach. First, the GA approach is compared to random selection. The results show that the GA performed better than random selection and that the approach was robust as the search space increased. Second, we compare the GA against four coverage criteria. The results of this comparison show that test requirements generated by the GA are more efficient than three of the four coverage criteria for large search spaces, while being equally effective. For small search spaces, the genetic algorithm is less effective than three of the four coverage criteria. The fourth coverage criterion is too weak and unable to find all defects in almost all cases. We also present a large case study of a mortgage system at one of our industrial partners and show how we formalize the approach. We evaluate the use of a GA to create test requirements; the evaluation covers the choice of the initial population, the multiplicity of runs and the cost of evaluating fitness. Finally, we build a selective regression testing approach based on the types of changes (add, delete, or modify) that can occur in the behavioral model, the fault model, the mitigation models, the weaving rules, and the state-event matrix. We provide a systematic method by showing the formalization steps for each type of change to the various models.
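As a rough illustration of the GA-driven selection of failure points and failure types, the sketch below evolves (test case, failure point, failure type) triples under a fitness function that balances exploration of new combinations against prospecting around already-selected points; the fitness terms, weights and sizes are hypothetical stand-ins, not the dissertation's tuned values.

```python
# Minimal sketch of a genetic algorithm that picks (test case, failure point,
# failure type) combinations to exercise mitigation logic. All numbers and the
# fitness function are hypothetical.
import random

NUM_TESTS, NUM_POINTS, NUM_FAILURE_TYPES = 50, 20, 5
EXPLORATION_WEIGHT, PROSPECTING_WEIGHT = 0.7, 0.3

def fitness(individual, seen):
    test, point, ftype = individual
    novelty = 0.0 if individual in seen else 1.0                      # reward unexplored combos
    clustering = 1.0 / (1 + sum(1 for s in seen if s[1] == point))    # prospect near prior picks
    return EXPLORATION_WEIGHT * novelty + PROSPECTING_WEIGHT * clustering

def random_individual():
    return (random.randrange(NUM_TESTS),
            random.randrange(NUM_POINTS),
            random.randrange(NUM_FAILURE_TYPES))

def evolve(generations=30, pop_size=40):
    population = [random_individual() for _ in range(pop_size)]
    seen = set()
    for _ in range(generations):
        scored = sorted(population, key=lambda ind: fitness(ind, seen), reverse=True)
        parents = scored[: pop_size // 2]        # keep the fittest half
        seen.update(parents)                     # these become test requirements
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1], random.choice((a[2], b[2])))   # simple crossover
            if random.random() < 0.1:                           # mutation
                child = random_individual()
            children.append(child)
        population = parents + children
    return seen

if __name__ == "__main__":
    print(f"{len(evolve())} failure-injection requirements selected")
```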
How Does the Denver Public Library System Respond to its Customer's Requests for Global Information?
Abstract:
The interaction between globally available information and public library users is a changing one. Global information is readily available, yet providers and users struggle to find efficiencies of time and resources. As a primary resource for global information, the Denver Public Library (DPL) is approaching this challenge by providing changing technology to a changing user base and by offering a customized approach to immigrant populations. DPL provides global information to library users through collections, programs and the Internet. Global information usage through the Internet and the collections cannot be directly measured due to privacy restrictions. Only 12.5% of general user programs focus on global information, and four percent of the budget serves immigrant users, which is greater than national averages.
Abstract:
"April 17, 1995"--Added t.p.
Abstract:
Monitoring is essential for the conservation of sites, but the capacity to undertake it in the field is often limited. Data collected by remote sensing have been identified as a partial solution to this problem and are becoming a feasible option, since increasing quantities of satellite data in particular are becoming available to conservationists. When suitably classified, satellite imagery can be used to delineate land cover types such as forest, and to identify any changes over time. However, the conservation community lacks (a) a simple tool appropriate to the need to monitor change in all types of land cover (e.g. not just forest), and (b) an easily accessible information system that allows simple land cover change analysis and data sharing to reduce duplication of effort. To meet these needs, we developed a web-based information system that allows users to assess land cover dynamics in and around protected areas (or other sites of conservation importance) from multi-temporal, medium-resolution satellite imagery. The system is based around an open-access toolbox that pre-processes and classifies Landsat-type imagery and then allows users to interactively verify the classification. These data are then open for others to use through the online information system. We first explain the imagery processing and data accessibility features, and then demonstrate the toolbox and the value of user verification using a case study of Nakuru National Park, Kenya. Monitoring and detection of disturbances can support the implementation of effective protection, assist the work of park managers and conservation scientists, and thus contribute to conservation planning, priority assessment and potentially to meeting the monitoring needs of Aichi Target 11.
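Once two dates of imagery have been classified, the change analysis itself reduces to comparing class maps pixel by pixel. The following sketch shows that comparison for two hypothetical classified rasters; the file names, class codes and NumPy-only workflow are assumptions and not part of the toolbox described above.

```python
# Minimal sketch: pixel-wise change detection between two already-classified
# land cover maps (e.g. from Landsat scenes of different dates).
import numpy as np

FOREST = 1                                     # hypothetical class code

older = np.load("nakuru_2010_classes.npy")     # 2-D array of class codes (hypothetical file)
newer = np.load("nakuru_2020_classes.npy")     # same shape, later date

changed = older != newer
forest_loss = (older == FOREST) & (newer != FOREST)

print(f"{changed.mean():.1%} of pixels changed class")
print(f"{forest_loss.sum()} pixels lost forest cover")

# Per-transition summary ("from class A to class B") for park managers.
transitions, counts = np.unique(
    np.stack([older[changed], newer[changed]]), axis=1, return_counts=True
)
for (src, dst), n in zip(transitions.T, counts):
    print(f"class {src} -> class {dst}: {n} pixels")
```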
Abstract:
Kernel-level malware is one of the most dangerous threats to the security of users on the Internet, so there is an urgent need for its detection. The most popular detection approach is misuse-based detection. However, it cannot keep up with today's advanced malware, which increasingly applies polymorphism and obfuscation. In this thesis, we present our integrity-based detection of kernel-level malware, which does not rely on specific features of the malware. We have developed an integrity analysis system that can derive and monitor integrity properties for commodity operating system kernels. In our system, we focus on two classes of integrity properties: data invariants and the integrity of Kernel Queue (KQ) requests. We adopt static analysis for data invariant detection and overcome several technical challenges: field sensitivity, array sensitivity, and pointer analysis. We identify data invariants that are critical to system runtime integrity in Linux kernel 2.4.32 and the Windows Research Kernel (WRK) with very low false positive and false negative rates. We then develop an Invariant Monitor to guard these data invariants against real-world malware. In our experiments, we are able to use Invariant Monitor to detect ten real-world Linux rootkits, nine real-world Windows malware samples and one synthetic Windows malware sample. We leverage static and dynamic analysis of the kernel and device drivers to learn the legitimate KQ requests. Based on the learned KQ requests, we build KQguard to protect KQs. At runtime, KQguard rejects all unknown KQ requests that cannot be validated. We apply KQguard to the WRK and the Linux kernel, and extensive experimental evaluation shows that KQguard is efficient (up to 5.6% overhead) and effective (capable of achieving zero false positives against representative benign workloads after appropriate training, and very low false negatives against 125 real-world malware samples and nine synthetic attacks). In our system, Invariant Monitor and KQguard cooperate to protect data invariants and KQs in the target kernel. By monitoring these integrity properties, we can detect malware through its violation of them during execution.
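The core idea of guarding a data invariant can be illustrated outside the kernel: record a fingerprint of a security-critical structure at a trusted point and flag any later deviation. The sketch below is only a user-space analogy with a hypothetical syscall table; the thesis's Invariant Monitor and KQguard operate inside the operating system kernel.

```python
# Minimal user-space analogy of invariant monitoring: snapshot a security-critical
# structure (a stand-in for a kernel dispatch table) and flag any later deviation.
import hashlib

def fingerprint(table: dict) -> str:
    """Hash the contents of a structure whose invariant is 'never changes'."""
    canonical = repr(sorted(table.items())).encode()
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical dispatch table mapping syscall names to handler addresses.
syscall_table = {"open": 0x1000, "read": 0x1010, "write": 0x1020}
baseline = fingerprint(syscall_table)          # invariant derived at a trusted point

# ... later, a rootkit-style modification hooks one entry ...
syscall_table["read"] = 0xDEAD

if fingerprint(syscall_table) != baseline:     # invariant monitor check
    print("integrity violation: protected structure was modified")
```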
Abstract:
This study examines the factors that influence public managers in the adoption of advanced practices related to Information Security Management. The research used, as the basis for its assertions, the security standard ISO 27001:2005 and a theoretical model based on the TAM (Technology Acceptance Model) of Venkatesh and Davis (2000). The method adopted was a field survey of national scope, with the participation of eighty public administrators from Brazilian states, all of them managers and planners in state governments. The approach was quantitative, and the research methods used for data analysis were descriptive statistics, factor analysis and multiple linear regression. The survey results showed a correlation between the constructs of the TAM model (ease of use, perceived value, attitude and intention to use) and agreement with the assertions drawn up in accordance with ISO 27001, showing that these factors influence managers in the adoption of such practices. For the other independent variables of the model (organizational profile, demographic profile and managers' behavior), no significant correlation with the assertions of the standard was identified, which points to the need for further research using such constructs. It is hoped that this study contributes positively to the progress of discussions about Information Security Management, the adoption of security standards and the Technology Acceptance Model.
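For illustration, the kind of multiple linear regression reported above can be set up as follows; the survey file and column names are hypothetical, and the actual study also applied descriptive statistics and factor analysis beforehand.

```python
# Minimal sketch: agreement with ISO 27001-based assertions regressed on TAM
# constructs. Column names and the survey file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("public_managers_survey.csv")  # one row per respondent

model = smf.ols(
    "iso27001_agreement ~ ease_of_use + perceived_value + attitude + intention_to_use",
    data=survey,
).fit()
print(model.summary())   # coefficients indicate which constructs relate to agreement
```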
Abstract:
Dissertation presented to the Escola Superior de Tecnologia of the Instituto Politécnico de Castelo Branco in fulfilment of the requirements for the degree of Master in Software Development and Interactive Systems, carried out under the scientific supervision of Professor Doutor Osvaldo Arede dos Santos of the Instituto Politécnico de Castelo Branco.
Abstract:
In many organizations, database schemas are considered one of the critical assets to be protected. From database schemas it is possible to infer not only the information being collected but also the way organizations manage their businesses and/or activities. One of the ways to disclose database schemas is through Create, Read, Update and Delete (CRUD) expressions. In fact, their use can follow strict security rules or be used in an unregulated way by malicious users. In the first case, users are required to master database schemas. This can be critical when applications that access the database directly, which we call database interface applications (DIA), are developed by third-party organizations via outsourcing. In the second case, users can partially or totally disclose database schemas by following malicious algorithms based on CRUD expressions. To overcome this vulnerability, we propose a new technique in which CRUD expressions can no longer be directly manipulated by DIAs. Whenever a DIA starts up, the associated database server generates a random codified token for each CRUD expression and sends it to the DIA, which then uses the token to request execution of the corresponding CRUD expression from the database server. In order to validate our proposal, we present a conceptual architectural model and a proof of concept.
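A minimal sketch of the token idea is given below: the server side keeps the actual CRUD expressions and hands the client (DIA) only opaque tokens generated at start-up, so the client never manipulates SQL directly. The statements, table and sqlite3 back end are illustrative assumptions, not the paper's architecture.

```python
import secrets
import sqlite3

# Hypothetical CRUD expressions registered on the server side.
CRUD_EXPRESSIONS = {
    "list_customers": "SELECT id, name FROM customers",
    "add_customer": "INSERT INTO customers (name) VALUES (?)",
}

class TokenServer:
    """Server-side mapping of opaque tokens to CRUD expressions."""
    def __init__(self, connection):
        self.connection = connection
        self.token_to_sql = {}
        self.name_to_token = {}
        # On DIA start-up: one fresh random token per registered CRUD expression.
        for name, sql in CRUD_EXPRESSIONS.items():
            token = secrets.token_hex(16)
            self.token_to_sql[token] = sql
            self.name_to_token[name] = token

    def execute(self, token, params=()):
        sql = self.token_to_sql.get(token)
        if sql is None:
            raise PermissionError("unknown token: CRUD expression not authorized")
        return self.connection.execute(sql, params).fetchall()

# Toy usage: the DIA holds only tokens and never builds or sees SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
server = TokenServer(conn)
server.execute(server.name_to_token["add_customer"], ("Alice",))
print(server.execute(server.name_to_token["list_customers"]))
```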
Abstract:
In database applications, access control security layers are mostly developed with tools provided by vendors of database management systems and deployed on the same servers that contain the data to be protected. This solution has several drawbacks. Among them we emphasize: (1) if policies are complex, their enforcement can lead to performance decay of database servers; (2) when modifications to the established policies imply modifications to the business logic (usually deployed at the client side), there is no alternative but to modify the business logic accordingly; and (3) malicious users can systematically issue CRUD expressions against the DBMS hoping to identify a security gap. In order to overcome these drawbacks, in this paper we propose an access control stack characterized by the following: most of the mechanisms are deployed at the client side; whenever security policies evolve, the security mechanisms are automatically updated at runtime; and client-side applications do not handle CRUD expressions directly. We also present an implementation of the proposed stack to prove its feasibility. This paper presents a new approach to enforcing access control in database applications and thereby expects to contribute positively to the state of the art in the field.