282 results for malicious gossip
Abstract:
Recent advances in the massively parallel computational abilities of graphics processing units (GPUs) have increased their use for general-purpose computation, as companies look to take advantage of big data processing techniques. This has given rise to the potential for malicious software targeting GPUs, which is of interest to forensic investigators examining the operation of software. The ability to carry out reverse engineering of software is of great importance within the security and forensics fields, particularly when investigating malicious software or carrying out forensic analysis following a successful security breach. Due to the complexity of the Nvidia CUDA (Compute Unified Device Architecture) framework, it is not clear how best to approach the reverse engineering of a piece of CUDA software. We carry out a review of the different binary output formats which may be encountered from the CUDA compiler, and their implications for reverse engineering. We then demonstrate the process of carrying out disassembly of an example CUDA application, to establish the various techniques available to forensic investigators carrying out black-box disassembly and reverse engineering of CUDA binaries. We show that the Nvidia compiler, using default settings, leaks useful information. Finally, we demonstrate techniques to better protect intellectual property in CUDA algorithm implementations from reverse engineering.
Abstract:
Security Onion is a Network Security Monitoring (NSM) platform that provides multiple Intrusion Detection Systems (IDS), including Host IDS (HIDS) and Network IDS (NIDS). Many types of data can be acquired using Security Onion for analysis, including data related to hosts, networks, sessions, assets, alerts and protocols. Security Onion can be implemented as a standalone deployment, with server and sensor included, or with a master server and multiple sensors, allowing the system to be scaled as required. Many interfaces and tools are available for management of the system and analysis of data, such as Sguil, Snorby, Squert and Enterprise Log Search and Archive (ELSA). These interfaces can be used for analysis of alerts and captured events, which can then be exported for further analysis in Network Forensic Analysis Tools (NFAT) such as NetworkMiner, CapME or Xplico. The Security Onion platform also provides various methods of management, such as Secure Shell (SSH) for management of server and sensors, and web-client remote access. All of this, together with the ability to replay and analyse example malicious traffic, makes Security Onion a suitable low-cost alternative for network security management. In this paper, we present a feature and functionality review of Security Onion in terms of types of data, configuration, interfaces, tools and system management.
Abstract:
Contemporary integrated circuits are designed and manufactured in a globalized environment, leading to concerns of piracy, overproduction and counterfeiting. One class of techniques to combat these threats is circuit obfuscation, which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality, in order to increase the complexity and cost of reverse engineering. Most existing circuit obfuscation methods are based on the insertion of additional logic (called “key gates”) or the camouflaging of existing gates, in order to make it difficult for a malicious user to obtain the complete layout information without extensive computations to determine key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, he/she can use advanced logic analysis and circuit simulation tools and Boolean SAT solvers to reveal the unknown gate-level information without exhaustively trying all the input vectors, thus bringing down the complexity of reverse engineering. To counter this problem, some ‘provably secure’ logic encryption algorithms that emphasize methodical selection of camouflaged gates have been proposed previously in the literature [1–3]. The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don’t-care conditions. We also present a proof of concept of a new functional or logic obfuscation technique that not only conceals but modifies the circuit functionality in addition to the gate-level description, and can be implemented automatically during the design process. Our layout obfuscation technique utilizes don’t-care conditions (namely, Observability and Satisfiability Don’t Cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification.
Here, camouflaging or obfuscating a gate means replacing the candidate gate with a 4×1 multiplexer which can be configured to perform all possible 2-input/1-output functions, as proposed by Bao et al. [4]. It is important to emphasize that our approach not only obfuscates but alters sub-circuit-level functionality in an attempt to make IP piracy difficult. The choice of gates to obfuscate determines the effort required to reverse engineer or brute-force the design. As such, we propose a method of camouflaged-gate selection based on the intersection of output logic cones. By choosing these candidate gates methodically, the complexity of reverse engineering can be made exponential, thus making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms to maximize the reverse-engineering complexity based on don’t-care-based obfuscation and methodical gate selection. Thus, the goal of protecting the design IP from malicious end-users is achieved. It also makes it significantly harder for rogue elements in the supply chain to use, copy or replicate the same design with a different logic. We analyze the reverse-engineering complexity by applying our obfuscation algorithm to ISCAS-85 benchmarks. Our experimental results indicate that significant reverse-engineering complexity can be achieved at minimal design overhead (average area overhead for the proposed layout obfuscation methods is 5.51% and average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future. References: [1] R. Chakraborty and S. Bhunia, “HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493–1502, 2009. [2] J. A. Roy, F. Koushanfar, and I. L. Markov, “EPIC: Ending Piracy of Integrated Circuits,” in Design, Automation and Test in Europe (DATE), 2008, pp. 1069–1074. [3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, “Security Analysis of Integrated Circuit Camouflaging,” in ACM Conference on Computer and Communications Security, 2013. [4] B. Liu and B. Wang, “Embedded Reconfigurable Logic for ASIC Design Obfuscation Against Supply Chain Attacks,” in Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1–6.
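The multiplexer-based camouflaging described above can be illustrated with a toy sketch: a 4×1 multiplexer whose data inputs are hidden configuration bits acts as a 2-input lookup table, so a single camouflaged cell can realise any 2-input/1-output function. This is a minimal illustration of the general principle only, not the paper's tool or its gate-selection heuristics.

```python
# Gate camouflaging via a 4x1 multiplexer (equivalently, a 2-input LUT):
# the four data inputs are hidden configuration bits, and the gate's
# operands a, b drive the select lines.

def lut2(config, a, b):
    """config is a 4-tuple of output bits indexed by the select value (a, b)."""
    return config[(a << 1) | b]

# Configurations for a few standard gates (hidden from the attacker).
AND_CFG  = (0, 0, 0, 1)
OR_CFG   = (0, 1, 1, 1)
XOR_CFG  = (0, 1, 1, 0)
NAND_CFG = (1, 1, 1, 0)

# Without knowing the configuration, an attacker must consider all
# 2**4 = 16 possible functions per camouflaged cell; k such cells give
# 16**k candidate netlists, i.e. exponential reverse-engineering effort.
for a in (0, 1):
    for b in (0, 1):
        assert lut2(AND_CFG, a, b) == (a & b)
        assert lut2(OR_CFG, a, b) == (a | b)
        assert lut2(XOR_CFG, a, b) == (a ^ b)
```

Note how the same physical cell implements AND, OR or XOR depending solely on configuration bits that are not visible in the camouflaged layout.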
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the session; highly secure authentication methods must therefore be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model essentially captures the sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation. Large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in the web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis.
A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types. This tool is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems, such as mobile devices, and in the analysis of network traffic.
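The n-gram idea described above can be sketched minimally: train bigram frequencies over a user's logged actions, then score a new session by its average log-likelihood under that model, so that large deviations from "normal behavior" score low. The action names, bigram order, smoothing and threshold below are illustrative assumptions, not Intruder Detector's actual implementation.

```python
# Bigram (n=2) model of user actions with Laplace smoothing, scoring
# sessions by average log-likelihood. Illustrative sketch only.
import math
from collections import Counter

def train_bigrams(actions):
    pairs = Counter(zip(actions, actions[1:]))  # transition counts
    ctx = Counter(actions[:-1])                 # context (prefix) counts
    return pairs, ctx

def session_score(model, actions, vocab_size, alpha=1.0):
    pairs, ctx = model
    ll = 0.0
    for prev, cur in zip(actions, actions[1:]):
        # Laplace smoothing so unseen transitions get non-zero probability.
        p = (pairs[(prev, cur)] + alpha) / (ctx[prev] + alpha * vocab_size)
        ll += math.log(p)
    return ll / max(1, len(actions) - 1)

# Hypothetical training history extracted from web logs.
history = ["login", "search", "view", "search", "view", "logout"] * 20
model = train_bigrams(history)
vocab = len(set(history))

normal = session_score(model, ["login", "search", "view", "logout"], vocab)
anomal = session_score(model, ["login", "export", "export", "export"], vocab)
assert normal > anomal  # large deviations from normal behavior score lower
```

In a deployed system the score would be compared against a per-user or per-role threshold calibrated on held-out sessions.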
Abstract:
We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted as a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer executing, as a subroutine, a global trust update scheme. We utilize a set of pre-trusted nodes, called headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to incorporate more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., it is guaranteed that a normal node always behaves as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers to questions. In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with the issue of noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, using scalar-valued trust to model a worker's performance is not sufficient to reflect the worker's trustworthiness in each of the domains.
To address this issue, we propose a probabilistic model to jointly infer the multi-dimensional trust of workers, the multi-domain properties of questions, and the true labels of questions. Our model is very flexible and can be extended to incorporate metadata associated with questions. To show this, we further propose two extended models, one of which handles input tasks with real-valued features while the other handles tasks with text features by incorporating topic models. Our models can effectively recover the trust vectors of workers, which can be very useful for future task assignment adaptive to workers' trust. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions are often not independent in real-life applications; instead, there are logical relations between them. Similarly, workers who provide answers are not independent of each other either: answers given by workers with similar attributes tend to be correlated. Therefore, we propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, allowing users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we devise a scalable joint inference algorithm based on the alternating direction method of multipliers. The third part of the thesis considers the problem of optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost.
The cost is associated with both the level of trustworthiness of workers and the difficulty of tasks. Typically, access to expert-level (more trustworthy) workers is more expensive than to the average crowd, and completing a challenging task is more costly than a click-away question. Here, we address the optimal assignment of heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as input the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that relates naturally to the budget, the trustworthiness of crowds, and the costs of obtaining labels from crowds. A higher budget, more trustworthy crowds, and less costly jobs result in a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component; therefore, it can be combined with generic trust evaluation algorithms.
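The basic mechanism by which trust damps noisy or malicious input can be illustrated with a toy trust-weighted vote on a binary question. This is a generic sketch of the idea only, not the thesis's probabilistic model or its allocation algorithm; worker names and trust values are invented.

```python
# Trust-weighted aggregation of binary answers: each worker's vote is
# weighted by the estimated trust in that worker, so low-trust (possibly
# malicious) workers have little influence on the fused label.

def aggregate(answers, trust):
    """answers: {worker: label in {0, 1}}; trust: {worker: weight in [0, 1]}."""
    score = sum(trust[w] * (1 if a == 1 else -1) for w, a in answers.items())
    return 1 if score > 0 else 0

answers = {"alice": 1, "bob": 1, "mallory": 0, "eve": 0, "trent": 1}
trust   = {"alice": 0.9, "bob": 0.8, "mallory": 0.1, "eve": 0.2, "trent": 0.7}

# An unweighted majority would already be 3-2 for label 1; with trust
# weights the margin widens, since the dissenters are the least trusted.
assert aggregate(answers, trust) == 1
```

In the multi-domain setting described above, the scalar `trust[w]` would be replaced by a per-domain trust vector, with the weight chosen according to the question's domain.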
Abstract:
SQL Injection Attack (SQLIA) remains a technique used by computer network intruders to pilfer an organisation's confidential data. An intruder does this by re-crafting web form input and query strings used in web requests with the malicious intent of compromising the security of the confidential data stored in the organisation's back-end database. The database is the most valuable data source, and thus intruders are unrelenting in evolving new techniques to bypass the signature-based solutions currently provided in Web Application Firewalls (WAF) to mitigate SQLIA. There is therefore a need for an automated, scalable methodology for pre-processing SQLIA features fit for a supervised learning model. However, obtaining a ready-made, scalable dataset whose items are feature-engineered into numerical attributes for training Artificial Neural Network (ANN) and Machine Learning (ML) models is a known issue in applying artificial intelligence to effectively address ever-evolving novel SQLIA signatures. The proposed approach applies a numerical-attribute encoding ontology to encode features (both legitimate web requests and SQLIA) as numerical data items, so as to extract a scalable dataset for input to a supervised learning model, moving towards an ML SQLIA detection and prevention model. In the numerical-attribute encoding of features, the proposed model explores a hybrid of static and dynamic pattern matching by implementing a Non-Deterministic Finite Automaton (NFA). This is combined with a proxy and a SQL parser Application Programming Interface (API) to intercept and parse web requests in transit to the back-end database. In developing a solution to address SQLIA, this model allows web requests processed at the proxy and deemed to contain an injected query string to be excluded from reaching the target back-end database. 
This paper evaluates the performance metrics of a dataset obtained by numerical encoding of the features ontology in Microsoft Azure Machine Learning (MAML) Studio using a Two-Class Support Vector Machine (TCSVM) binary classifier. This methodology then forms the subject of the empirical evaluation.
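The numerical-attribute encoding step can be sketched as follows: tokenise an incoming query string and map each token class to a numeric ID, yielding a fixed-length feature vector suitable as input to a supervised model. The token classes, the regular-expression tokeniser and the vector width below are illustrative assumptions, not the paper's ontology or its NFA implementation.

```python
# Encode a web-request query string as a fixed-length numeric feature
# vector by token class. Illustrative sketch of the encoding idea only.
import re

TOKEN_IDS = {"IDENT": 1, "NUM": 2, "STR": 3, "OP": 4, "KEYWORD": 5, "COMMENT": 6}
KEYWORDS = {"select", "union", "or", "and", "drop", "insert", "where"}

def encode(query, width=8):
    vec = []
    for tok in re.findall(r"--|'[^']*'|\w+|[=<>();]", query.lower()):
        if tok == "--":
            vec.append(TOKEN_IDS["COMMENT"])
        elif tok in KEYWORDS:
            vec.append(TOKEN_IDS["KEYWORD"])
        elif tok.startswith("'"):
            vec.append(TOKEN_IDS["STR"])
        elif tok.isdigit():
            vec.append(TOKEN_IDS["NUM"])
        elif tok.isidentifier():
            vec.append(TOKEN_IDS["IDENT"])
        else:
            vec.append(TOKEN_IDS["OP"])
    # pad/truncate to a fixed width so every request yields the same shape
    return (vec + [0] * width)[:width]

legit    = encode("id = 42")
injected = encode("id = 42 or 1 = 1 --")
assert legit != injected
assert TOKEN_IDS["COMMENT"] in injected and TOKEN_IDS["COMMENT"] not in legit
```

The resulting vectors for labelled legitimate and injected requests would form the scalable numeric dataset fed to a classifier such as a two-class SVM.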
Abstract:
During the early Stuart period, England’s return to male monarchal rule resulted in the emergence of a political analogy that understood the authority of the monarch to be rooted in the “natural” authority of the father; consequently, the mother’s authoritative role within the family was repressed. As the literature of the period recognized, however, there would be no family unit for the father to lead without the words and bodies of women to make narratives of dynasty and legitimacy possible. Early modern discourse reveals that the reproductive roles of men and women, and the social hierarchies that grow out of them, are as much a matter of human design as of divine or natural law. Moreover, despite the attempts of James I and Charles I to strengthen royal patriarchal authority, the role of the monarch was repeatedly challenged on stage and in print even prior to the British Civil Wars and the 1649 beheading of Charles I. Texts produced at moments of political crisis reveal how women could uphold the legitimacy of familial and political hierarchies, but they also disclose patriarchy’s limits by representing “natural” male authority as depending in part on women’s discursive control over their bodies. Due to the epistemological instability of the female reproductive body, women play a privileged interpretive role in constructing patriarchal identities. The dearth of definitive knowledge about the female body during this period, and the consequent inability to fix or stabilize somatic meaning, led to the proliferation of differing, and frequently contradictory, depictions of women’s bodies. The female body became a site of contested meaning in early modern discourse, with men and women struggling for dominance, and competitors so diverse as to include kings, midwives, scholars of anatomy, and female religious sectarians. Essentially, this competition came down to a question of where to locate somatic meaning: In the opaque, uncertain bodies of women? 
In women’s equally uncertain and unreliable words? In the often contradictory claims of various male-authored medical treatises? In the whispered conversations that took place between women behind the closed doors of birthing rooms? My dissertation traces this representational instability through plays by William Shakespeare, John Ford, Thomas Middleton, and William Rowley, as well as in monstrous birth pamphlets, medical treatises, legal documents, histories, satires, and ballads. In these texts, the stories women tell about and through their bodies challenge and often supersede male epistemological control. These stories, which I term female bodily narratives, allow women to participate in defining patriarchal authority at the levels of both the family and the state. After laying out these controversies and instabilities surrounding early modern women’s bodies in my first chapter, my remaining chapters analyze the impact of women’s words on four distinct but overlapping reproductive issues: virginity, pregnancy, birthing room rituals, and paternity. In chapters 2 and 3, I reveal how women construct the inner, unseen “truths” of their reproductive bodies through speech and performance, and in doing so challenge the traditional forms of male authority that depend on these very constructions for coherence. Chapter 2 analyzes virginity in Thomas Middleton and William Rowley’s play The Changeling (1622) and in texts documenting the 1613 Essex divorce, during which Frances Howard, like Beatrice-Joanna in the play, was required to undergo a virginity test. These texts demonstrate that a woman’s ability to feign virginity could allow her to undermine patriarchal authority within the family and the state, even as they reveal how men relied on women to represent their reproductive bodies in socially stabilizing ways. 
During the British Civil Wars and Interregnum (1642-1660), Parliamentary writers used Howard as an example of how the unruly words and bodies of women could disrupt and transform state politics by influencing court faction; in doing so, they also revealed how female bodily narratives could help recast political historiography. In chapter 3, I investigate depictions of pregnancy in John Ford’s tragedy, ‘Tis Pity She’s a Whore (1633) and in early modern medical treatises from 1604 to 1651. Although medical texts claim to convey definitive knowledge about the female reproductive body, in actuality male knowledge frequently hinged on the ways women chose to interpret the unstable physical indicators of pregnancy. In Ford’s play, Annabella and Putana take advantage of male ignorance in order to conceal Annabella’s incestuous, illegitimate pregnancy from her father and husband, thus raising fears about women’s ability to misrepresent their bodies. Since medical treatises often frame the conception of healthy, legitimate offspring as a matter of national importance, women’s ability to conceal or even terminate their pregnancies could weaken both the patriarchal family and the patriarchal state that the family helped found. Chapters 4 and 5 broaden the socio-political ramifications of women’s words and bodies by demonstrating how female bodily narratives are required to establish paternity and legitimacy, and thus help shape patriarchal authority at multiple social levels. In chapter 4, I study representations of birthing room gossip in Thomas Middleton’s play, A Chaste Maid in Cheapside (1613), and in three Mistris Parliament pamphlets (1648) that satirize parliamentary power. Across these texts, women’s birthing room “gossip” comments on and critiques such issues as men’s behavior towards their wives and children, the proper use of household funds, the finer points of religious ritual, and even the limits of the authority of the monarch. 
The collective speech of the female-dominated birthing room thus proves central not only to attributing paternity to particular men, but also to the consequent definition and establishment of the political, socio-economic, and domestic roles of patriarchy. Chapter 5 examines anxieties about paternity in William Shakespeare’s The Winter’s Tale (1611) and in early modern monstrous birth pamphlets from 1600 to 1647, in which children born with congenital deformities are explained as God’s punishment for the sexual, religious, and/or political transgressions of their parents or communities. Both the play and the pamphlets explore the formative/deformative power of women’s words and bodies over their offspring, a power that could obscure a father’s connection to his children. However, although the pamphlets attempt to contain and discipline women’s unruly words and bodies with the force of male authority, the play reveals the dangers of male tyranny and the crucial role of maternal authority in reproducing and authenticating dynastic continuity and royal legitimacy. My emphasis on the socio-political impact of women’s self-representation distinguishes my work from that of scholars such as Mary Fissell and Julie Crawford, who claim that early modern beliefs about the female reproductive body influenced textual depictions of major religious and political events, but give little sustained attention to the role female speech plays in these representations. In contrast, my dissertation reveals that in such texts, patriarchal society relies precisely on the words women speak about their own and other women’s bodies. Ultimately, I argue that female bodily narratives were crucial in shaping early modern culture, and they are equally crucial to our critical understanding of sexual and state politics in the literature of the period.
Abstract:
Network intrusion detection systems are themselves becoming targets of attackers. Alert flood attacks may be used to conceal malicious activity by hiding it among a deluge of false alerts sent by the attacker. Although these types of attacks are very hard to stop completely, our aim is to present techniques that improve alert throughput and capacity to such an extent that the resources required to successfully mount the attack become prohibitive. The key idea presented is to combine a token bucket filter with a real-time correlation algorithm. The proposed algorithm throttles alert output from the IDS when an attack is detected. The attack graph used in the correlation algorithm ensures that alerts crucial to forming strategies are not discarded by throttling.
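The key idea above can be sketched as a token bucket in front of the alert sink: under a flood, output is throttled to the bucket's rate, while alerts the correlation step marks as strategy-critical bypass the throttle. The rates and the "critical" flag below are illustrative assumptions, not the paper's parameters or correlation algorithm.

```python
# Token bucket filter for IDS alert output. Illustrative sketch only.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, 0.0

    def allow(self, now):
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def emit(bucket, alert, now):
    # alerts flagged as critical by attack-graph correlation skip throttling
    return alert.get("critical", False) or bucket.allow(now)

bucket = TokenBucket(rate=2.0, capacity=5)            # 2 alerts/s, burst of 5
flood = [{"sig": "noise"}] * 100 + [{"sig": "step3", "critical": True}]
passed = [a for i, a in enumerate(flood) if emit(bucket, a, now=i * 0.01)]
assert len(passed) < 20                  # the flood is throttled
assert passed[-1]["sig"] == "step3"      # the critical alert survives
```

The bypass path is what preserves correlation quality: flood noise is shed, but the alerts needed to reconstruct the attacker's strategy always reach the analyst.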
Abstract:
The mediaeval interpreters of Roman law worked out the dolus re ipsa concept to explain the mysterious laesio enormis (C. 4.44.2 [a. 285]). The inequality in exchange was then supposed to be the result of a malicious undertaking for which, paradoxically, no one was personally liable (Ulp. 45 ad Sab. D. 45.1.36). In the course of time, the incorporation of laesio enormis into the scheme of dolus turned into a presumption of a malicious act on the part of the enriched party, even though laesio enormis is free from subjective criteria. It is astonishing how little the dolus re ipsa is discussed, although the modern paradigm for correcting inequality in exchange is based on the same assumptions. This ‘Wiederkehr der Rechtsfigur’ certainly deserves more attention.
Abstract:
Advances in digital photography and distribution technologies enable many people to produce and distribute images of their sex acts. When teenagers do this, the photos and videos they create can be legally classified as child pornography, since the law makes no exception for youth who create sexually explicit images of themselves. The dominant discussions about teenage girls producing sexually explicit media (including sexting) are profoundly unproductive: (1) they blame teenage girls for creating private images that another person later maliciously distributed, and (2) they fail to respect (or even discuss) teenagers’ rights to freedom of expression. Cell phones and the internet make producing and distributing images extremely easy, providing widely accessible venues both for consensual sexual expression between partners and for sexual harassment. Dominant understandings view sexting as a troubling teenage trend created through the combination of camera phones and adolescent hormones and impulsivity, but this view often conflates consensual sexting between partners with the malicious distribution of a person’s private image, treating them as essentially equivalent behaviors. In this project, I ask: What is the role of assumptions about teen girls’ sexual agency in these problematic understandings of sexting that blame victims and deny teenagers’ rights? In contrast to the popular media panic about online predators and the familiar accusation that youth are wasting their leisure time by using digital media, some people champion the internet as a democratic space that offers young people the opportunity to explore identities and develop social and communication skills. Yet, when teen girls’ sexuality enters this conversation, all this debate and discussion narrows to a problematic consensus.
The optimists about adolescents and technology fall silent, and the argument that media production is inherently empowering for girls does not seem to apply to a girl who produces a sexually explicit image of herself. Instead, feminist, popular, and legal commentaries assert that she is necessarily a victim: of a “sexualized” mass media, pressure from her male peers, digital technology, her brain structures or hormones, or her own low self-esteem and misplaced desire for attention. Why and how are teenage girls’ sexual choices produced as evidence of their failure or success in achieving Western liberal ideals of self-esteem, resistance, and agency? Since mass media and policy reactions to sexting have so far been overwhelmingly sexist and counter-productive, it is crucial to interrogate the concepts and assumptions that characterize mainstream understandings of sexting. I argue that the common sense that is co-produced by law and mass media underlies the problematic legal and policy responses to sexting. Analyzing a range of nonfiction texts including newspaper articles, talk shows, press releases, public service announcements, websites, legislative debates, and legal documents, I investigate gendered, racialized, age-based, and technologically determinist common sense assumptions about teenage girls’ sexual agency. I examine the consensus and continuities that exist between news, nonfiction mass media, policy, institutions, and law, and describe the limits of their debates. I find that this early 21st century post-feminist girl-power moment not only demands that girls live up to gendered sexual ideals but also insists that actively choosing to follow these norms is the only way to exercise sexual agency. This is the first study to date examining the relationship of conventional wisdom about digital media and teenage girls’ sexuality to both policy and mass media.
Abstract:
Network Intrusion Detection Systems (NIDS) monitor a network with the aim of discerning malicious from benign activity on that network. While a wide range of approaches have met varying levels of success, most IDSs rely on having access to a database of known attack signatures written by security experts. Nowadays, in order to address the problem of false positive alerts, correlation algorithms are used to add additional structure to sequences of IDS alerts. However, such techniques are of no help in discovering novel attacks or variations of known attacks, something the human immune system (HIS) is capable of doing in its own specialised domain. This paper presents a novel immune algorithm for application to an intrusion detection problem. The goal is to discover packets containing novel variations of attacks covered by an existing signature base.
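The immune-inspired detection idea can be illustrated with a generic negative-selection sketch in the artificial-immune-system tradition: generate random detectors, discard any that match "self" (normal) traffic, and flag anything a surviving detector matches as anomalous. The bit-string representation, r-contiguous matching rule and parameters are illustrative assumptions, not this paper's algorithm.

```python
# Toy negative-selection sketch: detectors are censored against a "self"
# set, so by construction they only ever match non-self (anomalous) input.
import random

def matches(detector, sample, r=3):
    """r-contiguous match: detector and sample agree on some window of r bits."""
    n = len(sample)
    return any(sample[i:i + r] == detector[i:i + r] for i in range(n - r + 1))

random.seed(1)
L, R = 8, 5                                   # string length, match window
self_set = ["00000000", "00001111", "11110000"]  # hypothetical normal traffic

detectors = []
while len(detectors) < 30:
    d = "".join(random.choice("01") for _ in range(L))
    if not any(matches(d, s, R) for s in self_set):   # censor self-matchers
        detectors.append(d)

def anomalous(packet):
    return any(matches(d, packet, R) for d in detectors)

assert not any(anomalous(s) for s in self_set)  # self is never flagged
assert anomalous(detectors[0])                  # a non-self pattern is flagged
```

Because censoring guarantees zero false positives on the training self set, coverage of the non-self space (and hence detection of truly novel variants) depends on the number of detectors and the match threshold.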
Abstract:
The phenomenon of terrorism is one of the most asymmetrical, amorphous and hybrid threats to international security. At the beginning of the 21st century, terrorism grew into a pandemic, and ensuring the freedom and security of individuals and nations became a priority. Terrorism defies all legal and analytic-descriptive standards. An immanent feature of terrorism is its constant conversion into ever more malicious forms of violence. One of the most alarming changes is a tendency toward the debasement of the essence of law, the state, and human rights. Ensuring safety in widely accessible public places and in private life forces the creation of various institutions, methods and forms of control over people. However, civil freedom cannot be limited arbitrarily. The present article stresses that a rational and informed approach to human rights should serve as a reference point for legislative and executive bodies. Selected individual applications to the European Court of Human Rights are presented, focusing on those from which standards regarding the protection of human rights in the face of pathological social phenomena, terrorism in particular, can be reconstructed and refined. Strasbourg standards may prove helpful in selecting and constructing new legal and legislative solutions, and in unifying and correlating prophylactic and preventive actions.
Abstract:
Network Intrusion Detection Systems (NIDS) are computer systems which monitor a network with the aim of discerning malicious from benign activity on that network. While a wide range of approaches have met varying levels of success, most IDSs rely on having access to a database of known attack signatures which are written by security experts. Nowadays, in order to solve problems with false positive alerts, correlation algorithms are used to add additional structure to sequences of IDS alerts. However, such techniques are of no help in discovering novel attacks or variations of known attacks, something the human immune system (HIS) is capable of doing in its own specialised domain. This paper presents a novel immune algorithm for application to the IDS problem. The goal is to discover packets containing novel variations of attacks covered by an existing signature base.
Abstract:
In many organizations, database schemas are considered among the critical assets to be protected. From database schemas it is possible to infer not only the information being collected but also the way organizations manage their businesses and/or activities. One of the ways database schemas can be disclosed is through Create, Read, Update and Delete (CRUD) expressions. In fact, their use can follow strict security rules or be left unregulated and open to malicious users. In the first case, users are required to master database schemas. This can be critical when applications that access the database directly, which we call database interface applications (DIA), are developed by third-party organizations via outsourcing. In the second case, users can partially or totally disclose database schemas by following malicious algorithms based on CRUD expressions. To overcome this vulnerability, we propose a new technique whereby CRUD expressions can no longer be directly manipulated by DIAs. Whenever a DIA starts up, the associated database server generates a random codified token for each CRUD expression and sends it to the DIA; the DIA then uses the token to have the database server execute the corresponding CRUD expression. To validate our proposal, we present a conceptual architectural model and a proof of concept.
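The token-indirection idea above can be sketched minimally: at DIA start-up the server issues one random token per authorised CRUD expression, and the DIA thereafter sends only tokens, never SQL text, so the schema stays hidden and no unauthorised CRUD expression can be submitted. Class and method names here are illustrative assumptions, and the database call is a stand-in; this is not the paper's architectural model.

```python
# Server-side token indirection for CRUD expressions. Illustrative sketch.
import secrets

class CrudServer:
    def __init__(self, crud_expressions):
        # token -> CRUD expression; regenerated at every DIA start-up
        self.by_token = {secrets.token_hex(16): sql for sql in crud_expressions}

    def handshake(self):
        # the DIA learns only opaque tokens, one per authorised expression
        return list(self.by_token)

    def execute(self, token, params):
        sql = self.by_token.get(token)
        if sql is None:
            raise PermissionError("unknown token: CRUD expression rejected")
        return f"executing {sql!r} with {params}"   # stand-in for a real DB call

server = CrudServer(["SELECT name FROM t1 WHERE id = ?",
                     "UPDATE t1 SET name = ? WHERE id = ?"])
tokens = server.handshake()
assert "SELECT" in server.execute(tokens[0], (7,))
try:
    server.execute("forged-token", ())      # an injected expression has no token
    assert False
except PermissionError:
    pass
```

Since tokens are random and scoped to one DIA session, leaking a token reveals nothing about the schema, and a malicious DIA cannot widen its access beyond the expressions it was provisioned with.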