909 results for privilege escalation attack
Abstract:
Research on the corrosion of steel structures in various marine environments is essential to ensure the safety of structures and can effectively prolong their service life. To provide data for the anticorrosion design of oil exploitation structures in the Bohai Bay, the corrosion behaviour and properties of typical steel samples (Q235A carbon steel and API 5L X52 pipeline steel) buried 0.5, 1.0 and 1.5 m deep in typical beach soils at Tanggu, Yangjiaogou, Xingcheng, Yingkou and Chengdao for 1-2 years were studied. Both the carbon steel and the pipeline steel were corroded severely in the beach soil, with the corrosion being mainly uniform with some localised attack (pitting corrosion). The corrosion rate of the carbon steel was up to 0.16 mm/year with a maximum penetration depth of 0.76 mm, and that of the pipeline steel was up to 0.14 mm/year with a maximum penetration depth of 0.53 mm. Compared with the carbon steel, the pipeline steel generally had better corrosion resistance in most of the test beach soils. The corrosion rates and maximum corrosion depths of the carbon steel and pipeline steel were in the order Tanggu > Xingcheng > Chengdao > Yingkou > Yangjiaogou, with corrosion varying with burial depth. The corrosion of steel in beach soil involves a mixed mechanism, with different degrees of soil aeration and microbial activity present. It is concluded that long-term in situ plate-laying experiments must be carried out to obtain data on steel corrosion in this beach soil environment so that effective protection measures can be implemented.
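For a rough sense of what these rates mean for design, the short sketch below converts the reported uniform corrosion rates into the time needed to consume an assumed corrosion allowance; the 3 mm allowance is a hypothetical illustrative value, not one taken from the study.

```python
# Rough service-life estimate from the reported uniform corrosion rates.
# The rates come from the abstract; the design corrosion allowance below
# is a hypothetical value used only for illustration.

RATES_MM_PER_YEAR = {
    "Q235A carbon steel": 0.16,
    "API 5L X52 pipeline steel": 0.14,
}

CORROSION_ALLOWANCE_MM = 3.0  # assumed design allowance (illustrative)

for steel, rate in RATES_MM_PER_YEAR.items():
    years = CORROSION_ALLOWANCE_MM / rate
    print(f"{steel}: {rate} mm/year -> allowance consumed in ~{years:.0f} years "
          f"(uniform attack only; pitting can penetrate much faster)")
```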
Abstract:
This dissertation, which includes most of my Ph.D. research work during 2001-2002, covers the large-scale distribution of continental earthquakes in mainland China, the mechanism and statistical features of grouped strong earthquakes related to tidal triggering, some results in earthquake prediction obtained with correlation analysis methods, and the lessons from the two strong continental earthquakes in South Asia in 2001. Mainland China is the only continental sub-plate that is compressed by collision boundaries on two sides, within which earthquakes are dispersed and distributed along seismic belts of different widths. The control exerted by the continental block boundaries on strong earthquakes and seismic hazards is calculated and analyzed in this dissertation. By mapping the distribution of the 31282 ML >= 2.0 earthquakes, I found that the depth of continental earthquakes depends on the tectonic zoning. Events on the boundaries of relatively intact blocks are deep, while those on newly developed ruptures are shallow. The average depth of earthquakes in western China is about 5 km greater than that in the east. The western and southwestern rims of the Tarim Basin generated the deepest earthquakes in mainland China. The statistical results on the correlation between grouped M7 earthquakes and tidal stress show that the strong events were modulated by tidal stress during active periods. Taking the Taiwan area as an example, the dependence of moderate events on the moon phase angle (D) is analyzed; the number of earthquakes in Taiwan when D is 50°, 50°+90° and 50°+180° exceeds the average frequency per degree by more than two standard deviations, corresponding to the 4th, 12th and 19th solar days after the new moon. The probability of an earthquake striking the densely populated Taiwan island on the 4th solar day is about 4 times that on other solar days. In the practice of earthquake prediction, I calculated and analyzed the temporal correlation of the earthquakes in the Xinjiang, Qinghai-Tibet, western Yunnan and North China areas with those in their adjacent areas, and predicted at the end of 2000 that 2001 to 2003 would be a special time interval within which moderate to strong earthquakes would be more active in western China. What happened in 2001 partly validated the prediction. Within 10 months there were two great continental earthquakes in South Asia, i.e., the M7.8 event in India on Jan. 26 and the M8.1 event in China on Nov. 14, 2001, which are the largest earthquakes in the past 50 years for India and China respectively. There is no record of two such great earthquakes occurring in Asia within so short a time interval. The two events prompt the following considerations: the influence of the flawed deployment of seismic stations on the precise location and focal mechanism determination of strong earthquakes must be confronted; it is very important to introduce comparative seismology into seismic hazard analysis and earthquake prediction research; improvements or changes in the real-time prediction of strong earthquakes with precursors are urgently needed; and methods need to be updated to protect the environment and historical relics in earthquake-prone areas.
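The phase-angle test described above can be illustrated with a small sketch: bin event counts by lunar phase angle D and flag bins that exceed the mean count by more than two standard deviations. The catalogue below is synthetic; real work would compute D for each event from its origin time and a lunar ephemeris.

```python
# Minimal sketch of the lunar phase-angle test: count events per degree of D
# and flag bins whose count exceeds the mean by more than two standard
# deviations.  The catalogue is synthetic, with an artificial excess injected
# at D = 50 degrees to show what a detection looks like.
import random
from statistics import mean, stdev

random.seed(0)
counts = [0] * 360
for _ in range(5000):                 # background: uniformly random phase angles
    counts[random.randrange(360)] += 1
counts[50] += 40                      # injected excess at D = 50 degrees

mu, sigma = mean(counts), stdev(counts)
anomalous = [d for d, c in enumerate(counts) if c > mu + 2 * sigma]
print(f"mean={mu:.1f}, std={sigma:.1f}, flagged phase angles: {anomalous}")
# A few background bins may exceed the threshold purely by chance, which is
# why the real analysis also needs a significance test.
```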
Abstract:
This paper explores the relationships between a computational theory of temporal representation (as developed by James Allen) and a formal linguistic theory of tense (as developed by Norbert Hornstein) and aspect. It aims to provide explicit answers to four fundamental questions: (1) what is the computational justification for the primitives of a linguistic theory; (2) what is the computational explanation of the formal grammatical constraints; (3) what are the processing constraints imposed on the learnability and markedness of these theoretical constructs; and (4) what are the constraints that a linguistic theory imposes on representations. We show that one can effectively exploit the interface between the language faculty and the cognitive faculties by using linguistic constraints to determine restrictions on the cognitive representation and vice versa. Three main results are obtained: (1) we derive an explanation of an observed grammatical constraint on tense, the Linear Order Constraint, from the information monotonicity property of the constraint propagation algorithm of Allen's temporal system; (2) we formulate a principle of markedness for the basic tense structures based on the computational efficiency of the temporal representations; and (3) we show that Allen's interval-based temporal system is not arbitrary, but can be used to explain independently motivated linguistic constraints on tense and aspect interpretations. We also claim that the methodology of research developed in this study, a "cross-level" investigation of an independently motivated formal grammatical theory and computational models, is a powerful paradigm with which to attack representational problems in basic cognitive domains, e.g., space, time, causality, etc.
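As a rough illustration of the constraint-propagation machinery referred to above, the sketch below models only three of Allen's thirteen basic interval relations and tightens pairwise constraints until a fixed point; constraints can only shrink, which is the monotonicity property the derivation relies on. The speech/reference/event-time example is illustrative and not Hornstein's formal analysis.

```python
# Toy illustration of constraint propagation in a drastically reduced version
# of Allen's interval algebra.  Only three basic relations are modelled
# ("<" before, ">" after, "=" equal); a constraint is the set of relations
# still possible between two intervals, and propagation only removes options.
from itertools import product

ALL = frozenset({"<", ">", "="})

# Composition of basic relations: if A r1 B and B r2 C, what can hold of A, C?
COMPOSE = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): ALL,
    (">", ">"): {">"}, (">", "="): {">"}, (">", "<"): ALL,
    ("=", "<"): {"<"}, ("=", ">"): {">"}, ("=", "="): {"="},
}

def compose(r1_set, r2_set):
    out = set()
    for r1, r2 in product(r1_set, r2_set):
        out |= COMPOSE[(r1, r2)]
    return out

def propagate(nodes, constraints):
    """Tighten constraints[i, k] with constraints[i, j] o constraints[j, k]
    until a fixed point is reached (sets only ever shrink)."""
    changed = True
    while changed:
        changed = False
        for i, j, k in product(nodes, repeat=3):
            if len({i, j, k}) < 3:
                continue
            tightened = constraints[i, k] & compose(constraints[i, j],
                                                    constraints[j, k])
            if tightened != constraints[i, k]:
                constraints[i, k] = tightened
                changed = True
    return constraints

# Example: speech time S, reference time R, event time E (a tense-like setup).
nodes = ["S", "R", "E"]
constraints = {(a, b): set(ALL) for a, b in product(nodes, repeat=2)}
constraints["S", "R"] = {">"}   # R before S (past reference point)
constraints["R", "E"] = {"="}   # E coincides with R (simple past)
propagate(nodes, constraints)
print(constraints["S", "E"])    # -> {'>'}: the event precedes speech time
```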
Abstract:
We report a 75 dB, 2.8 μW, 100 Hz-10 kHz envelope detector in a 1.5 μm, 2.8 V CMOS technology. The envelope detector performs input-dc-insensitive voltage-to-current-converting rectification followed by novel nanopower current-mode peak detection. The use of a subthreshold wide-linear-range transconductor (WLR OTA) allows greater than 1.7 Vpp input voltage swings. We show theoretically that this optimal performance is technology-independent for the given topology and may be improved only by spending more power. A novel circuit topology is used to perform 140 nW peak detection with controllable attack and release time constants. The lower limits of envelope detection are determined by the more dominant of two effects: the first effect is caused by the inability of amplified high-frequency signals to exceed the dead zone created by exponential nonlinearities in the rectifier; the second effect is due to an output current caused by thermal noise rectification. We demonstrate good agreement of experimentally measured results with theory. The envelope detector is useful in low-power bionic implants for the deaf, hearing aids, and speech-recognition front ends. Extension of the envelope detector to higher-frequency applications is straightforward if power consumption is increased.
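A minimal digital analogue of the rectify-then-peak-detect chain with controllable attack and release time constants, assuming a simple one-pole follower; the sample rate and time constants below are illustrative and not taken from the circuit.

```python
# Digital sketch of an envelope detector: full-wave rectification followed by
# a one-pole peak detector with separate attack and release time constants.
import math

def envelope(samples, fs=16_000, attack_s=0.005, release_s=0.050):
    a_att = math.exp(-1.0 / (attack_s * fs))   # fast coefficient (rising input)
    a_rel = math.exp(-1.0 / (release_s * fs))  # slow coefficient (falling input)
    env, out = 0.0, []
    for x in samples:
        r = abs(x)                             # rectifier
        a = a_att if r > env else a_rel        # pick attack or release constant
        env = a * env + (1.0 - a) * r          # one-pole peak detector
        out.append(env)
    return out

# Example: a 10 ms, 1 kHz tone burst; the envelope rises fast and decays slowly.
fs = 16_000
burst = [math.sin(2 * math.pi * 1000 * n / fs) if n < fs // 100 else 0.0
         for n in range(fs // 20)]
print(max(envelope(burst, fs)))
```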
Abstract:
PILOT is a programming system constructed in LISP. It is designed to facilitate the development of programs by easing the familiar sequence: write some code, run the program, make some changes, write some more code, run the program again, etc. As a program becomes more complex, making these changes becomes harder and harder because the implications of changes are harder to anticipate. In the PILOT system, the computer plays an active role in this evolutionary process by providing the means whereby changes can be effected immediately, and in ways that seem natural to the user. The user of PILOT feels that he is giving advice, or making suggestions, to the computer about the operation of his programs, and that the system then performs the work necessary. The PILOT system is thus an interface between the user and his program, monitoring both the requests of the user and the operation of his program. The user may easily modify the PILOT system itself by giving it advice about its own operation. This allows him to develop his own language and to shift gradually onto PILOT the burden of performing routine but increasingly complicated tasks. In this way, he can concentrate on the conceptual difficulties in the original problem, rather than on the niggling tasks of editing, rewriting, or adding to his programs. Two detailed examples are presented. PILOT is a first step toward computer systems that will help man to formulate problems in the same way they now help him to solve them. Experience with it supports the claim that such "symbiotic systems" allow the programmer to attack and solve more difficult problems.
Abstract:
McInnes, Colin, Spectator Sport War: The West and Contemporary Conflict (Boulder, CO: Lynne Rienner, 2002) pp.vii+187 RAE2008
Abstract:
Alden, N. L. (2007). Introduction. Critical Quarterly, 49 (2), pp.34-38 RAE2008
Abstract:
Faculty of History: Institute of History
Abstract:
Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Pharmaceutical Sciences
Abstract:
This is a postprint (author's final draft) version of an article published in the journal Social Compass in 2010. The final version of this article may be found at http://dx.doi.org/10.1177/0037768610362406 (login may be required). The version made available in OpenBU was supplied by the author.
Abstract:
In this paper, we expose an unorthodox adversarial attack that exploits the transients of a system's adaptive behavior, as opposed to its limited steady-state capacity. We show that a well-orchestrated attack could introduce significant inefficiencies that could potentially deprive a network element of much of its capacity, or significantly reduce its service quality, while evading detection by consuming an unsuspiciously small fraction of that element's hijacked capacity. This type of attack stands in sharp contrast to traditional brute-force, sustained high-rate DoS attacks, as well as recently proposed attacks that exploit specific protocol settings such as TCP timeouts. We exemplify what we term Reduction of Quality (RoQ) attacks by exposing the vulnerabilities of common adaptation mechanisms. We develop control-theoretic models and associated metrics to quantify these vulnerabilities. We present numerical and simulation results, which we validate with observations from real Internet experiments. Our findings motivate the need for the development of adaptation mechanisms that are resilient to these new forms of attacks.
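A minimal sketch (not the authors' control-theoretic model) of the transient effect being exploited: an AIMD-style adaptive flow is hit by short periodic bursts, and although the attacker's average rate stays small, the flow spends most of its time in the slow additive-recovery transient. All parameters below are illustrative.

```python
# Toy simulation of a RoQ-style attack on AIMD adaptation: each short burst
# triggers a multiplicative decrease, after which the flow slowly ramps back
# up, so a low average attack rate causes a large loss of useful throughput.

CAPACITY = 100.0          # link capacity (arbitrary units per tick)
ALPHA, BETA = 1.0, 0.5    # AIMD: additive increase, multiplicative decrease
BURST_PERIOD = 40         # ticks between attack bursts
BURST_LEN = 1             # ticks each burst lasts
TICKS = 10_000

rate, goodput, attack_volume = CAPACITY, 0.0, 0.0
for t in range(TICKS):
    if (t % BURST_PERIOD) < BURST_LEN:
        attack_volume += CAPACITY          # burst momentarily congests the link
        rate *= BETA                       # adaptation backs off
    else:
        rate = min(CAPACITY, rate + ALPHA) # slow additive recovery (the transient)
    goodput += rate

print(f"average goodput:         {goodput / TICKS:.1f} / {CAPACITY}")
print(f"attacker's average rate: {attack_volume / TICKS:.1f} / {CAPACITY}")
```

With these settings the attacker averages only a few percent of the link capacity, yet the flow's average goodput drops well below capacity, which is the asymmetry the abstract describes.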
Abstract:
The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem on four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer, which can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol design. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and the models must be tailored to represent the best tradeoff for a particular setting.
This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
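To make the "abstract class interfaces" idea concrete, here is a sketch of what such a framework interface might look like; the class and method names (ResourceManager, register_resource, schedule) are hypothetical and not taken from the project.

```python
# Sketch of an abstract interface that concrete resource-management models
# could implement; names and signatures are illustrative assumptions.
from abc import ABC, abstractmethod

class ResourceManager(ABC):
    """Abstract interface implemented by each concrete resource-management model."""

    @abstractmethod
    def register_resource(self, resource_id: str, attributes: dict) -> None:
        """Announce a resource (CPU, disk, ATM channel, ...) to the registry."""

    @abstractmethod
    def schedule(self, task_id: str, deadline_ms: int) -> str:
        """Pick a resource for a task under real-time and reliability constraints."""

class BestEffortManager(ResourceManager):
    """One concrete model; others could trade precision for stronger guarantees."""
    def __init__(self):
        self.resources = {}

    def register_resource(self, resource_id, attributes):
        self.resources[resource_id] = attributes

    def schedule(self, task_id, deadline_ms):
        # Naive policy: pick the least-loaded registered resource.
        return min(self.resources, key=lambda r: self.resources[r].get("load", 0))

mgr = BestEffortManager()
mgr.register_resource("node-a", {"load": 3})
mgr.register_resource("node-b", {"load": 1})
print(mgr.schedule("task-1", deadline_ms=100))   # -> "node-b"
```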
Abstract:
This paper explores the problem of protecting a site on the Internet against hostile external Java applets while allowing trusted internal applets to run. With careful implementation, a site can be made resistant to current Java security weaknesses as well as those yet to be discovered. In addition, we describe a new attack on certain sophisticated firewalls that is most effectively realized as a Java applet.
Abstract:
As new multi-party edge services are deployed on the Internet, application-layer protocols with complex communication models and event dependencies are increasingly being specified and adopted. To ensure that such protocols (and compositions thereof with existing protocols) do not result in undesirable behaviors (e.g., livelocks) there needs to be a methodology for the automated checking of the "safety" of these protocols. In this paper, we present ingredients of such a methodology. Specifically, we show how SPIN, a tool from the formal systems verification community, can be used to quickly identify problematic behaviors of application-layer protocols with non-trivial communication models—such as HTTP with the addition of the "100 Continue" mechanism. As a case study, we examine several versions of the specification for the Continue mechanism; our experiments mechanically uncovered multi-version interoperability problems, including some which motivated revisions of HTTP/1.1 and some which persist even with the current version of the protocol. One such problem resembles a classic degradation-of-service attack, but can arise between well-meaning peers. We also discuss how the methods we employ can be used to make explicit the requirements for hardening a protocol's implementation against potentially malicious peers, and for verifying an implementation's interoperability with the full range of allowable peer behaviors.
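The actual work uses SPIN and Promela; as a toy stand-in, the Python sketch below exhaustively explores the joint state space of a drastically simplified client and server for the "Expect: 100-continue" exchange and reports reachable states where neither side can move. The model deliberately omits HTTP timeouts, which is exactly the kind of interoperability detail a real model would have to account for.

```python
# Toy reachability check in the spirit of the model checking described above:
# enumerate joint (client, server) states for a simplified "100 Continue"
# exchange and report deadlocks.  An "old" server ignores the Expect header
# and silently waits for the request body.
from collections import deque

def transitions(state, old_server):
    client, server = state
    moves = []
    if client == "sent_headers" and server == "idle":
        moves.append(("recv_headers", ("sent_headers", "got_headers")))
    if server == "got_headers" and not old_server:
        moves.append(("send_100", ("sent_headers", "sent_100")))
    if client == "sent_headers" and server == "sent_100":
        moves.append(("send_body", ("sent_body", "sent_100")))
    if client == "sent_body":
        moves.append(("final_response", ("done", "done")))
    return moves

def check(old_server):
    start, seen, frontier = ("sent_headers", "idle"), set(), deque([("sent_headers", "idle")])
    while frontier:
        s = frontier.popleft()
        if s in seen:
            continue
        seen.add(s)
        moves = transitions(s, old_server)
        if not moves and s != ("done", "done"):
            print(f"old_server={old_server}: deadlock in state {s}")
        for _, nxt in moves:
            frontier.append(nxt)

check(old_server=False)  # completes; no deadlock reported
check(old_server=True)   # deadlock: client waits for 100, server waits for body
```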
Abstract:
We consider the problem of building robust fuzzy extractors, which allow two parties holding similar random variables W, W' to agree on a secret key R in the presence of an active adversary. Robust fuzzy extractors were defined by Dodis et al. at Crypto 2006 [6] to be noninteractive, i.e., only one message P, which can be modified by an unbounded adversary, can pass from one party to the other. This allows them to be used by a single party at different points in time (e.g., for key recovery or biometric authentication), but also presents an additional challenge: what if R is used, and thus possibly observed by the adversary, before the adversary has a chance to modify P? Fuzzy extractors secure against such a strong attack are called post-application robust. We construct a fuzzy extractor with post-application robustness that extracts a shared secret key of up to (2m−n)/2 bits (depending on error-tolerance and security parameters), where n is the bit-length and m is the entropy of W. The previously best known result, also of Dodis et al. [6], extracted up to (2m−n)/3 bits (depending on the same parameters).
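For a quick sense of the improvement, the sketch below compares the two extraction rates quoted above, ignoring the error-tolerance and security loss terms; the example values of n and m are illustrative.

```python
# Compare the extractable key lengths (2m-n)/2 (this work) and (2m-n)/3
# (prior work), with the loss terms omitted for simplicity.

def key_bits(n, m, divisor):
    return max(0, (2 * m - n) // divisor)

n, m = 1024, 700          # bit-length of W and its entropy (illustrative values)
print("this work  (2m-n)/2:", key_bits(n, m, 2), "bits")
print("prior work (2m-n)/3:", key_bits(n, m, 3), "bits")
```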