945 results for IP spoofing
Abstract:
In this paper, we propose and evaluate an implementation of a prototype scalable web server. The prototype consists of a load-balanced cluster of hosts that collectively accept and service TCP connections. The host IP addresses are advertised using the Round Robin DNS technique, allowing any host to receive requests from any client. Once a client attempts to establish a TCP connection with one of the hosts, a decision is made as to whether or not the connection should be redirected to a different host---namely, the host with the lowest number of established connections. We use the low-overhead Distributed Packet Rewriting (DPR) technique to redirect TCP connections. In our prototype, each host keeps information about connections in hash tables and linked lists. Every time a packet arrives, it is examined to see if it has to be redirected or not. Load information is maintained using periodic broadcasts amongst the cluster hosts.
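As an illustration of the decision logic described above, the sketch below (with hypothetical names, and without the actual DPR packet-rewriting mechanics) shows how a host might track its own established connections in a hash table, keep the peer load reports received via periodic broadcasts, and hand a new connection to the least-loaded host.

```python
# Illustrative sketch only: the prototype's real data structures and DPR details differ.

class RedirectingHost:
    def __init__(self, my_addr, peer_addrs):
        self.my_addr = my_addr
        self.connections = {}                         # hash table: TCP 4-tuple -> connection state
        self.peer_load = {p: 0 for p in peer_addrs}   # updated by periodic load broadcasts

    def update_load(self, peer, established_count):
        """Handle a periodic load broadcast from a cluster peer."""
        self.peer_load[peer] = established_count

    def choose_target(self):
        """Return the cluster host with the fewest established connections."""
        loads = dict(self.peer_load)
        loads[self.my_addr] = len(self.connections)
        return min(loads, key=loads.get)

    def on_syn(self, four_tuple):
        """Decide whether to accept a new connection locally or rewrite it to a peer."""
        target = self.choose_target()
        if target == self.my_addr:
            self.connections[four_tuple] = "ESTABLISHED"
            return ("accept", self.my_addr)
        return ("rewrite", target)   # subsequent packets of this flow are rewritten to `target`
```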
Abstract:
Current Internet transport protocols make end-to-end measurements and maintain per-connection state to regulate the use of shared network resources. When two or more such connections share a common endpoint, there is an opportunity to correlate the end-to-end measurements made by these protocols to better diagnose and control the use of shared resources. We develop packet probing techniques to determine whether a pair of connections experience shared congestion. Correct, efficient diagnoses could enable new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. Our extensive simulation results demonstrate that the conditional (Bayesian) probing approach we employ provides superior accuracy, converges faster, and tolerates a wider range of network conditions than recently proposed memoryless (Markovian) probing approaches for addressing this opportunity.
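The conditional (Bayesian) approach can be illustrated with a minimal sketch: maintain a posterior over the two hypotheses ("shared congestion" vs. "independent congestion") and update it with each paired probe observation. The likelihood values below are illustrative assumptions, not the paper's calibrated model.

```python
# Hypothetical illustration of Bayesian shared-congestion diagnosis.

# Assumed joint probability of (probe-1 lost, probe-2 lost) under each hypothesis.
P_OBS = {
    "shared":      {(0, 0): 0.80,   (0, 1): 0.05,   (1, 0): 0.05,   (1, 1): 0.10},
    "independent": {(0, 0): 0.7225, (0, 1): 0.1275, (1, 0): 0.1275, (1, 1): 0.0225},
}

def posterior_shared(observations, prior_shared=0.5):
    """Update P(shared | observations) from a sequence of paired loss indicators."""
    p_shared, p_indep = prior_shared, 1.0 - prior_shared
    for obs in observations:
        p_shared *= P_OBS["shared"][obs]
        p_indep *= P_OBS["independent"][obs]
        norm = p_shared + p_indep
        p_shared, p_indep = p_shared / norm, p_indep / norm   # renormalise each round
    return p_shared

# Correlated losses push the posterior towards "shared".
print(posterior_shared([(1, 1), (0, 0), (1, 1)]))
```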
Abstract:
The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior is considerable. Basic questions about the utility of increasing the number of measurements and/or measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurements. In this paper, we quantify the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize topology in terms of nodes, links, node degree distribution, and end-to-end flows using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate nodes which make up the so-called "backbone" from those which border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources declines significantly after 2 from the perspective of interface, node, link and node degree discovery. We show that the utility of adding destinations is constant for interfaces, nodes, links and node degree, indicating that it is more important to add destinations than sources. Finally, we analyze paths through the backbone and show that shared link distributions approximate a power law, indicating that a small number of backbone links in our study are very heavily utilized.
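The notion of marginal utility used above can be illustrated with a small sketch: order the measurement sources, and count how many previously unseen items (interfaces, nodes or links) each additional source contributes. The trace data layout is hypothetical.

```python
# Sketch of marginal-utility measurement for topology discovery.

def marginal_utility(traces_by_source):
    """traces_by_source: {source: set of discovered items (interfaces, nodes or links)}.
    Returns the number of new items contributed by each source added in turn."""
    seen, gains = set(), []
    for source, discovered in traces_by_source.items():
        new_items = discovered - seen
        gains.append((source, len(new_items)))
        seen |= discovered
    return gains

# Toy example: later sources add progressively little.
traces = {
    "src1": {"A", "B", "C", "D"},
    "src2": {"B", "C", "D", "E"},
    "src3": {"C", "D", "E"},
}
print(marginal_utility(traces))   # [('src1', 4), ('src2', 1), ('src3', 0)]
```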
Abstract:
The increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies. However, the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet. We argue that the distributions of packet features (IP addresses and ports) observed in flow traces reveal both the presence and the structure of a wide range of anomalies. Using entropy as a summarization tool, we show that the analysis of feature distributions leads to significant advances on two fronts: (1) it enables highly sensitive detection of a wide range of anomalies, augmenting detections by volume-based methods, and (2) it enables automatic classification of anomalies via unsupervised learning. We show that, using feature distributions, anomalies naturally fall into distinct and meaningful clusters. These clusters can be used to automatically classify anomalies and to uncover new anomaly types. We validate our claims on data from two backbone networks (Abilene and Geant) and conclude that feature distributions show promise as a key element of a fairly general network anomaly diagnosis framework.
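A minimal sketch of the entropy summarization step, assuming flow records exposed as dicts with srcaddr/dstaddr/srcport/dstport fields (the field names are illustrative, not a specific flow-export schema):

```python
# Entropy of packet-feature distributions over one measurement bin.

from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def feature_entropies(flows):
    """flows: iterable of flow-record dicts for one time bin."""
    return {field: entropy([f[field] for f in flows])
            for field in ("srcaddr", "dstaddr", "srcport", "dstport")}

# A time series of these four entropy values per bin is the input to both the
# detection step and the unsupervised clustering described above.
```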
Abstract:
Overlay networks have become popular in recent times for content distribution and end-system multicasting of media streams. In the latter case, the motivation is based on the lack of widespread deployment of IP multicast and the ability to perform end-host processing. However, constructing routes between various end-hosts, so that data can be streamed from content publishers to many thousands of subscribers, each having their own QoS constraints, is still a challenging problem. First, any routes between end-hosts using trees built on top of overlay networks can increase stress on the underlying physical network, due to multiple instances of the same data traversing a given physical link. Second, because overlay routes between end-hosts may traverse physical network links more than once, they increase the end-to-end latency compared to IP-level routing. Third, algorithms for constructing efficient, large-scale trees that reduce link stress and latency are typically more complex. This paper therefore compares various methods of constructing multicast trees between end-systems, which vary in terms of implementation costs and their ability to support per-subscriber QoS constraints. We describe several algorithms that make trade-offs between algorithmic complexity, physical link stress and latency. While no algorithm is best in all three respects, we show how it is possible to efficiently build trees for several thousand subscribers with latencies within a factor of two of the optimal, and link stresses comparable to, or better than, existing technologies.
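For concreteness, link stress as used above can be computed by mapping every overlay tree edge onto its underlying physical route and counting how many overlay edges cross each physical link. The sketch below assumes a hypothetical routing oracle, underlay_path.

```python
# Link stress of an overlay multicast tree mapped onto underlay routes.

from collections import Counter

def link_stress(overlay_edges, underlay_path):
    """overlay_edges: iterable of (end_host_a, end_host_b) overlay tree edges.
    underlay_path(a, b): returns the sequence of physical links between a and b."""
    stress = Counter()
    for a, b in overlay_edges:
        for link in underlay_path(a, b):
            stress[link] += 1
    return stress            # max(stress.values()) is the worst-case link stress

# Toy physical topology: hosts A, B, C attached to routers r1 and r2.
def toy_path(a, b):
    routes = {("A", "B"): [("A", "r1"), ("r1", "r2"), ("r2", "B")],
              ("A", "C"): [("A", "r1"), ("r1", "C")]}
    return routes.get((a, b)) or routes[(b, a)]

print(link_stress([("A", "B"), ("A", "C")], toy_path))   # link (A, r1) carries two copies
```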
Abstract:
A common assumption made in traffic matrix (TM) modeling and estimation is independence of a packet's network ingress and egress. We argue that in real IP networks, this assumption should not and does not hold. The fact that most traffic consists of two-way exchanges of packets means that traffic streams flowing in opposite directions at any point in the network are not independent. In this paper we propose a model for traffic matrices based on independence of connections rather than packets. We argue that the independent connection (IC) model is more intuitive, and has a more direct connection to underlying network phenomena than the gravity model. To validate the IC model, we show that it fits real data better than the gravity model and that it works well as a prior in the TM estimation problem. We study the model's parameters empirically and identify useful stability properties. This justifies the use of the simpler versions of the model for TM applications. To illustrate the utility of the model we focus on two such applications: synthetic TM generation and TM estimation. To the best of our knowledge this is the first traffic matrix model that incorporates properties of bidirectional traffic.
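For reference, the gravity-model baseline mentioned above assumes that a packet's egress point is independent of its ingress point, so each matrix entry is proportional to the product of the corresponding row and column totals. A minimal sketch, assuming consistent row and column totals (the IC model itself is not reproduced here):

```python
# Gravity-model traffic matrix: T[i][j] proportional to out_totals[i] * in_totals[j].

import numpy as np

def gravity_tm(out_totals, in_totals):
    """out_totals[i]: total traffic entering the network at node i.
    in_totals[j]: total traffic leaving the network at node j.
    Assumes sum(out_totals) == sum(in_totals)."""
    out_totals = np.asarray(out_totals, dtype=float)
    in_totals = np.asarray(in_totals, dtype=float)
    return np.outer(out_totals, in_totals) / in_totals.sum()

# The independent-connection (IC) model replaces this per-packet independence with
# independence at the level of connections, coupling forward and reverse traffic.
print(gravity_tm([10, 20, 30], [15, 15, 30]))
```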
Abstract:
Thin film dielectrics based on titanium, zirconium or hafnium oxides are being introduced to increase the permittivity of insulating layers in transistors for micro/nanoelectronics and memory devices. Atomic layer deposition (ALD) is the process of choice for fabricating these films, as it allows for high control of composition and thickness in thin, conformal films which can be deposited on substrates with high aspect-ratio features. The success of this method depends crucially on the chemical properties of the precursor molecules. A successful ALD precursor should be volatile, stable in the gas-phase, but reactive on the substrate and growing surface, leading to inert by-products. In recent years, many different ALD precursors for metal oxides have been developed, but many of them suffer from low thermal stability. Much promise is shown by group 4 metal precursors that contain cyclopentadienyl (Cp = C5H5-xRx) ligands. One of the main advantages of Cp precursors is their thermal stability. In this work ab initio calculations were carried out at the level of density functional theory (DFT) on a range of heteroleptic metallocenes [M(Cp)4-n(L)n], M = Hf/Zr/Ti, L = Me and OMe, in order to find mechanistic reasons for their observed behaviour during ALD. Based on optimized monomer structures, reactivity is analyzed with respect to ligand elimination. The order in which different ligands are eliminated during ALD follows their energetics which was in agreement with experimental measurements. Titanocene-derived precursors, TiCp*(OMe)3, do not yield TiO2 films in atomic layer deposition (ALD) with water, while Ti(OMe)4 does. DFT was used to model the ALD reaction sequence and find the reason for the difference in growth behaviour. Both precursors adsorb initially via hydrogen-bonding. The simulations reveal that the Cp* ligand of TiCp*(OMe)3 lowers the Lewis acidity of the Ti centre and prevents its coordination to surface O (densification) during both of the ALD pulses. Blocking this step hindered further ALD reactions and for that reason no ALD growth is observed from TiCp*(OMe)3 and water. The thermal stability in the gas phase of Ti, Zr and Hf precursors that contain cyclopentadienyl ligands was also considered. The reaction that was found using DFT is an intramolecular α-H transfer that produces an alkylidene complex. The analysis shows that thermal stabilities of complexes of the type MCp2(CH3)2 increase down group 4 (M = Ti, Zr and Hf) due to an increase in the HOMO-LUMO band gap of the reactants, which itself increases with the electrophilicity of the metal. The reverse reaction of α-hydrogen abstraction in ZrCp2Me2 is 1,2-addition reaction of a C-H bond to a Zr=C bond. The same mechanism is investigated to determine if it operates for 1,2 addition of the tBu C-H across Hf=N in a corresponding Hf dimer complex. The aim of this work is to understand orbital interactions, how bonds break and how new bonds form, and in what state hydrogen is transferred during the reaction. Calculations reveal two synchronous and concerted electron transfers within a four-membered cyclic transition state in the plane between the cyclopentadienyl rings, one π(M=X)-to-σ(M-C) involving metal d orbitals and the other σ(C-H)-to-σ(X-H) mediating the transfer of neutral H, where X = C or N. The reaction of the hafnium dimer complex with CO that was studied for the purpose of understanding C-H bond activation has another interesting application, namely the cleavage of an N-N bond and resulting N-C bond formation. 
Analysis of the orbital plots reveals repulsion between the occupied orbitals on CO and the N-N unit where CO approaches along the N-N axis. The repulsions along the N-N axis are minimized by instead forming an asymmetrical intermediate in which CO first coordinates to one Hf and then to N. This breaks the symmetry of the N-N unit and the resultant mixing of MOs allows σ(NN) to be polarized, localizing electrons on the more distant N. This allowed σ(CO) and π(CO) donation to N and back-donation of π*(Hf2N2) to CO. Improved understanding of the chemistry of metal complexes can be gained from atomic-scale modelling and this provides valuable information for the design of new ALD precursors. The information gained from the model decomposition pathway can be additionally used to understand the chemistry of molecules in the ALD process as well as in catalytic systems.
Abstract:
This thesis critically investigates the divergent international approaches to the legal regulation of the patentability of computer software inventions, with a view to identifying the reforms necessary for a certain, predictable and uniform inter-jurisdictional system of protection. Through a critical analysis of the traditional and contemporary US and European regulatory frameworks of protection for computer software inventions, this thesis demonstrates the confusion and legal uncertainty resulting from ill-defined patent laws and inconsistent patent practices as to the scope of the “patentable subject matter” requirement, further compounded by substantial flaws in the structural configuration of the decision-making procedures within which the patent systems operate. This damaging combination prevents the operation of an accessible and effective Intellectual Property (IP) legal framework of protection for computer software inventions, capable of securing adequate economic returns for inventors whilst preserving the necessary scope for innovation and competition in the field, to the ultimate benefit of society. In exploring the substantive and structural deficiencies in the European and US regulatory frameworks, this thesis ultimately argues that the best approach to the reform of the legal regulation of software patentability is two-tiered. It demonstrates that any reform to achieve international legal harmony first requires the legislature to individually clarify (Europe) or restate (US) the long-standing inadequate rules governing the scope of software “patentable subject matter”, together with the reorganisation of the unworkable structural configuration of the decision-making procedures. Informed by the critical analysis of the evolution of the “patentable subject matter” requirement for computer software in the US, this thesis particularly considers the potential of the reforms of the European patent system currently underway to bring about certainty, predictability and uniformity in the legal treatment of computer software inventions.
Abstract:
Electron microscopy (EM) has advanced in an exponential way since the first transmission electron microscope (TEM) was built in the 1930s. The urge to ‘see’ things is an essential part of human nature (‘seeing is believing’), and apart from scanning tunnelling microscopes, which give information about the surface, EM is the only imaging technology capable of really visualising atomic structures in depth, down to single atoms. With the development of nanotechnology, the demand to image and analyse small things has become even greater, and electron microscopes have found their way from highly delicate and sophisticated research-grade instruments to turn-key and even bench-top instruments for everyday use in every materials research lab on the planet. The semiconductor industry is as dependent on the use of EM as the life sciences and the pharmaceutical industry. With this generalisation of its use for imaging, the need to deploy advanced uses of EM has become more and more apparent. The combination of several coinciding beams (electron, ion and even light) to create DualBeam or TripleBeam instruments, for instance, extends their usefulness from pure imaging to manipulation on the nanoscale. And when it comes to the analytic power of EM, with the many ways the highly energetic electrons and ions interact with the matter in the specimen, there is a plethora of niches which have evolved during the last two decades, specialising in every kind of analysis that can be thought of and combined with EM. In the course of this study the emphasis was placed on the application of these advanced analytical EM techniques in the context of multiscale and multimodal microscopy – multiscale meaning across length scales from micrometres or larger down to nanometres, multimodal meaning numerous techniques applied to the same sample volume in a correlative manner. In order to demonstrate the breadth and potential of the multiscale and multimodal concept, its integration was attempted in two areas: I) biocompatible materials, using polycrystalline stainless steel, and II) semiconductors, using thin multiferroic films. I) The motivation to use stainless steel (316L medical grade) comes from the potential modulation of endothelial cell growth, which can have a big impact on the improvement of cardiovascular stents – which are mainly made of 316L – through nano-texturing of the stent surface by focused ion beam (FIB) lithography. Patterning with FIB has never been reported before in connection with stents and cell growth, and in order to gain a better understanding of the beam-substrate interaction during patterning, a correlative microscopy approach was used to illuminate the patterning process from many possible angles. Electron backscatter diffraction (EBSD) was used to analyse the crystallographic structure, FIB was used for the patterning and for simultaneously visualising the crystal structure as part of the monitoring process, scanning electron microscopy (SEM) and atomic force microscopy (AFM) were employed to analyse the topography, and the final step was 3D visualisation through serial FIB/SEM sectioning. II) The motivation for the use of thin multiferroic films stems from the ever-growing demand for increased data storage at ever lower energy consumption. The Aurivillius phase material used in this study has high potential in this area. Yet it is necessary to show clearly that the film is really multiferroic and that no second-phase inclusions are present even at very low concentrations – ~0.1 vol% could already be problematic.
Thus, in this study a technique was developed to analyse ultra-low-density inclusions in thin multiferroic films down to concentrations of 0.01%. The goal achieved was a complete structural and compositional analysis of the films, which required identifying second-phase inclusions (through energy-dispersive X-ray (EDX) elemental analysis), localising them (employing 72-hour EDX mapping in the SEM), isolating them for the TEM (using FIB), and giving an upper confidence limit of 99.5% on the influence of the inclusions on the magnetic behaviour of the main phase (statistical analysis).
Abstract:
Background/Aim: It has been demonstrated that a number of pathologies occur as a result of dysregulation of the immune system. Whilst classically associated with apoptosis, the Fas (CD95) signalling pathway plays a role in inflammation. Studies have demonstrated that Fas activation augments TLR4-mediated, MyD88-dependent cytokine production. Studies have also shown that the Fas adapter protein FADD is required for RIG-I-induced IFNβ production. As RIG-I, TLR3 and the MyD88-independent pathway of TLR4 share a similar signalling pathway, we hypothesised that Fas activation may modulate both TLR3- and TLR4-induced cytokine production. Results: Fas activation reduced poly I:C-induced IFNβ, IL-8, IL-10 and TNFα production whilst augmenting poly I:C-, poly A:U- and Sendai virus-induced IP-10 production. In overexpression studies, TLR3-, RIG-I- and MDA5-induced IP-10 luciferase activation was inhibited by the Fas adapter protein FADD. Poly I:C-induced phosphorylation of p38 and JNK MAPK was reduced by Fas activation. Overexpression of FADD induced AP-1 luciferase activation. Point mutations in the AP-1 binding site enhanced poly I:C-induced IP-10 production. LPS-induced IL-10, IL-12, IL-8 and TNFα production was enhanced by Fas activation, whilst LPS-induced IFNβ production was reduced. Absence of FADD, using FADD-/- MEFs, resulted in impaired IFNβ production. Overexpression studies showed that FADD augmented TLR4-, MyD88- and TRIF-induced IFNβ luciferase activation. Overexpression studies also suggested that the enhanced TLR4-induced IFNβ production was independent of NFκB activation. Conclusion: Virus-induced IP-10 production is augmented by Fas activation, which reduces the phosphorylation of p38 and JNK MAPKs and thereby modulates AP-1 activation. The Fas adapter protein FADD is required for TLR4-induced IFNβ production. The studies presented here demonstrate that the Fas signalling pathway can therefore modulate the immune response. Our data demonstrate that this modulatory effect is mediated by its adapter protein FADD, which tailors the immune response by acting as a molecular switch. This ensures that the appropriate immune response is mounted, preventing an exacerbated immune response.
Abstract:
Many commentators explain recent transatlantic rifts by pointing to diverging norms, interests and geopolitical preferences. This paper proceeds from the premise that not all situations of conflict are necessarily due to underlying deadlocked preferences. Rather, non-cooperation may be a strategic form of soft balancing. That is, more generally, if they believe that they are being shortchanged in terms of influence and payoffs, weaker states may deliberately reject possible cooperation in the short run to improve their influence vis-à-vis stronger states in the long run. This need not be due to traditional relative gains concern. States merely calculate that their reputation as a weak negotiator will erode future bargaining power and subsequently their future share of absolute gains. Strategic non-cooperation is therefore a rational signal of resolve. This paper develops the concept of strategic non-cooperation as a soft balancing tool and applies it to the Iraq case in 2002-2003. © 2005 Palgrave Macmillan Ltd.
Abstract:
Ataxia telangiectasia mutated (ATM) is an S/T-Q-directed kinase that is critical for the cellular response to double-stranded breaks (DSBs) in DNA. Following DNA damage, ATM is activated and recruited by the MRN protein complex [meiotic recombination 11 (Mre11)/DNA repair protein Rad50/Nijmegen breakage syndrome 1 proteins] to sites of DNA damage, where ATM phosphorylates multiple substrates to trigger cell-cycle arrest. In cancer cells, this regulation may be faulty, and cell division may proceed even in the presence of damaged DNA. We show here that the ribosomal S6 kinase (Rsk), often elevated in cancers, can suppress DSB-induced ATM activation in both Xenopus egg extracts and human tumor cell lines. In analyzing each step in ATM activation, we have found that Rsk targets loading of MRN complex components onto DNA at DSB sites. Rsk can phosphorylate the Mre11 protein directly at S676 both in vitro and in intact cells and thereby can inhibit the binding of Mre11 to DNA with DSBs. Accordingly, mutation of S676 to Ala can reverse inhibition of the response to DSBs by Rsk. Collectively, these data point to Mre11 as an important locus of Rsk-mediated checkpoint inhibition acting upstream of ATM activation.
Abstract:
Genetic evaluations for functional traits require methodologies such as survival analysis (SA), which can accommodate records from females that have not yet experienced the event of interest (culling or pregnancy) at the time of evaluation. The objectives of this thesis were therefore: 1) to analyse the factors affecting the risk of culling, expressed as productive life (VP), and the risk of conception, expressed as days open (DA) and the first-to-last-service interval (IPU), and 2) to estimate genetic dispersion parameters for the evaluated traits. A total of 44,652 lactation records from 15,706 Colombian Holstein cows were used, fitting a Weibull frailty animal model for VP with parity number (NP), milk yield (PL) and age at first calving (EPP) as fixed effects. For DA and IPU, 14,789 services from 6,205 cows were used, fitting a grouped-data model with fixed effects of NP, EPP and number of services (SC). In all cases, random animal and herd effects were included. Both PL and NP had a strong influence on VP, as did NP on DA and IPU. Heritability (h2) estimates were 0.104, 0.086 and 0.1013 for VP, DA and IPU, respectively, and simple genetic correlations (rs) between PL level and VP and DA were −0.03 and 0.47, respectively. Considering both the breeding values expressed as relative risk and the magnitude of rs, an increase in PL would lead to a lower risk of culling and a higher risk of conception, resulting in longer VP and shorter DA. The estimated heritabilities suggest the existence of genetic variability for VP, DA and IPU in Colombian Holstein cattle, supporting the implementation of genetic evaluations for the breed that exploit the advantages of methodologies such as SA.
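For reference, a Weibull proportional-hazards frailty model of the general kind fitted for VP can be written as follows; the notation is generic rather than the thesis's exact parameterisation:

```latex
\lambda(t \mid \mathbf{x}, a, h) = \lambda_0(t)\,\exp\!\left(\mathbf{x}'\boldsymbol{\beta} + a + h\right),
\qquad
\lambda_0(t) = \lambda \rho (\lambda t)^{\rho - 1},
```

where λ0(t) is the Weibull baseline hazard, x'β collects the fixed effects (NP, PL, EPP), a is the random additive genetic (animal) effect and h the random herd effect.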
Abstract:
Dairy farms are very complex production systems that depend essentially on four pillars, treated here as variables: the environment, the animals, the feeding and the workers. Feed bunks without maintenance, full of mud and manure, animals in poor body condition, and unbalanced or nutrient-deficient diets predispose to metabolic disorders. Staff without training and/or motivation will not carry out the many tasks of the dairy farm well. Before and after calving, dairy cows undergo enormous metabolic and hormonal changes, and this is a factor that predisposes them to peripartum diseases (EP) such as dystocia, retained placenta, udder oedema, metritis, mastitis, ketosis, acidosis, displaced abomasum, bloat, puerperal hypocalcaemia, foot lesions, etc. How do we evaluate these variables? What do we look at, and what methodology do we follow in the field? For this purpose, the Peripartum Disease Predictive Index (IPEP) is proposed. The IPEP is a guide that allows an ordered and systematic survey of the indicators related to peripartum diseases across the variables of environment, animals, feeding and workers. It is a methodological tool for generating an on-farm diagnosis of the causes and predisposing factors of EP on dairy establishments. Each indicator is assigned a number on a scale of 1 to 5, where 1 is bad/very poor, 2 is fair, 3 is good, 4 is very good and 5 is optimal/excellent, and the averages of the variables give a final IPEP. The IPEP was applied on a commercial dairy farm located in Brinkmann, Córdoba, with 270 hectares, 210 cows and 4,200 litres of milk per day. Four visits were made and the IPEP indicator sheets were completed. The pre-calving group, the milking herd, the pastures and the facilities were observed, and the staff were interviewed. Problem areas were detected: an unbalanced diet, deficient mineral supplementation, animals with dry coats, access lanes in poor condition and the lack of a footbath. In the first visit, in April 2008, the IPEP was calculated and gave a value of 3.11; being above 3, this qualifies the farm as good. Given the incidence of EP and mortality, the economic losses corresponding to that IPEP were US$11,415, equivalent to 7.3 percent of milk income. Corrective measures were proposed and the index improved: in September 2009 the IPEP was 3.71, and the economic losses due to EP and mortality were estimated at US$8,064, equivalent to 0.43 of milk income.
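A minimal sketch of the IPEP arithmetic described above (indicator names are placeholders): each indicator is scored from 1 to 5, scores are averaged within each of the four variables, and the mean of those variable averages gives the final IPEP.

```python
# Illustrative IPEP calculation; the real indicator sheets contain many more items.

def ipep(scores_by_variable):
    """scores_by_variable: {variable: [indicator scores on a 1-5 scale]}."""
    variable_means = {v: sum(s) / len(s) for v, s in scores_by_variable.items()}
    final = sum(variable_means.values()) / len(variable_means)
    return variable_means, final

means, final_ipep = ipep({
    "environment": [3, 2, 4],
    "animals":     [3, 3],
    "feeding":     [2, 3, 3],
    "workers":     [4, 4],
})
print(final_ipep)   # a value above 3 rates the farm as "good" on the scale above
```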
Abstract:
The Pampa Arenosa has been the scene of changes in land use that responded mainly to the increase in rainfall from the 1970s onwards. Land evaluation is a critical stage in planning sustainable land use. For this reason, different land evaluation systems were analysed and expert models were developed that take into account inherited environmental factors and climatic variations, for the longitudinal-dune sector of the Pampa Arenosa in Buenos Aires Province, at a scale of 1:50,000, covering the districts of Nueve de Julio, Carlos Casares, Pehuajó and Trenque Lauquen. The land was classified by Land-Use Capability and by Productivity Index (IP), and expert systems were built using the ALES program for the land utilisation types (TUTs) maize, soybean and wheat. The homogeneity of the rainfall series was assessed with the runs test. The Pettitt test identified an abrupt change in rainfall, and the Mann-Kendall test showed an increasing trend in annual precipitation. Land with severe (class III) and very severe (class IV) limitations was the most frequent, occupying 42.6 percent and 29.8 percent of the area, respectively. The IP of the land was found to increase with rainfall, reaching its maximum climatic expression in the period after the abrupt change. Land of moderate productive capacity, with IP values between 65 and 51, occupied the largest share of the study area. The expert models for the TUTs showed variable land-use suitability, conditioned by the water-holding capacity of the soils. The expert models were sensitive to climatic variations and to the abrupt change in rainfall.
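For reference, a minimal sketch of the Mann-Kendall statistic used above to test the annual rainfall series for a monotonic trend (no correction for ties; illustrative only):

```python
# Mann-Kendall trend statistic S and its normal approximation Z.

from math import sqrt

def mann_kendall_s(series):
    """Return (S, Z) for a sequence of annual values."""
    n = len(series)
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / sqrt(var_s)
    elif s < 0:
        z = (s + 1) / sqrt(var_s)
    else:
        z = 0.0
    return s, z          # a large positive Z indicates an increasing trend

print(mann_kendall_s([820, 870, 865, 910, 950, 990, 1005]))
```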