990 results for Internet-wide Scanning


Relevance:

100.00%

Publisher:

Abstract:

Industrial control systems (ICS) have been moving from dedicated communications to switched and routed corporate networks, making it probable that these devices are being exposed to the Internet. Many ICS have been designed with few or weak security features, leaving them vulnerable to attack. Recently, several tools capable of scanning the entire Internet have been developed, including ZMap, Masscan and Shodan. However, little in-depth analysis has been done to compare these Internet-wide scanning techniques, and few Internet-wide scans have been conducted targeting ICS devices and protocols. In this paper we present a taxonomy of Internet-wide scanning, a comparison of three popular network scanning tools, and a framework for conducting Internet-wide scans.
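At their core, the tools compared in this abstract all probe (host, port) pairs and record responses; ZMap and Masscan do this statelessly with raw SYN packets at line rate. As a toy illustration only (sequential TCP connect scanning, nothing like those tools' actual techniques, and the target lists are hypothetical), a minimal sketch:

```python
import socket

def scan_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds (port open)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def scan_targets(hosts, ports):
    """Probe every (host, port) pair sequentially. Internet-wide scanners
    instead randomize the address space and send stateless probes so one
    machine can cover IPv4 in minutes rather than years."""
    return {(h, p): scan_port(h, p) for h in hosts for p in ports}
```

ICS-oriented scans would target protocol-specific ports (e.g. 502 for Modbus/TCP) and parse the protocol response rather than merely noting an open port.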

Relevance:

80.00%

Publisher:

Abstract:

A major challenge in neuroscience is finding which genes affect brain integrity, connectivity, and intellectual function. Discovering influential genes holds vast promise for neuroscience, but typical genome-wide searches assess approximately one million genetic variants one-by-one, leading to intractable false positive rates, even with vast samples of subjects. Even more intractable is the question of which genes interact and how they work together to affect brain connectivity. Here, we report a novel approach that discovers which genes contribute to brain wiring and fiber integrity at all pairs of points in a brain scan. We studied genetic correlations between thousands of points in human brain images from 472 twins and their nontwin siblings (mean age: 23.7 ± 2.1 SD years; 193 male/279 female). We combined clustering with genome-wide scanning to find brain systems with common genetic determination. We then filtered the image in a new way to boost power to find causal genes. Using network analysis, we found a network of genes that affect brain wiring in healthy young adults. Our new strategy makes it computationally more tractable to discover genes that affect brain integrity. The gene network showed small-world and scale-free topologies, suggesting efficiency in genetic interactions and resilience to network disruption. Genetic variants at hubs of the network influence intellectual performance by modulating associations between performance intelligence quotient and the integrity of major white matter tracts, such as the callosal genu and splenium, cingulum, optic radiations, and the superior longitudinal fasciculus.
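The small-world claim above rests on two standard graph statistics: high local clustering combined with short average path length. A minimal pure-Python sketch of both metrics (illustrative only; the paper's gene-network analysis is far more involved):

```python
from collections import deque
from itertools import combinations

def clustering_coefficient(adj):
    """Average local clustering: for each node, the fraction of its
    neighbour pairs that are themselves directly connected."""
    total = 0.0
    for node, nbrs in adj.items():
        if len(nbrs) < 2:
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += links / (len(nbrs) * (len(nbrs) - 1) / 2)
    return total / len(adj)

def average_path_length(adj):
    """Mean shortest-path length over all connected ordered pairs,
    computed by breadth-first search from every node."""
    dists = []
    for src in adj:
        seen = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen[v] = seen[u] + 1
                    q.append(v)
        dists += [d for n, d in seen.items() if n != src]
    return sum(dists) / len(dists)
```

A small-world network scores high on the first metric and low on the second relative to a random graph with the same degree distribution.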

Relevance:

80.00%

Publisher:

Abstract:

The publish/subscribe paradigm has lately received much attention. In publish/subscribe systems, a specialized event-based middleware delivers notifications of events created by producers (publishers) to consumers (subscribers) interested in that particular event. It is considered a good approach for implementing Internet-wide distributed systems as it provides full decoupling of the communicating parties in time, space and synchronization. One flavor of the paradigm is content-based publish/subscribe, which allows subscribers to express their interests very accurately. In order to implement a content-based publish/subscribe middleware in a way suitable for Internet scale, its underlying architecture must be organized as a peer-to-peer network of content-based routers that take care of forwarding the event notifications to all interested subscribers. A communication infrastructure that provides such a service is called a content-based network. A content-based network is an application-level overlay network. Unfortunately, the expressiveness of the content-based interaction scheme comes at a price: compiling and maintaining the content-based forwarding and routing tables is very expensive when the number of nodes in the network is large. The routing tables are usually partially ordered set (poset) based data structures. In this work, we present an algorithm that aims to improve scalability in content-based networks by reducing the workload of content-based routers, offloading some of their content routing cost to clients. We also provide experimental results of the performance of the algorithm. Additionally, we give an introduction to the publish/subscribe paradigm and content-based networking and discuss alternative ways of improving scalability in content-based networks. ACM Computing Classification System (CCS): C.2.4 [Computer-Communication Networks]: Distributed Systems - Distributed applications
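At its simplest, content-based matching evaluates each subscription's per-attribute predicates against every notification. The sketch below is a deliberately naive centralized matcher, not the poset-based routing tables or the client-offloading algorithm the thesis describes:

```python
from typing import Any, Callable, Dict, List, Tuple

# A subscription maps attribute names to predicates; a notification
# (a plain dict) matches when every predicate holds on that attribute.
Subscription = Dict[str, Callable[[Any], bool]]

class ContentBasedBroker:
    """Toy centralized broker; real content-based networks distribute
    this matching across a peer-to-peer overlay of routers."""

    def __init__(self) -> None:
        self._subs: List[Tuple[str, Subscription]] = []

    def subscribe(self, subscriber: str, sub: Subscription) -> None:
        self._subs.append((subscriber, sub))

    def publish(self, event: Dict[str, Any]) -> List[str]:
        """Return the subscribers whose every predicate matches the event."""
        return [name for name, sub in self._subs
                if all(attr in event and pred(event[attr])
                       for attr, pred in sub.items())]
```

The expressiveness cost the abstract mentions is visible even here: matching is linear in the number of subscriptions, which is what poset-based routing tables and client offloading try to mitigate.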

Relevance:

80.00%

Publisher:

Abstract:

A method for preparing nanoelectrode ensembles based on a semi-interpenetrating network (SIN) of multi-walled carbon nanotubes (MWNTs) on a gold electrode via a phase-separation method is proposed. Each individual nanoelectrode consists of an irregular three-dimensional MWNT network, denoted SIN-MWNTs. On the as-prepared SIN-MWNT nanoelectrode ensembles, the assembled nanoscale MWNT clusters serve as individual nanoelectrodes, while the electroinactive lipid networks located on top of the alkanethiol monolayer act as a shielding layer. Cyclic voltammetry (CV), electrochemical impedance spectroscopy (EIS), tapping-mode atomic force microscopy (TM-AFM) and scanning electron microscopy (SEM) were used to characterize the as-prepared SIN-MWNT nanoelectrode ensembles. Experimental results indicate that well-defined nanoelectrode ensembles were prepared through self-assembly. Meanwhile, sigmoidal curves over a wide scanning range were obtained in the CV experiments. This study may pave the way for the construction of truly nanoscopic nanoelectrode arrays by a bottom-up strategy.

Relevance:

80.00%

Publisher:

Abstract:

Understanding and modeling the factors that underlie the growth and evolution of network topologies are basic questions that impact capacity planning, forecasting, and protocol research. Early topology generation work focused on generating network-wide connectivity maps, either at the AS-level or the router-level, typically with an eye towards reproducing abstract properties of observed topologies. But recently, advocates of an alternative "first-principles" approach question the feasibility of realizing representative topologies with simple generative models that do not explicitly incorporate real-world constraints, such as the relative costs of router configurations, into the model. Our work synthesizes these two lines by designing a topology generation mechanism that incorporates first-principles constraints. Our goal is more modest than that of constructing an Internet-wide topology: we aim to generate representative topologies for single ISPs. However, our methods also go well beyond previous work, as we annotate these topologies with representative capacity and latency information. Taking only demand for network services over a given region as input, we propose a natural cost model for building and interconnecting PoPs and formulate the resulting optimization problem faced by an ISP. We devise hill-climbing heuristics for this problem and demonstrate that the solutions we obtain are quantitatively similar to those in measured router-level ISP topologies, with respect to both topological properties and fault-tolerance.
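The optimization sketched in this abstract can be caricatured as facility location under a cost model, attacked by local search. The sketch below uses an invented squared-distance cost as a stand-in for the paper's PoP construction and interconnection costs; the function names and the cost model are illustrative assumptions, not the authors' formulation:

```python
import random

def cost(pops, demands):
    """Illustrative cost: each demand point attaches to its nearest PoP;
    total cost is the sum of those squared Euclidean distances."""
    return sum(min((px - dx) ** 2 + (py - dy) ** 2 for px, py in pops)
               for dx, dy in demands)

def hill_climb(demands, k, steps=500, seed=0):
    """Hill climbing over PoP placements: perturb one PoP at a time and
    keep the move only if it strictly lowers the cost."""
    rng = random.Random(seed)
    pops = [rng.choice(demands) for _ in range(k)]
    best = cost(pops, demands)
    for _ in range(steps):
        i = rng.randrange(k)
        old = pops[i]
        pops[i] = (old[0] + rng.uniform(-1, 1), old[1] + rng.uniform(-1, 1))
        new = cost(pops, demands)
        if new < best:
            best = new          # accept the improving move
        else:
            pops[i] = old       # revert
    return pops, best
```

The paper's heuristics operate on a much richer objective (router configuration costs, capacity, latency annotations), but the accept-if-better loop is the same basic mechanism.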

Relevance:

80.00%

Publisher:

Abstract:

Recurrent airway obstruction (RAO), or heaves, is a naturally occurring asthma-like disease that is related to sensitisation and exposure to mouldy hay and has a familial basis with a complex mode of inheritance. A genome-wide scanning approach using two half-sibling families was taken in order to locate the chromosome regions that contribute to the inherited component of this condition in these families. Initially, a panel of 250 microsatellite markers, which were chosen as a well-spaced, polymorphic selection covering the 31 equine autosomes, was used to genotype the two half-sibling families, which comprised in total 239 Warmblood horses. Subsequently, supplementary markers were added for a total of 315 genotyped markers. Each half-sibling family is focused around a severely RAO-affected stallion, and the phenotype of each individual was assessed for RAO and related signs, namely, breathing effort at rest, breathing effort at work, coughing, and nasal discharge, using an owner-based questionnaire. Analysis using a regression method for half-sibling family structures was performed using RAO and each of the composite clinical signs separately; two chromosome regions (on ECA13 and ECA15) showed a genome-wide significant association with RAO at P < 0.05. An additional 11 chromosome regions showed a more modest association. This is the first publication that describes the mapping of genetic loci involved in RAO. Several candidate genes are located in these regions, a number of which are interleukins. These are important signalling molecules that are intricately involved in the control of the immune response and are therefore good positional candidates.
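The core statistical operation in such a scan is to test each marker's genotype against the phenotype and rank markers by association strength. Below is a toy single-marker least-squares version; it is not the half-sib regression method the study actually used, carries no significance threshold, and the marker names are invented:

```python
def marker_association(genotypes, phenotypes):
    """Least-squares slope of phenotype on allele dosage (0/1/2) for one
    marker, plus the fraction of phenotypic variance explained (r^2)."""
    n = len(genotypes)
    mg = sum(genotypes) / n
    mp = sum(phenotypes) / n
    sgg = sum((g - mg) ** 2 for g in genotypes)
    sgp = sum((g - mg) * (p - mp) for g, p in zip(genotypes, phenotypes))
    spp = sum((p - mp) ** 2 for p in phenotypes)
    return sgp / sgg, (sgp * sgp) / (sgg * spp)

def genome_scan(marker_table, phenotypes):
    """Score every marker and return names ranked by variance explained,
    strongest association first."""
    scores = {m: marker_association(g, phenotypes)[1]
              for m, g in marker_table.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A real scan would additionally convert each score to a test statistic and apply a genome-wide significance correction, as the P < 0.05 genome-wide threshold in the abstract implies.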

Relevance:

80.00%

Publisher:

Abstract:

Enabling real end-user development is the next logical stage in the evolution of Internet-wide service-based applications. Successful composite applications rely on heavyweight service orchestration technologies that raise the bar far above end-user skills. This weakness can be attributed to the fact that the composition model does not satisfy end-user needs, rather than to the actual infrastructure technologies. In our opinion, the best way to overcome this weakness is to offer end-to-end composition from the user interface to service invocation, plus an understandable abstraction of building blocks and a visual composition technique empowering end users to develop their own applications. In this paper, we present a visual framework for end users, called FAST, which fulfils this objective. FAST implements a novel composition model designed to empower non-programmer end users to create and share their own self-service composite applications in a fully visual fashion. The development environment implementing this model was built as part of the European FP7 FAST project and was used to validate the rationale behind our approach.

Relevance:

80.00%

Publisher:

Abstract:

This thesis focuses on the analysis of two complementary aspects of cybercrime (that is, crime perpetrated over the network for financial gain). These two aspects are the infected machines used to obtain economic profit from crime through different actions (for example, click fraud, DDoS, spam) and the server infrastructure used to manage these machines (for example, C&C servers, exploit servers, monetization servers, redirectors). In the first part we investigate the threat exposure of victim computers. For this analysis we used the metadata contained in WINE-BR, a Symantec dataset. This dataset contains installation metadata for executable files (for example, file hash, publisher, installation date, filename, file version) from 8.4 million Windows users. We linked this metadata with the vulnerabilities in the National Vulnerability Database (NVD) and the Open Sourced Vulnerability Database (OSVDB) in order to track vulnerability decay over time and observe how quickly users patch their systems and, therefore, their exposure to potential attacks. We identified three factors that can influence the patching activity of victim computers: shared code, user type, and exploits. We present two new attacks against shared code and an analysis of how user expertise and exploit availability influence patching activity. For the 80 vulnerabilities in our database that affect code shared between two applications, the time between patch releases for the different applications is up to 118 days (with a median of 11 days).

In the second part we propose new active probing techniques to detect and analyze malicious server infrastructures. We leverage active probing to detect malicious servers on the Internet. We begin with the analysis and detection of exploit server operations. We identify as an operation the servers that are controlled by the same people and possibly participate in the same infection campaign. We analyzed a total of 500 exploit servers over a period of one year, where 2/3 of the operations had a single server and 1/3 had multiple servers. We extended the exploit-server detection technique to different server types (for example, C&C servers, monetization servers, redirectors) and achieved Internet-scale probing for the different categories of malicious servers. These new techniques have been incorporated into a new tool called CyberProbe. To detect these servers we developed a novel technique called Adversarial Fingerprint Generation, a methodology for generating a unique request-response model to identify the server family (that is, the type and operation the server belongs to). Starting from a malware sample and a live server of a given family, CyberProbe can generate a valid fingerprint to detect all live servers of that family. We performed 11 Internet-wide scans, detecting 151 malicious servers; 75% of these 151 servers were unknown to public databases of malicious servers.

Another issue that arises while detecting malicious servers is that some of them may be hidden behind a silent reverse proxy. To identify the prevalence of this network configuration and improve the capabilities of CyberProbe, we developed RevProbe, a new tool that detects reverse proxies by leveraging leakages in the configuration of Web reverse proxies. RevProbe identifies that 16% of the active malicious IP addresses analyzed correspond to reverse proxies, that 92% of them are silent (compared with 55% for benign reverse proxies), and that they are mainly used for load balancing across multiple servers.

ABSTRACT

In this dissertation we investigate two fundamental aspects of cybercrime: the infection of machines used to monetize the crime and the malicious server infrastructures that are used to manage the infected machines. In the first part of this dissertation, we analyze how fast software vendors apply patches to secure client applications, identifying shared code as an important factor in patch deployment. Shared code is code present in multiple programs. When a vulnerability affects shared code, the usual linear vulnerability life cycle is no longer adequate to describe how patch deployment takes place. In this work we show the consequences of shared-code vulnerabilities and demonstrate two novel attacks that can be used to exploit this condition. In the second part of this dissertation we analyze malicious server infrastructures; our contributions are: a technique to cluster exploit server operations, a tool named CyberProbe to perform large-scale detection of different malicious server categories, and RevProbe, a tool that detects silent reverse proxies. We start by identifying exploit server operations, that is, exploit servers managed by the same people.

We investigate a total of 500 exploit servers over a period of more than 13 months. We collected malware from these servers and all the metadata related to the communication with them. From this metadata we extracted features to group together servers managed by the same entity (i.e., an exploit server operation), discovering that 2/3 of the operations have a single server while 1/3 have multiple servers. Next, we present CyberProbe, a tool that detects different malicious server types through a novel technique called adversarial fingerprint generation (AFG). The idea behind CyberProbe's AFG is to run a piece of malware and observe its network communication towards malicious servers. It then replays this communication to the malicious server and outputs a fingerprint (i.e., a port selection function, a probe generation function and a signature generation function). Once the fingerprint is generated, CyberProbe scans the Internet with it and finds all the servers of a given family. We performed a total of 11 Internet-wide scans, finding 151 new servers starting from 15 seed servers. This gives CyberProbe a 10-fold amplification factor. Moreover, comparing CyberProbe with existing blacklists on the Internet, we found that only 40% of the servers detected by CyberProbe were listed. To enhance the capabilities of CyberProbe we developed RevProbe, a reverse proxy detection tool that can be integrated with CyberProbe to allow precise detection of silent reverse proxies used to hide malicious servers. RevProbe leverages leakage-based detection techniques to detect whether a malicious server is hidden behind a silent reverse proxy, as well as the server infrastructure behind it. At the core of RevProbe is the analysis of differences in the traffic observed when interacting with a remote server.
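A fingerprint in CyberProbe's sense couples a target port, a replayable probe, and a response signature. The sketch below models that triple; the class shape, the `classify` helper, the "fakeav-cc" family name, and the documentation-range IP addresses are all illustrative assumptions, not CyberProbe's actual interface:

```python
import re
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Fingerprint:
    """Illustrative model of an adversarial fingerprint: which port to
    probe, what request to replay, and a signature that recognizes
    responses from servers of this family."""
    family: str
    port: int
    probe: bytes            # request replayed from observed malware traffic
    signature: re.Pattern   # compiled bytes pattern over the response

def classify(responses: Dict[str, bytes], fp: Fingerprint) -> List[str]:
    """Given raw responses collected by probing fp.port on each host,
    return the hosts whose response matches the family signature."""
    return sorted(h for h, data in responses.items()
                  if fp.signature.search(data))
```

In the real tool the probing stage is an Internet-wide scan and the signature is generated automatically from request-response pairs; this sketch only shows the final matching step.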

Relevance:

80.00%

Publisher:

Abstract:

Editor: Nov. 1873-Aug. 1905, Mary M. Dodge.

Relevance:

40.00%

Publisher:

Abstract:

The structural evolution of an ice-quenched high-density polyethylene (HDPE) subjected to uniaxial tensile deformation at elevated temperatures was examined as a function of the imposed strains by means of combined synchrotron small-angle X-ray scattering (SAXS) and wide-angle X-ray scattering (WAXS) techniques. The data show that when stretching an isotropic sample with a spherulitic structure, intralamellar slipping of crystalline blocks was activated at small deformations, followed by a stress-induced fragmentation and recrystallization process yielding lamellar crystallites with their normals parallel to the stretching direction. Stretching an isothermally crystallized HDPE sample at 120 degrees C produced changes in the SAXS diagram with strain similar to those observed for quenched HDPE elongated at room temperature, implying that the thermal stability of the crystal blocks composing the lamellae depends only on the crystallization temperature.

Relevance:

40.00%

Publisher:

Abstract:

A dual-reflector antenna composed of a small reconfigurable reflectarray subreflector and a large parabolic main reflector is proposed for beam-scanning applications in the 120 GHz frequency band. Beam scanning is achieved by changing the phase distribution on the reflectarray surface, which is assumed to contain reconfigurable cells. The phase distribution for the different beam-deflecting states is obtained with a synthesis technique based on the analysis of the antenna in receive mode.
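The phase distribution mentioned above follows the standard reflectarray synthesis relation: each cell must compensate the spatial delay from the feed and impose a progressive phase toward the desired beam direction. A sketch of that relation (the flat-array geometry, feed position, and parameter values are illustrative, not the dual-reflector configuration of the paper):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def reflectarray_phases(elements, feed, freq_hz, theta0, phi0):
    """Required reflection phase (rad, wrapped to [0, 2*pi)) at each cell
    of a flat reflectarray so the reradiated field forms a beam toward
    (theta0, phi0): k0 * (feed path length - progressive phase term)."""
    k0 = 2 * math.pi * freq_hz / C
    fx, fy, fz = feed
    phases = []
    for x, y in elements:
        path = math.sqrt((x - fx) ** 2 + (y - fy) ** 2 + fz ** 2)
        progressive = (x * math.cos(phi0) + y * math.sin(phi0)) * math.sin(theta0)
        phases.append((k0 * (path - progressive)) % (2 * math.pi))
    return phases
```

Re-steering the beam amounts to re-evaluating this distribution for a new (theta0, phi0) and programming the reconfigurable cells accordingly, which is what makes electronic beam scanning possible without moving parts.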

Relevance:

40.00%

Publisher:

Abstract:

This paper proposes a hybrid scanning antenna architecture for applications in mm-wave intelligent mobile sensing and communications. We experimentally demonstrate suitable W-band leaky-wave antenna prototypes in substrate integrated waveguide (SIW) technology. Three SIW antennas have been designed that within a 6.5% fractional bandwidth provide beam scanning over three adjacent angular sectors. Prototypes have been fabricated and their performance has been experimentally evaluated. The measured radiation patterns have shown three frequency-scanning beams covering angles from 11 to 56 degrees with a beamwidth of 10 ± 3 degrees within the 88-94 GHz frequency range.
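Frequency scanning in a leaky-wave antenna follows from sin(theta) ≈ beta(f)/k0: as frequency rises, the guided phase constant grows relative to free space and the main beam tilts further from broadside. A sketch using an idealized air-filled TE10-like dispersion model (the `w_eff` and `eps_r` values are invented illustrative parameters, not those of the paper's prototypes):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def beam_angle_deg(freq_hz, w_eff, eps_r):
    """Main-beam direction (degrees from broadside) of a leaky-wave
    antenna via sin(theta) = beta/k0, with beta from a TE10-like
    equivalent-waveguide dispersion: beta^2 = k0^2*eps_r - (pi/w_eff)^2."""
    k0 = 2 * math.pi * freq_hz / C
    beta = math.sqrt(max(k0 ** 2 * eps_r - (math.pi / w_eff) ** 2, 0.0))
    return math.degrees(math.asin(min(beta / k0, 1.0)))
```

Sweeping the frequency across the operating band therefore sweeps the beam across an angular sector, which is exactly the 88-94 GHz scanning behaviour the measurements report.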

Relevance:

40.00%

Publisher:

Abstract:

ACM Computing Classification System (1998): J.2.