998 results for servers


Relevance:

20.00%

Publisher:

Abstract:

This paper reviews the technical features and requirements of Building Information Modelling (BIM) servers as collaboration platforms for multi-disciplinary building and construction projects. Multi-disciplinary collaboration is the norm in the Architecture, Engineering, and Construction (AEC) industries, especially in complex projects. The widespread adoption of object-oriented Computer-Aided Design (CAD) tools that support BIM capabilities has generated greater interest in model-based exchange of information across disciplines and consultants, who have traditionally collaborated through the frequent exchange of 2D drawings and documents. BIM-servers are collaboration platforms that are expected to provide the technical capability to support this inter-disciplinary exchange of 3D models, in addition to intelligent management of the related drawings, documents and other forms of data. Since BIM-servers are a recent technical development, a review of their technical features can inform further development. This paper serves that objective by reviewing the technical features and requirements for using BIM-servers as multi-disciplinary collaboration platforms on building and construction projects. The methodologies include focus group interviews (FGIs) with representatives from the diverse AEC disciplines, a case study of a state-of-the-art BIM-server, and a critical review and analysis of the collaboration platforms currently available to the AEC industries. The paper concludes that greater emphasis should be placed on supporting technical requirements to facilitate technology management and implementation across disciplines. The implications for user-centric technology development in the design and construction industry are also discussed.

Relevance:

20.00%

Publisher:

Abstract:

Server responsiveness and scalability are more important than ever in today's client/server-dominated network environments. Recently, researchers have begun to consider cluster-based computers built from commodity hardware as an alternative to expensive specialized hardware for building scalable Web servers. In this paper, we present performance results comparing two cluster-based Web servers based on different server infrastructures: MAC-based dispatching (LSMAC) and IP-based dispatching (LSNAT). Both cluster-based server systems were implemented as application-space programs running on commodity hardware. We point out the advantages and disadvantages of both systems, and we identify when servers should be clustered and when clustering will not improve performance.
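As an illustration of the dispatching idea behind such cluster front ends, the following Python fragment is a minimal sketch (not code from the paper): it forwards each incoming connection to a back-end server chosen round-robin. The back-end addresses and ports are hypothetical, and real LSMAC/LSNAT systems rewrite frames or packets rather than proxying byte streams in application space as done here.

    import itertools
    import socket
    import threading

    BACKENDS = [("10.0.0.1", 80), ("10.0.0.2", 80)]  # hypothetical back-end servers
    rr = itertools.cycle(BACKENDS)                   # round-robin selector

    def pipe(src, dst):
        # Shuttle bytes one way until the source closes.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.close()

    def proxy(client):
        backend = socket.create_connection(next(rr))  # pick the next server
        threading.Thread(target=pipe, args=(client, backend)).start()
        pipe(backend, client)

    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", 8080))
    listener.listen()
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=proxy, args=(conn,)).start()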

Relevance:

20.00%

Publisher:

Abstract:

Web content hosting, in which a Web server stores and provides Web access to documents for different customers, is becoming increasingly common. For example, a Web server can host Web pages for several different companies and individuals. Traditionally, Web Service Providers (WSPs) provide all customers with the same level of performance (best-effort service); most service differentiation has been in the pricing structure (individual vs. business rates) or the connectivity type (dial-up access vs. leased line, etc.). This report presents DiffServer, a program that implements two simple, server-side, application-level mechanisms (server-centric and client-centric) to provide different levels of Web service. The experiments show that this additional layer of abstraction between the client and the Apache Web server adds little overhead under light load conditions, and that the average waiting time for high-priority requests decreases significantly once requests are assigned priorities, compared with a FIFO approach.
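The general idea of application-level request prioritization can be sketched as follows (an illustrative sketch, not DiffServer's actual mechanism): high-priority requests are dequeued before low-priority ones, while FIFO order is preserved within each priority class.

    import heapq
    import itertools

    counter = itertools.count()  # tie-breaker: preserves FIFO order within a class

    class PriorityRequestQueue:
        def __init__(self):
            self._heap = []

        def put(self, request, priority):  # 0 = high priority, 1 = low priority
            heapq.heappush(self._heap, (priority, next(counter), request))

        def get(self):
            return heapq.heappop(self._heap)[2]

    q = PriorityRequestQueue()
    q.put("GET /b.html (basic customer)", priority=1)
    q.put("GET /a.html (premium customer)", priority=0)
    assert q.get().startswith("GET /a.html")  # high priority is served first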

Relevance:

20.00%

Publisher:

Abstract:

The Session Initiation Protocol (SIP) is an application-layer control protocol standardized by the IETF for creating, modifying and terminating multimedia sessions. With the increasing use of SIP in large deployments, the current SIP design cannot handle overload effectively, and SIP networks may suffer congestion collapse under heavy offered load. This paper introduces a distributed end-to-end overload control (DEOC) mechanism, which is deployed at the edge servers of SIP networks and is easy to implement. By applying overload control closest to the source of traffic, DEOC keeps throughput high for SIP networks even when the offered load exceeds the capacity of the network. It also responds quickly to sudden variations in the offered load and achieves good fairness. Theoretical analysis and extensive simulations verify that DEOC is effective in controlling overload in SIP networks.
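Edge-based admission control is commonly built on rate limiting. As a hedged illustration (not DEOC's actual algorithm), a token bucket at an edge server could decide whether to forward or reject each new session request; the rate and burst values below are hypothetical.

    import time

    class TokenBucket:
        """Admit new SIP sessions at the edge only while tokens remain (illustrative)."""
        def __init__(self, rate, burst):
            self.rate, self.capacity = rate, burst
            self.tokens, self.last = burst, time.monotonic()

        def admit(self):
            now = time.monotonic()
            # Refill tokens in proportion to elapsed time, up to the burst capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # reject the session request under overload

    bucket = TokenBucket(rate=100.0, burst=20)  # hypothetical: 100 sessions/s
    if bucket.admit():
        pass  # forward the INVITE into the network
    else:
        pass  # e.g. answer 503 Service Unavailable at the edge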

Relevance:

20.00%

Publisher:

Abstract:

The Session Initiation Protocol (SIP) has been adopted by the IETF as the control protocol for creating, modifying and terminating multimedia sessions. Overload occurs in SIP networks when SIP servers have insufficient resources to handle received messages, and under overload SIP networks may suffer congestion collapse because current SIP overload control mechanisms are ineffective. This paper introduces a probe-based end-to-end overload control (PEOC) mechanism, which is deployed at the edge servers of SIP networks and is easy to implement. By probing the SIP network with SIP messages, PEOC estimates the network load and controls the traffic admitted to the network according to that estimate. Theoretical analysis and extensive simulations verify that PEOC keeps throughput high for SIP networks even when the offered load exceeds the capacity of the network. It also responds quickly to sudden variations in the offered load and achieves good fairness.
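The probe-then-adjust loop can be pictured as follows (an illustrative sketch under assumed names, not PEOC's published algorithm): time a probe message's round trip, treat slow or missing responses as a sign of overload, and adapt the admitted rate AIMD-style.

    import time

    def probe_delay(send_probe, timeout=0.5):
        """Send a SIP probe (e.g. OPTIONS) via the hypothetical send_probe callback
        and time the response; a missing response counts as the full timeout."""
        start = time.monotonic()
        ok = send_probe(timeout=timeout)
        return (time.monotonic() - start) if ok else timeout

    def adjust_rate(rate, delay, target=0.05, step=5.0, factor=0.5):
        # AIMD-style control: additive increase while probes return quickly,
        # multiplicative decrease once probe delay indicates overload.
        return rate + step if delay < target else rate * factor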

Relevance:

20.00%

Publisher:

Abstract:

This thesis focuses on the analysis of two complementary aspects of cybercrime (i.e., crime perpetrated over the network for financial gain): the infected machines used to obtain economic profit from crime through different actions (e.g., click fraud, DDoS, spam), and the server infrastructure used to manage these machines (e.g., C&C servers, exploit servers, monetization servers, redirectors). The first part investigates the threat exposure of victim computers. For this analysis we used the metadata contained in Symantec's WINE-BR dataset, which holds installation metadata for executable files (e.g., file hash, publisher, installation date, file name, file version) from 8.4 million Windows users. We associated this metadata with the vulnerabilities in the National Vulnerability Database (NVD) and the Open Sourced Vulnerability Database (OSVDB) in order to track vulnerability decay over time and observe how quickly users patch their systems and, therefore, their exposure to potential attacks. We identified three factors that can influence the patching activity of victim computers: shared code, user type, and exploits. We present two novel attacks against shared code and an analysis of how user knowledge and exploit availability influence patching activity. For the 80 vulnerabilities in our database that affect code shared between two applications, the time between patch releases in the different applications is up to 118 days (with a median of 11 days). The second part proposes new active probing techniques to detect and analyze malicious server infrastructures. We start with the analysis and detection of exploit server operations, where an operation comprises the servers that are controlled by the same people and possibly take part in the same infection campaign. We analyzed a total of 500 exploit servers over a period of one year, finding that 2/3 of the operations had a single server while 1/3 had multiple servers. We extended the detection technique from exploit servers to other server types (e.g., C&C servers, monetization servers, redirectors) and achieved Internet-scale probing for the different categories of malicious servers. These new techniques have been incorporated into a new tool called CyberProbe. To detect these servers we developed a novel technique called Adversarial Fingerprint Generation, a methodology for generating a unique request-response model that identifies a server family (i.e., the type and operation the server belongs to). Starting from a malware file and a live server of a given family, CyberProbe can generate a valid fingerprint to detect all live servers of that family. We performed 11 Internet-wide scans, detecting 151 malicious servers; 75% of these 151 servers were unknown to public databases of malicious servers.
Another issue that arises when detecting malicious servers is that some of them may be hidden behind a silent reverse proxy. To identify the prevalence of this network configuration and improve the capabilities of CyberProbe, we developed RevProbe, a new tool that detects reverse proxies by leveraging leakage in the configuration of Web reverse proxies. RevProbe identifies that 16% of the active malicious IP addresses analyzed correspond to reverse proxies, that 92% of them are silent (compared with 55% of benign reverse proxies), and that they are mainly used for load balancing across multiple servers.
ABSTRACT
In this dissertation we investigate two fundamental aspects of cybercrime: the infection of machines used to monetize the crime and the malicious server infrastructures used to manage the infected machines. In the first part of this dissertation, we analyze how fast software vendors apply patches to secure client applications, identifying shared code as an important factor in patch deployment. Shared code is code present in multiple programs. When a vulnerability affects shared code, the usual linear vulnerability life cycle no longer adequately describes how patch deployment takes place. In this work we show the consequences of shared code vulnerabilities and demonstrate two novel attacks that exploit this condition. In the second part of this dissertation we analyze malicious server infrastructures. Our contributions are: a technique to cluster exploit server operations, a tool named CyberProbe to perform large-scale detection of different malicious server categories, and RevProbe, a tool that detects silent reverse proxies. We start by identifying exploit server operations, that is, exploit servers managed by the same people. We investigate a total of 500 exploit servers over a period of more than 13 months. We collected malware from these servers together with all the metadata related to the communication with them, and from this metadata we extracted features to group together servers managed by the same entity (i.e., an exploit server operation), discovering that 2/3 of the operations have a single server while 1/3 have multiple servers. Next, we present CyberProbe, a tool that detects different malicious server types through a novel technique called adversarial fingerprint generation (AFG). The idea behind CyberProbe's AFG is to run a piece of malware and observe its network communication towards malicious servers. It then replays this communication to the malicious server and outputs a fingerprint (i.e., a port selection function, a probe generation function and a signature generation function). Once the fingerprint is generated, CyberProbe scans the Internet with it and finds all the servers of a given family. We performed a total of 11 Internet-wide scans, finding 151 new servers starting from 15 seed servers, a tenfold amplification factor. Moreover, comparing CyberProbe with existing blacklists on the Internet, we found that only 40% of the servers detected by CyberProbe were listed. To enhance the capabilities of CyberProbe we developed RevProbe, a reverse proxy detection tool that can be integrated with CyberProbe to allow precise detection of silent reverse proxies used to hide malicious servers.
RevProbe leverages leakage-based detection techniques to determine whether a malicious server is hidden behind a silent reverse proxy and to identify the server infrastructure behind it. At the core of RevProbe is the analysis of differences in the traffic observed when interacting with a remote server.
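A minimal sketch of the adversarial-fingerprint idea described above (all names, ports and payloads are hypothetical, not CyberProbe's actual fingerprints): a fingerprint couples a port selection, a malware-derived probe and a response signature, and scanning reduces to replaying the probe against candidate addresses and matching the reply.

    import re
    import socket

    # A fingerprint, per the description: port selection, probe, response signature.
    FINGERPRINT = {
        "family": "example-c2",  # hypothetical malware family
        "port": 8080,            # port selection
        "probe": b"GET /gate.php?id=0 HTTP/1.1\r\nHost: x\r\n\r\n",  # probe payload
        "signature": re.compile(rb"^HTTP/1\.[01] 200 .*X-Bot-Task:", re.S),  # hypothetical marker
    }

    def matches(ip, fp, timeout=3.0):
        """Replay the probe against ip:port and check the response signature."""
        try:
            with socket.create_connection((ip, fp["port"]), timeout=timeout) as s:
                s.sendall(fp["probe"])
                reply = s.recv(4096)
        except OSError:
            return False
        return bool(fp["signature"].search(reply))

    # Internet-scale scanning then reduces to running `matches` over candidate addresses.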

Relevance:

20.00%

Publisher:

Abstract:

Ontologies are increasingly becoming essential tools for solving problems in many research areas. An ontology is a complex information object that can contain millions of concepts in complex relationships. When we want to manage complex information objects, we generally turn to information systems technology; an information system intended to manage ontologies is called an ontology server. At the time of writing, ontology server technology is quite immature. This paper therefore reviews and compares the main ontology servers that have been reported in the literature. As a result, we point out several research questions related to ontology server technology.

Relevance:

20.00%

Publisher:

Abstract:

How are innovative new business models established if organizations constantly compare themselves against existing criteria and expectations? The objective is to address this question from the perspective of innovators and their ability to redefine established expectations and evaluation criteria. The research questions ask whether there are discernible patterns of discursive action through which innovators theorize institutional change, and what role such theorizations play in mobilizing support and realizing change projects. These questions are investigated through a case study on a critical area of enterprise computing software: Java application servers. In the present case, business practices and models were already well established among incumbents, with critical market areas allocated to a few dominant firms. Fringe players started experimenting with a new business approach of selling services around freely available open-source application servers. While most new players struggled, one new entrant succeeded in leading incumbents to adopt and compete on the new model. The case demonstrates that innovative and substantially new models and practices become established in organizational fields when innovators are able to redefine expectations and evaluation criteria within the field. The study addresses the theoretical paradox of embedded agency: actors who are embedded in prevailing institutional logics and structures find it hard to perceive potentially disruptive opportunities that fall outside existing ways of doing things, and changing those logics and structures requires strategic and institutional work aimed at overcoming barriers to innovation. The study approaches this problem through the lens of (new) institutional theory, using a discourse methodology to trace the process through which innovators established a new social and business model in the field.

Relevance:

20.00%

Publisher:

Abstract:

This paper describes the design, implementation and evaluation of AX Host, a custom surrogate host for ActiveX in-process servers. AX Host aims to give ActiveX client applications improved stability by using software fault isolation.

Relevance:

20.00%

Publisher:

Abstract:

There is a rich history of social science research centering on racial inequalities that continue to be observed across various markets (e.g., labor, housing, and credit markets) and social milieus. Existing research on racial discrimination in consumer markets, however, is relatively scarce, and what has been done has disproportionately focused on consumers as the victims of race-based mistreatment. As such, we know relatively little about how consumers contribute to inequalities in their roles as perpetrators of racial discrimination. In response, this paper elaborates on a line of research that is only in its infancy and yet is ripe with opportunities to advance the literature on consumer racial discrimination and racial earnings inequities among tip-dependent employees in the United States. Specifically, we analyze data derived from a large exit survey of restaurant consumers (n=378) in an attempt to replicate, extend, and further explore the recently documented effect of service providers' race on restaurant consumers' tipping decisions. Our results indicate that both White and Black restaurant customers discriminate against Black servers by tipping them less than their White coworkers. Importantly, we find no evidence that this Black tip penalty is the result of interracial differences in the service skills of Black and White servers. We conclude by delineating directions for future research in this neglected but salient area of study.

Relevance:

20.00%

Publisher:

Abstract:

Background: Protein structural alignment is one of the most fundamental and crucial areas of research in computational structural biology. Comparing a protein structure with known structures helps to classify it as new or as belonging to a known group of proteins. This, in turn, is useful for determining the protein's function, its evolutionary relationships with other protein molecules, and the principles underlying protein architecture and folding. Results: A large number of protein structure alignment methods are available, and each tool has its own strengths and weaknesses that need to be highlighted. We compared six of the most popular and publicly available Web-based servers for protein structure comparison with respect to functionality (the features these servers provide) and accuracy (how well the structural comparison is performed), using CATH as the reference. The results showed that, overall, CE was the top performer. DALI and PhyreStorm showed similar results, whereas PDBeFold showed the lowest performance. In cases with few secondary structural elements, CE, DALI and PhyreStorm gave a 100% success rate. Conclusion: None of the structural alignment servers achieved a 100% success rate overall. Analyses of overall performance and of the effects of mainly-alpha and mainly-beta structures showed consistent results; CE, DALI, FatCat and PhyreStorm achieved success rates above 90%.

Relevance:

20.00%

Publisher:

Abstract:

In recent years, we have witnessed the substantial growth of real-time streaming applications, such as video surveillance systems at the road crossings of a city. So far, real-world applications mainly rely on the traditional, well-known client-server and peer-to-peer schemes as the fundamental communication mechanisms. However, due to the limited resources of each terminal device, these two schemes cannot fully leverage the processing capability between the source and destination of the video traffic, which limits the streaming services that can be offered; many QoS-sensitive applications therefore cannot be supported in the real world. In this paper, we address this problem by proposing a novel multi-server-based framework in which multiple servers collaborate to form a virtual server (also called a cloud server) and provide high-quality services such as real-time stream delivery and storage. Based on this framework, we introduce a (1-ε) approximation algorithm to solve the NP-complete "maximum services" (MS) problem, with the intention of handling the large number of streaming flows originating in the network and maximizing the total number of services. Moreover, to back up the streaming data for later retrieval, we propose an algorithm, based on the same framework, that implements backups and maximizes streaming flows simultaneously. We conduct a series of simulation-based experiments to evaluate the performance of the proposed framework and compare our scheme with several traditional solutions. The results suggest that our scheme significantly outperforms the traditional ones.
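The flavor of such a service-maximization problem can be conveyed with a simple greedy placement (a sketch only; the paper's (1-ε) approximation algorithm is more involved and is not reproduced here).

    def assign_flows(flows, servers):
        """Greedy sketch: place each flow on the collaborating server with the most
        remaining capacity and count how many flows the virtual server can carry.
        Illustrative only; not the paper's approximation algorithm."""
        served = 0
        for demand in sorted(flows):  # cheapest flows first
            best = max(range(len(servers)), key=lambda i: servers[i])
            if servers[best] >= demand:
                servers[best] -= demand
                served += 1
        return served

    # e.g. three servers pooled into one virtual server:
    print(assign_flows(flows=[2, 3, 5, 4], servers=[6, 5, 4]))  # -> 3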

Relevance:

10.00%

Publisher:

Abstract:

Single-walled carbon nanotubes (SWNTs) were incorporated in polymer nanocomposites based on poly(3-octylthiophene) (P3OT), thermoplastic polyurethane (TPU), or a blend of the two. Thermogravimetry demonstrated the success of the purification procedure employed in the chemical treatment of the SWNTs prior to composite preparation. Stable dispersions of SWNTs in chloroform were obtained through non-covalent interactions with the dissolved polymers. The composites exhibited glass transition temperatures, melting temperatures and heats of fusion that changed relative to the pure polymers; this behavior is discussed as arising from interactions between the nanotubes and the polymers. The room-temperature conductivity of the blend (TPU-P3OT) with SWNTs is higher than that of the P3OT/SWNT composite.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a prototype system for tracking people in enclosed indoor environments with a high rate of occlusions. The system uses a stereo camera for acquisition and is capable of disambiguating occlusions using a combination of depth-map analysis, a two-step ellipse-fitting people-detection process, motion models and Kalman filters, and a novel fit metric based on computationally simple object statistics. Testing shows that our fit metric outperforms commonly used position-based and histogram-based metrics, resulting in more accurate tracking of people.
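For readers unfamiliar with the filtering step, a minimal constant-velocity Kalman filter for a single tracked person might look as follows (an illustrative sketch; the matrices and noise values are hypothetical, not the paper's).

    import numpy as np

    # Constant-velocity model: state is [px, py, vx, vy]; we observe position only.
    dt = 1.0
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)    # measurement model
    Q = np.eye(4) * 0.01                         # process noise (assumed)
    R = np.eye(2) * 0.5                          # measurement noise (assumed)

    def predict(x, P):
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z):
        y = z - H @ x                    # innovation: measurement minus prediction
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        return x + K @ y, (np.eye(4) - K @ H) @ P

    x, P = np.zeros(4), np.eye(4)
    x, P = predict(x, P)
    x, P = update(x, P, z=np.array([1.2, 0.8]))  # e.g. a centroid from ellipse fitting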