817 results for Proxy servers
Abstract:
This project is grounded in the study of the HTTP protocol, which follows the client-server standard: its structure and operation, a worked example, the types of data carried by requests and responses, and its application in the case of intermediary servers (…)
Abstract:
A Web service-based application is an architectural style in which a collection of Web services communicate with each other to execute processes. With the growing popularity of Web service-based applications, and since Web services may change in both functional terms and non-functional Quality of Service (QoS), we need mechanisms to monitor, diagnose, and repair the Web services within a Web application. This work presents a self-healing architecture that provides these mechanisms. Further contributions of this paper are the use of a proxy server to measure Web service QoS values and the employment of strategies to recover from the effects of misbehaving Web services. © 2008 IEEE.
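The proxy-based QoS measurement described above could be sketched roughly as follows. This is a minimal illustration, not the paper's actual design: the names (`QoSProxy`, `max_latency`) and the latency-only metric are assumptions made for the example.

```python
import time

# Hypothetical sketch of proxy-based QoS monitoring: the proxy wraps each
# Web service invocation, records the response time, and reports whether
# the observed QoS stays within a threshold. All names are illustrative.

class QoSProxy:
    def __init__(self, service, max_latency=1.0):
        self.service = service          # callable standing in for a Web service
        self.max_latency = max_latency  # assumed SLA threshold in seconds
        self.samples = []               # observed response times

    def invoke(self, *args, **kwargs):
        start = time.perf_counter()
        result = self.service(*args, **kwargs)
        self.samples.append(time.perf_counter() - start)
        return result

    def healthy(self):
        # Diagnose: is the average observed latency within the SLA?
        if not self.samples:
            return True
        return sum(self.samples) / len(self.samples) <= self.max_latency

proxy = QoSProxy(lambda x: x * 2, max_latency=0.5)
print(proxy.invoke(21))   # 42
print(proxy.healthy())    # True for this fast local stub
```

A self-healing layer could then consult `healthy()` to decide whether to retry, substitute, or rebind a misbehaving service.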
Abstract:
This thesis focuses on the analysis of two complementary aspects of cybercrime (that is, crime perpetrated over the network for financial gain). These two aspects are the infected machines used to obtain economic profit from crime through different actions (for example, click fraud, DDoS, spam) and the server infrastructure used to manage these machines (for example, C&C, exploit servers, monetization servers, redirectors). The first part investigates the exposure of victim computers to threats. For this analysis we used the metadata contained in WINE-BR, a Symantec dataset. This dataset contains installation metadata for executable files (for example, the file hash, its publisher, installation date, file name, and file version) from 8.4 million Windows users. We linked this metadata with the vulnerabilities in the National Vulnerability Database (NVD) and the Open Sourced Vulnerability Database (OSVDB) in order to track vulnerability decay over time and observe how quickly users patch their systems and, therefore, their exposure to potential attacks. We identified three factors that can influence the patching activity of victim computers: shared code, user type, and exploits. We present two new attacks against shared code and an analysis of how user knowledge and exploit availability influence patching activity.
For the 80 vulnerabilities in our database that affect code shared between two applications, the time between patch releases for the different applications is up to 118 days (with a median of 11 days). The second part proposes new active probing techniques to detect and analyze malicious server infrastructures. We leverage active probing to detect malicious servers on the Internet, beginning with the analysis and detection of exploit server operations. We identify as one operation the servers that are controlled by the same people and possibly take part in the same infection campaign. We analyzed a total of 500 exploit servers over a one-year period, finding that 2/3 of the operations had a single server and 1/3 had multiple servers. We extended the exploit server detection technique to other server types (for example, C&C, monetization servers, redirectors) and achieved Internet-scale probing for the different categories of malicious servers. These new techniques have been incorporated into a new tool called CyberProbe. To detect these servers we developed a novel technique called Adversarial Fingerprint Generation, a methodology for generating a unique request-response model to identify a server family (that is, the type and operation the server belongs to). Starting from a malware file and an active server of a given family, CyberProbe can generate a valid fingerprint to detect all live servers of that family. We performed 11 Internet-wide scans detecting 151 malicious servers; 75% of these 151 servers were unknown to public databases of malicious servers.
Another issue that arises while detecting malicious servers is that some of them may be hidden behind a silent reverse proxy. To measure the prevalence of this network configuration and to improve the capabilities of CyberProbe, we developed RevProbe, a new tool that detects reverse proxies by exploiting leakages in the configuration of Web reverse proxies. RevProbe identifies that 16% of the active malicious IP addresses analyzed correspond to reverse proxies, that 92% of those are silent compared to 55% of benign reverse proxies, and that they are mainly used for load balancing across multiple servers. ABSTRACT In this dissertation we investigate two fundamental aspects of cybercrime: the infection of machines used to monetize the crime and the malicious server infrastructures that are used to manage the infected machines. In the first part of this dissertation, we analyze how fast software vendors apply patches to secure client applications, identifying shared code as an important factor in patch deployment. Shared code is code present in multiple programs. When a vulnerability affects shared code, the usual linear vulnerability life cycle no longer describes how patch deployment takes place. In this work we show the consequences of shared code vulnerabilities and demonstrate two novel attacks that exploit this condition. In the second part of this dissertation we analyze malicious server infrastructures. Our contributions are: a technique to cluster exploit server operations, a tool named CyberProbe that performs large-scale detection of different malicious server categories, and RevProbe, a tool that detects silent reverse proxies. We start by identifying exploit server operations, that is, exploit servers managed by the same people.
We investigate a total of 500 exploit servers over a period of more than 13 months. We collected malware from these servers and all the metadata related to the communication with them. From this metadata we extracted different features to group together servers managed by the same entity (i.e., an exploit server operation), discovering that 2/3 of the operations have a single server while 1/3 have multiple servers. Next, we present CyberProbe, a tool that detects different malicious server types through a novel technique called adversarial fingerprint generation (AFG). The idea behind CyberProbe's AFG is to run a piece of malware and observe its network communication towards malicious servers. It then replays this communication to the malicious server and outputs a fingerprint (i.e., a port selection function, a probe generation function, and a signature generation function). Once the fingerprint is generated, CyberProbe scans the Internet with it and finds all the servers of a given family. We performed a total of 11 Internet-wide scans, finding 151 new servers starting from 15 seed servers, a tenfold amplification factor for CyberProbe. Moreover, we compared CyberProbe with existing blacklists on the Internet, finding that only 40% of the servers detected by CyberProbe were listed. To enhance the capabilities of CyberProbe we developed RevProbe, a reverse proxy detection tool that can be integrated with CyberProbe to allow precise detection of silent reverse proxies used to hide malicious servers. RevProbe leverages leakage-based detection techniques to detect whether a malicious server is hidden behind a silent reverse proxy, as well as the infrastructure of servers behind it. At the core of RevProbe is the analysis of differences in the traffic observed when interacting with a remote server.
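The fingerprint structure described above (port selection, probe generation, response signature) can be sketched as follows. This is an illustrative reconstruction only: the family name, probe bytes, and signature below are invented for the example and are not CyberProbe's actual fingerprints.

```python
import re
import socket
from dataclasses import dataclass

# Sketch of an adversarial fingerprint: where to probe (ports), what to
# send (probe), and how to recognize a reply (signature). All concrete
# values are hypothetical examples, not real malware indicators.

@dataclass
class Fingerprint:
    family: str
    ports: list            # port selection: which ports to probe
    probe: bytes           # probe generation: the replayed request
    signature: re.Pattern  # signature: classifies the response

    def matches(self, response: bytes) -> bool:
        return self.signature.search(response) is not None

fp = Fingerprint(
    family="example-c2",
    ports=[8080],
    probe=b"GET /gate.php HTTP/1.1\r\nHost: x\r\n\r\n",
    signature=re.compile(rb"X-Backend: example-panel"),
)

def scan(fp: Fingerprint, hosts):
    """Replay the probe to each host and collect signature matches."""
    found = []
    for host in hosts:
        for port in fp.ports:
            try:
                with socket.create_connection((host, port), timeout=3) as s:
                    s.sendall(fp.probe)
                    if fp.matches(s.recv(4096)):
                        found.append((host, port))
            except OSError:
                pass  # unreachable hosts are simply skipped
    return found

# Offline check of the classification step alone:
print(fp.matches(b"HTTP/1.1 200 OK\r\nX-Backend: example-panel\r\n\r\n"))  # True
```

At Internet scale the same idea applies, with the scan loop replaced by a high-rate prober and the signature applied to each captured response.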
Abstract:
Mass spectrometric uranium-series dating and C-O isotopic analysis of a stalagmite from Lynds Cave, northern Tasmania, Australia provide a high-resolution record of regional climate change between 5100 and 9200 yr before present (BP). Combined δ18O, δ13C, growth rate, initial U-234/U-238 and physical property (color, transparency and porosity) records allow recognition of seven climatic stages: Stage I (>9080 yr BP) - a relatively dry period at the beginning of stalagmite growth, evidenced by elevated U-234/U-238; Stage II (9080-8600 yr BP) - a period of unstable climate characterized by high-frequency variability in temperature and bio-productivity; Stage III (8600-8000 yr BP) - a period of stable, moderate precipitation and stable, high bio-productivity, with a continuously rising temperature; Stage IV (8000-7400 yr BP) - the warmest period, with high evaporation and low effective precipitation (rainfall less evaporation); Stage V (7400-7000 yr BP) - the wettest period, with the highest stalagmite growth and enhanced but unstable bio-productivity; Stage VI (7000-6600 yr BP) - a period of significantly reduced precipitation and bio-productivity without noticeable change in temperature; Stage VII (6600-5100 yr BP) - a period of lowest temperature and precipitation, marking a significant climatic deterioration. Overall, the records suggest that the warmest climate occurred between 8000 and 7400 yr BP, followed by the wettest period between 7400 and 7000 yr BP. These are broadly correlated with the so-called 'Mid-Holocene optimum' previously proposed on the basis of pollen and lake-level records. However, the timing and resolution of the speleothem record from Lynds Cave are significantly higher than in both the pollen and lake-level records.
This allows us to correlate the abrupt change in physical property, δ18O, δ13C, growth rate, and initial U-234/U-238 of the stalagmite at ~8000 yr BP with a global climatic event at the Early-Mid Holocene transition. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Human-Computer Interaction (HCI) is the interaction between computers and people, and context awareness (CA) is one of the most important components of HCI. In particular, when there are sequential or continuous tasks between users and devices, among users, or among devices, it is important to decide the next action using the right context. To make sound decisions we have to gather all context information into one structure; in this article we define that structure as the Context-Aware Matrix (CAM). However, making exact decisions is hard because of problems such as low accuracy, overhead, and bad context injected by attackers, and many researchers have been studying how to solve these problems. Moreover, HCI still has weaknesses with respect to safe use. In this article, we propose building a CAM that includes selecting the best server in each area. As a result, moving users can be served in the best possible way.
Abstract:
ABSTRACT - Introduction - This research project set out to study risk-adjusted capitation financing in the context of vertically integrated healthcare, using in particular information on ambulatory drug consumption as a proxy for disease burden. In our country, factors such as the expansion of vertically integrated provider structures, the historical inadequacy of their payment model, and the recent availability of ambulatory drug-consumption information in computerized databases are three strong reasons to develop knowledge on this topic. Methods - This work comprises two main phases: i) the adaptation and application of a drug-consumption model for estimating ambulatory disease burden (called PRx). This phase required selecting, structuring and classifying the model; its application used computerized drug-consumption databases for 2007 and 2008 for the Alentejo Health Region; ii) in the second phase, three alternative financing models proposed for financing the ULS in Portugal were simulated. In particular, we analyzed the risk-adjustment dimensions and variables (mortality, morbidity and per capita cost indices), their relative weighting and the resulting financial impact. Results - With the development of the PRx model, we estimate that 36% of the residents of the Alentejo region have at least one chronic disease, with the model's ability to estimate drug consumption at around 0.45 (R2). This model proved to be an alternative to traditional information sources such as other international studies or the National Health Survey.
Using the PRx values for per capita financing introduces changes relative to other models proposed in this context. After analyzing the financing amounts under the alternative scenarios, with models 1 and 2 showing percentile agreement levels closer to each other than to model 3, model 1 was selected as the most suitable for our setting. Conclusion - The application of the PRx model in a health region showed, given the results achieved, that it is already possible to structure and operationalize a model that estimates ambulatory disease burden from information on patients' drug-consumption profiles. Using this information to finance vertically integrated healthcare organizations changes their current level of financing. Taking this study as a starting point in which only part of the topic is settled, other structural questions of the current financing system should not be forgotten in this context. ------- ABSTRACT - Introduction - The main goal of this study was the development of a risk-adjustment model for financing integrated delivery systems (IDS) in Portugal. The recent improvement of patient records, mainly at the primary care level, the historical inadequacy of payment models and the increasing number of IDS were three important factors that drove us to develop new approaches to risk adjustment in our country. Methods - The work was divided into two steps: the development of a pharmacy-based model in Portugal and the proposal of a risk-adjustment model for financing IDS. In the first step an expert panel was formed to classify more than 33,000 codes included in the Portuguese national pharmacy codes into 33 chronic conditions. The study included the population of the Alentejo Region in Portugal (N=441,550 patients) during 2007 and 2008.
Using pharmacy data extracted from three databases (prescriptions, private pharmacies and hospital ambulatory pharmacies), we estimated a regression model with Potential Years of Life Lost, complexity, severity and PRx information as independent variables and total cost as the dependent variable. This healthcare financing model was compared with two other models proposed for IDS. Results - The most prevalent chronic conditions are cardiovascular disease (34%), psychiatric disorders (10%) and diabetes (10%). These results are consistent with the National Health Survey. The model apparently has some limitations in identifying patients with rheumatic conditions, since it underestimates their prevalence and future drug expenditure. We obtained an R2 value of 0.45, which is a good value compared with the state of the art. After testing three scenarios we propose a model for financing IDS in Portugal. Conclusion - Drug information is a good alternative to diagnoses for determining the morbidity level of a population from ambulatory care data. This model offers potential benefits for estimating chronic conditions and future drug costs in the Portuguese healthcare system. This information could be important in the resource-allocation decision process, especially concerning risk adjustment and healthcare financing.
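The kind of fit reported above (a least-squares regression of cost on morbidity information, scored by R²) can be sketched with a single-predictor toy example. The data and variable names below are invented; the study used several predictors and reports R² ≈ 0.45.

```python
# Minimal sketch of ordinary least squares with R² as the goodness-of-fit
# measure. Morbidity scores and costs here are made-up illustration data.

def ols_fit(x, y):
    """Slope a and intercept b of y = a*x + b by least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def r_squared(x, y, a, b):
    """Fraction of variance in y explained by the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

morbidity = [0, 1, 1, 2, 3, 5]            # chronic-condition counts (invented)
cost = [50, 120, 90, 210, 260, 480]       # annual drug cost (invented)
a, b = ols_fit(morbidity, cost)
print(round(r_squared(morbidity, cost, a, b), 2))
```

An R² of 0.45 on real claims data, as reported, means the morbidity model explains slightly under half of the cost variance, which is in line with the pharmacy-based risk-adjustment literature the study cites.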
Abstract:
Dissertation for obtaining the Master's degree in Electrical Engineering
Abstract:
A Work Project, presented as part of the requirements for the award of a Master's Degree in Finance from NOVA – School of Business and Economics
Abstract:
In traumatic financial times, both shareholders and the media promptly blame companies for the lack of decent corporate governance mechanisms. Proxy statement proposals have increasingly been used by the more active shareholders to press managers to correct anomalies and restore confidence in financial markets. I examine the proposals of the largest companies in the S&P 500 index after the Lehman Brothers crash and their effect on stock prices. Proposals initiated by shareholders negatively impact the company's stock price, particularly if the proposers are unions, pension funds or institutional investors. I also find that corporate governance proposals harm a firm's market performance, unlike compensation and social policy proposals, whose effects are intangible. The exception to these disappointing attempts to improve companies' conduct lies in proposals sponsored jointly by several investors.
Abstract:
Many regions of the world, including inland lakes, present with suboptimal conditions for the remotely sensed retrieval of optical signals, thus challenging the limits of available satellite data-processing tools, such as atmospheric correction models (ACM) and water constituent-retrieval (WCR) algorithms. Working in such regions, however, can improve our understanding of remote-sensing tools and their applicability in new contexts, in addition to potentially offering useful information about aquatic ecology. Here, we assess and compare 32 combinations of two ACMs, two WCRs, and three binary categories of data quality standards to optimize a remotely sensed proxy of plankton biomass in Lake Kivu. Each parameter set is compared against the available ground-truth match-ups using Spearman's right-tailed ρ. Focusing on the best sets from each ACM-WCR combination, their performances are discussed with regard to data distribution, sample size, spatial completeness, and seasonality. The results of this study may be of interest both for ecological studies on Lake Kivu and for epidemiological studies of disease, such as cholera, the dynamics of which has been associated with plankton biomass in other regions of the world.
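The ranking-based comparison described above can be sketched as follows: Spearman's ρ is the Pearson correlation of the two samples' ranks, and a right-tailed test then asks whether ρ is significantly greater than zero. The satellite and in-situ values below are invented illustration data, not from the Lake Kivu study.

```python
# Illustrative Spearman rank correlation, computed from scratch.
# Ties receive their average rank, as in the standard definition.

def ranks(values):
    """1-based average ranks, with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = mean_rank
        i = j + 1
    return r

def spearman_rho(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

ground_truth = [0.8, 1.5, 2.9, 3.1, 5.0]  # in-situ plankton proxy (invented)
satellite = [0.6, 1.9, 2.5, 3.8, 4.9]     # retrieved values (invented)
print(round(spearman_rho(ground_truth, satellite), 2))  # 1.0: same ordering
```

Because only the ordering of values matters, ρ is robust to the monotone distortions that imperfect atmospheric correction can introduce, which is presumably why a rank statistic suits this setting.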
Abstract:
This master's thesis examines the WAP Push framework. The WAP standards define how Internet-style services, usable from a variety of mobile terminals, are implemented in an efficient and network-technology-independent way. WAP builds on the Internet but takes into account the limitations and special characteristics of small terminals and mobile networks. The WAP Push framework defines network-initiated delivery of service content. The theoretical part of the work reviews the general WAP architecture and the WAP protocol stack, using the wired Internet's architecture and protocol stack as points of comparison, and on that basis introduces the WAP Push framework. The practical part describes the design and development of a WAP Push proxy gateway, a central network element in the WAP Push framework. The WAP Push proxy gateway connects the Internet and the mobile network in a way that hides the technology differences from the service provider on the Internet side.
Abstract:
Maintenance is part of system development, and it is possible to develop operation models for accomplishing maintenance tasks. These models can be applied to individual maintenance tasks, maintenance projects and version management. Good operation models make maintenance more effective and help in managing various changes. The purpose of this thesis was to develop a maintenance process that can be used to administer network servers remotely. This consisted of defining the operation models and technical specifications that make it possible to set up, change-manage, maintain and monitor the resources of information systems located at several different sites. First, the needs of the process were determined and requirements were defined based on those needs. The role of processes in information system maintenance, maintenance workflows and their challenges were studied. Current practical problems and disadvantages of maintenance work were then analyzed in order to focus development on the right issues. Because the available operation models did not cover all of the identified needs, a new maintenance process fulfilling the requirements was developed.
Abstract:
The aims of this study were to validate an international Health-Related Quality of Life (HRQL) instrument, to describe child self-assessed and parent-proxy-assessed HRQL at child ages 10 to 12, and to compare child self-assessments with parent-proxy assessments and school nursing documentation. The study is part of the Schools on the Move research project. In phase one, a cross-cultural translation and validation process was performed to develop a Finnish version of the Pediatric Quality of Life Inventory™ 4.0 (PedsQL™ 4.0). The process included a two-way translation, cognitive interviews (children n=7, parents n=5) and a survey (children n=1097, parents n=999). In phase two, baseline and follow-up surveys (children n=986, parents n=710) were conducted to describe and compare child self-assessed and parent-proxy-assessed HRQL in school children between the ages of 10 and 12. Phase three included two separate data sets: school-nurse-documented patient records (children n=270) and a survey (children n=986). The relation between child self-assessed HRQL and school nursing documentation was evaluated. The validity and reliability of the Finnish version of PedsQL™ 4.0 were good (Child Self-Report α=0.91, Parent-Proxy Report α=0.88). Children reported lower HRQL scores on the emotional (mean 76/80) than the physical (mean 85/89) health domains, and significantly lower scores at age 10 than at age 12 (dMean=4, p<0.001). Agreement between child self and parent-proxy assessments was weak (r=0.4, p<0.001) but increased as the child grew from age 10 to 12 years. At health check-ups, school nurses frequently documented children's physical health, such as growth (97%) and posture (98/99%), but seldom emotional issues, such as mood (2/7%). The PedsQL™ 4.0 is a valid instrument for assessing HRQL in Finnish school children, although further research is recommended. Children's emotional wellbeing needs further attention. HRQL scores increase between childhood and adolescence.
Concordance between child self-assessed and parent-proxy-assessed HRQL is low. School nursing documentation related to child health check-ups is not in line with child self-assessed HRQL, and emotional issues need more attention.
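The reliability figures quoted above (α=0.91 and α=0.88) are Cronbach's alpha, the standard internal-consistency coefficient: α = k/(k-1) · (1 − Σσ²ᵢ / σ²ₜ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₜ the variance of the total score. The item responses below are invented 5-point ratings used only to show the computation; they are not the study's data.

```python
# Illustrative computation of Cronbach's alpha from per-item responses.
# Rows are questionnaire items, columns are respondents (invented data).

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of responses per questionnaire item."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
    [4, 3, 5, 2, 3],
]
print(round(cronbach_alpha(items), 2))
```

When items move together across respondents, the total-score variance dominates the summed item variances and α approaches 1; values around 0.9, as reported here, indicate good internal consistency.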