641 results for LEVERAGE
Abstract:
Many systems and applications continuously produce events. These events record the status of the system and trace its behavior. By examining these events, system administrators can check for potential problems in these systems. If the temporal dynamics of the systems are further investigated, the underlying patterns can be discovered. The uncovered knowledge can be leveraged to predict future system behavior or to mitigate potential risks. Moreover, system administrators can use the temporal patterns to set up event management rules that make the system more intelligent. With the popularity of data mining techniques in recent years, these events have gradually become more and more useful. Despite recent advances in data mining techniques, their application to system event mining is still in a rudimentary stage. Most existing work still focuses on episode mining or frequent pattern discovery. These methods are unable to provide a brief yet comprehensible summary that reveals the valuable information from a high-level perspective. Moreover, they provide little actionable knowledge to help system administrators better manage the systems. To make better use of the recorded events, more practical techniques are required. From the perspective of data mining, three correlated directions are considered helpful for system management: (1) provide concise yet comprehensive summaries of the running status of the systems; (2) make the systems more intelligent and autonomous; (3) effectively detect abnormal system behavior. Due to the richness of the event logs, all these directions can be pursued in a data-driven manner, enhancing the robustness of the systems and approaching the goal of autonomous management. This dissertation mainly focuses on the foregoing directions, leveraging temporal mining techniques to facilitate system management. More specifically, three concrete topics are discussed: event summarization, resource demand prediction, and streaming anomaly detection. Besides the theoretical contributions, experimental evaluations are also presented to demonstrate the effectiveness and efficiency of the corresponding solutions.
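As a rough illustration of the kind of streaming anomaly detection referred to above (this is a generic sketch, not the dissertation's method; the window size, threshold, and event-count representation are all assumptions):

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(event_counts, window=60, threshold=3.0):
    """Flag time buckets whose event count deviates strongly from the
    recent rolling window, using a simple z-score rule."""
    history = deque(maxlen=window)
    anomalies = []
    for t, count in enumerate(event_counts):
        if len(history) >= 10:  # need enough history to estimate spread
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(count - mu) / sigma > threshold:
                anomalies.append((t, count))
        history.append(count)
    return anomalies

# Example: a burst of events stands out against a calm baseline.
counts = [5, 6, 4, 5, 7, 5, 6, 4, 5, 6] * 3 + [60] + [5, 6, 4] * 3
print(detect_anomalies(counts))
```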
Abstract:
Kernel-level malware is one of the most dangerous threats to the security of users on the Internet, so there is an urgent need for its detection. The most popular detection approach is misuse-based detection. However, it cannot catch up with today's advanced malware, which increasingly applies polymorphism and obfuscation. In this thesis, we present our integrity-based detection for kernel-level malware, which does not rely on the specific features of malware. We have developed an integrity analysis system that can derive and monitor integrity properties for commodity operating system kernels. In our system, we focus on two classes of integrity properties: data invariants and the integrity of Kernel Queue (KQ) requests. We adopt static analysis for data invariant detection and overcome several technical challenges: field sensitivity, array sensitivity, and pointer analysis. We identify data invariants that are critical to system runtime integrity from the Linux kernel 2.4.32 and the Windows Research Kernel (WRK) with very low false positive and false negative rates. We then develop an Invariant Monitor to guard these data invariants against real-world malware. In our experiments, we are able to use Invariant Monitor to detect ten real-world Linux rootkits, nine real-world Windows malware samples, and one synthetic Windows malware sample. We leverage static and dynamic analysis of the kernel and device drivers to learn the legitimate KQ requests. Based on the learned KQ requests, we build KQguard to protect KQs. At runtime, KQguard rejects all unknown KQ requests that cannot be validated. We apply KQguard to the WRK and the Linux kernel, and extensive experimental evaluation shows that KQguard is efficient (up to 5.6% overhead) and effective (capable of achieving zero false positives against representative benign workloads after appropriate training, and very low false negatives against 125 real-world malware samples and nine synthetic attacks). In our system, Invariant Monitor and KQguard cooperate to protect data invariants and KQs in the target kernel. By monitoring these integrity properties, we can detect malware through its violation of these properties during execution.
Abstract:
The increasing amount of available semistructured data demands efficient mechanisms to store, process, and search an enormous corpus of data to encourage its global adoption. Current techniques for storing semistructured documents either map them to relational databases or use a combination of flat files and indexes. These two approaches result in a mismatch between the tree structure of semistructured data and the access characteristics of the underlying storage devices. Furthermore, the inefficiency of XML parsing methods has slowed down the large-scale adoption of XML in actual system implementations. The recent development of lazy parsing techniques is a major step towards improving this situation, but lazy parsers still have significant drawbacks that hinder the widespread adoption of XML. Once the processing (storage and parsing) issues for semistructured data have been addressed, another key challenge in leveraging semistructured data is to perform effective information discovery on such data. Previous works have addressed this problem in a generic (i.e., domain-independent) way, but the process can be improved if knowledge about the specific domain is taken into consideration. This dissertation had two general goals. The first goal was to devise novel techniques to efficiently store and process semistructured documents. This goal had two specific aims: we proposed a method for storing semistructured documents that maps the physical characteristics of the documents to the geometrical layout of hard drives, and we developed a Double-Lazy Parser for semistructured documents that introduces lazy behavior in both the pre-parsing and progressive parsing phases of the standard Document Object Model's parsing mechanism. The second goal was to construct a user-friendly and efficient engine for performing Information Discovery over domain-specific semistructured documents. This goal also had two aims: we presented a framework that exploits domain-specific knowledge to improve the quality of the information discovery process by incorporating domain ontologies, and we proposed meaningful evaluation metrics to compare the results of search systems over semistructured documents.
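The dissertation's Double-Lazy Parser is not reproduced here; as a loose, hedged illustration of the general idea of deferring work instead of materializing a full DOM up front, the following Python sketch uses the standard library's incremental parser (a streaming rather than lazy technique) and stops as soon as the element of interest is found:

```python
import xml.etree.ElementTree as ET

def first_match(path, tag):
    """Scan a large XML file incrementally and return the first element
    with the given tag, instead of building the whole document tree."""
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == tag:
            return elem          # caller reads it before moving on
        elem.clear()             # free already-processed subtrees to bound memory
    return None
```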
Abstract:
Developed countries give foreign assistance for many reasons, one of which is the protection of national interests. Foreign aid gives a donor country leverage in international relations and is used as a tool of foreign policy. The United States and Japan are the two largest aid donors in the world. Each of these countries exerts influence over specific regions through foreign assistance. Although the national interests of each country are different, both use foreign aid to protect those interests. This thesis discusses the means by which the United States and Japan use foreign aid in foreign policy, looking specifically at U.S. food aid to Central America and Japanese aid to Asia.
Abstract:
This study investigated the impact of horizontal merger and acquisition (M&A) events on the stock returns of the participating companies and their competitors, regarding the creation or destruction of value for those firms in Brazil from 2001 to 2012. To this end, the event study methodology was first used to estimate abnormal returns in stock prices; a multiple regression analysis was then conducted. The event study results showed that, when the data were split into sub-periods before and after the crisis, the effects on target firms differed: negative before, positive after. For acquirers and competitors, the results were consistent across periods: acquirer returns were close to zero, while competitor returns were negative. Furthermore, the regression results for the bidders showed that firms invested in M&A processes to further increase their efficiency. This study also indicated that the bidder's leverage is important for creating value in acquisitions when the bidder has a higher Tobin's Q. The results for target firms showed that small firms obtained better returns than large firms did.
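For reference, the textbook market-model formulation of the event-study quantities mentioned above (the abstract does not state which expected-return model the study used, so this is only the standard form):

$AR_{i,t} = R_{i,t} - (\hat{\alpha}_i + \hat{\beta}_i R_{m,t}), \qquad CAR_i(t_1, t_2) = \sum_{t=t_1}^{t_2} AR_{i,t}$

where $R_{i,t}$ is firm $i$'s return, $R_{m,t}$ is the market return, and $\hat{\alpha}_i, \hat{\beta}_i$ are estimated over a pre-event window.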
Abstract:
Beamforming is a technique widely used in various fields. With the aid of an antenna array, beamforming aims to minimize the contribution of interferers arriving from unknown directions while capturing the desired signal in a given direction. This thesis proposes beamforming techniques that use Reinforcement Learning (RL), through the Q-Learning algorithm, in antenna arrays. One proposal is to use RL to find the optimal policy for selecting between beamforming (BF) and power control (PC), in order to better leverage the individual characteristics of each for a given Signal-to-Interference-plus-Noise Ratio (SINR). Another proposal is to use RL to determine the optimal policy for switching between the blind beamforming algorithms CMA (Constant Modulus Algorithm) and DD (Decision-Directed) in multipath environments. Simulation results showed that the RL technique can be effective in achieving optimal switching between the different techniques.
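As a minimal, hypothetical sketch of the tabular Q-Learning component (the SINR discretization, reward, and environment dynamics are placeholders, not the thesis's simulation setup):

```python
import random

ACTIONS = ["BF", "PC"]   # choose beamforming or power control at each step
N_STATES = 10            # hypothetical quantization of the measured SINR
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q[s][a]: estimated long-run value of applying technique a in SINR bucket s
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def choose_action(state):
    """Epsilon-greedy selection between the two candidate techniques."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[state][a])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])
```

In a simulation loop, the reward would typically be the SINR gain (or a similar quality measure) observed after applying the chosen technique; that environment step is omitted here.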
Abstract:
This doctoral thesis was conceived with the purpose of understanding, analysing and, above all, modelling the statistical behaviour of financial time series. In this regard, the models that best capture the special characteristics of these series are conditional heteroskedasticity models in discrete time, when the time intervals at which the data are collected allow it, and in continuous time when daily or intraday data are available. To this end, this thesis proposes different Bayesian estimators for the parameters of the discrete-time GARCH model (Bollerslev (1986)) and the continuous-time COGARCH model (Kluppelberg et al. (2004)). Chapter 1 introduces the characteristics of financial series and presents the ARCH, GARCH and COGARCH models, together with their main properties. Mandelbrot (1963) pointed out that financial series are not stationary and that their increments show no autocorrelation, although their squares are correlated. He also noted that their volatility is not constant and that volatility clusters appear. He observed the lack of normality of financial series, due mainly to their leptokurtic behaviour, and also highlighted the seasonal effects these series exhibit, analysing how they are affected by the time of year or the day of the week. Later, Black (1976) completed the list of special characteristics by including the so-called leverage effects, related to how positive and negative fluctuations in asset prices affect the volatility of the series differently.
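For the reader's convenience, the discrete-time GARCH(1,1) specification of Bollerslev (1986) referred to above can be written as:

$r_t = \sigma_t \varepsilon_t, \qquad \varepsilon_t \sim \mathrm{i.i.d.}(0, 1)$
$\sigma_t^2 = \omega + \alpha r_{t-1}^2 + \beta \sigma_{t-1}^2, \qquad \omega > 0,\ \alpha, \beta \ge 0,\ \alpha + \beta < 1$

where the last condition ensures covariance stationarity; the thesis's Bayesian estimators target the parameters $(\omega, \alpha, \beta)$ and their COGARCH analogues.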
Abstract:
The paper develops a novel realized matrix-exponential stochastic volatility model of multivariate returns and realized covariances that incorporates asymmetry and long memory (hereafter the RMESV-ALM model). The matrix exponential transformation guarantees the positive definiteness of the dynamic covariance matrix. The contribution of the paper ties in with Robert Basmann’s seminal work in terms of the estimation of highly non-linear model specifications (“Causality tests and observationally equivalent representations of econometric models”, Journal of Econometrics, 1988, 39(1-2), 69–104), especially for developing tests for leverage and spillover effects in the covariance dynamics. Efficient importance sampling is used to maximize the likelihood function of RMESV-ALM, and the finite sample properties of the quasi-maximum likelihood estimator of the parameters are analysed. Using high frequency data for three US financial assets, the new model is estimated and evaluated. The forecasting performance of the new model is compared with a novel dynamic realized matrix-exponential conditional covariance model. The volatility and co-volatility spillovers are examined via the news impact curves and the impulse response functions from returns to volatility and co-volatility.
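The positive-definiteness guarantee rests on the standard matrix-exponential construction: if the dynamics are specified for a real symmetric matrix $A_t$, then

$\Sigma_t = \exp(A_t) = \sum_{k=0}^{\infty} \frac{A_t^k}{k!}$

is symmetric with eigenvalues $e^{\lambda_i(A_t)} > 0$, and hence positive definite for any symmetric $A_t$, without imposing parameter restrictions.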
Abstract:
Firms’ financial information is essential to stakeholders’ decision making, although financial statements do not always show the firm’s real image. This study examines listed firms from Portugal and the UK. Firms have different purposes for manipulating earnings: some strive to influence investors’ perception of a particular company, while others try to secure a better position for obtaining financing from credit institutions or to pay less tax to the tax authorities. Usually, this behaviour is induced when firms have financial problems. Consequently, the study also aims to assess the impact of the financial crisis on earnings management. We try to answer the question of how the extent of firms’ involvement in earnings management changes when the world undergoes a financial crisis. Furthermore, we compare two countries with different levels of legal enforcement over accounting quality to identify the main differences. We used a panel data methodology to analyse financial data from 2004 to 2014 for listed firms from Portugal and the UK. The Beneish (1999) model was applied to categorize manipulator and non-manipulator firms. Analysing accounting information according to Beneish’s ratios, the findings suggest that the financial crisis had some impact on firms’ tendency to manipulate financial results in the UK, although it is not statistically significant. Moreover, beyond the differences between Portugal and the UK, the results contradict the common view of the legal systems’ quality, as UK firms tend to apply more accounting techniques for manipulation than Portuguese ones. Our main results also confirm that some UK firms manipulate the ratios of receivables’ days, the asset quality index, the depreciation index, leverage, and sales and general administrative expenses, whereas Portuguese firms manipulate only receivables’ days. Finally, we also find that the main reason for manipulating results is neither to influence the cost of obtained funds nor to minimize the tax burden, since net profit does not explain the ratios used in the Beneish model. The results suggest that the main concern of listed firms that manipulate results is to influence financial investors’ perception.
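For context, the eight-variable Beneish (1999) M-score that underlies this classification is commonly reported as

$M = -4.84 + 0.920\,DSRI + 0.528\,GMI + 0.404\,AQI + 0.892\,SGI + 0.115\,DEPI - 0.172\,SGAI + 4.679\,TATA - 0.327\,LVGI$

where LVGI is the leverage index and a score above roughly $-2.22$ flags a likely manipulator; the specific cut-off used in this study is not stated in the abstract.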
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend lots of resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
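A minimal sketch of the augmented-records idea described above (illustrative only; the function and parameter names are hypothetical, and the thesis's actual MCMC machinery is not shown):

```python
import numpy as np
import pandas as pd

def augment_with_marginal_prior(data, var, prior_probs, n_aug):
    """Append n_aug synthetic records whose values of `var` follow the prior
    marginal distribution and whose other variables are left missing.
    Larger n_aug expresses stronger prior confidence relative to the data."""
    levels = list(prior_probs)
    synth = pd.DataFrame(np.nan, index=range(n_aug), columns=data.columns)
    synth[var] = np.random.choice(levels, size=n_aug, p=list(prior_probs.values()))
    return pd.concat([data, synth], ignore_index=True)

# Hypothetical example: encode a prior belief that 52% of respondents are female.
df = pd.DataFrame({"sex": ["M", "F", "F", "M"], "educ": [1, 3, 2, 2]})
augmented = augment_with_marginal_prior(df, "sex", {"M": 0.48, "F": 0.52}, n_aug=200)
```

The synthetic rows carry the prior information about the margin of `var`, while their missing entries are handled by the same latent class imputation steps used for genuinely missing data.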
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
Abstract:
With increasing attention towards the role of information systems (IS) as a vehicle to address environmental issues, IS researchers and practitioners have strived to leverage advanced Green IS innovations to persuade people to engage in environmentally responsible practices and support pro-environmental initiatives. Yet existing research reveals that the persuasion effects of Green IS designs remain equivocal. In particular, many design characteristics advocated in Green IS research can produce bi-directional changes in IS users’ attitudes and behaviours. To address this issue, this thesis drew upon the circumplex model of social values (S.H. Schwartz, 1992) to explain when and how online persuasion designs come to affect people’s judgements on resource conservation and environmental protection. Three sets of working propositions and specific hypotheses were developed. Specifically, this research suggests that the use of an IS application can elicit different value primes and draw IS users’ attention to different motivational functions of engaging in suggested behavioural changes. It is expected that matching online persuasion appeals with IS users’ personal value priorities can increase users’ acceptance of online behavioural suggestions. Second, it is hypothesized that the persuasion effect tends to be weakened as the system users become aware of the value-matching design in a given IS application. Third, it is proposed that different value primes presented in an IS application can result in different unintended effects on IS users’ global pro-environmental attitudes and motivations. The hypotheses were tested in two pilot studies and two full-scale online experiments. The study findings generally support the main predictions of the hypotheses. On the one hand, this thesis provides empirical evidence that IS design for online persuasion can be instrumental in influencing IS users’ judgements on a range of resource conservation practices. On the other hand, this work explains why the effectiveness of IS-enabled online persuasion attempts needs to be measured not only in terms of the intended changes in a target behavioural domain but also in terms of unintended changes in people’s general environmental orientations. Findings in this research may bring a different perspective on understanding and assessing the influence of Green IS applications on IS users’ judgements and behaviour.
Abstract:
This dissertation focuses on industrial policy in two developing countries: Peru and Ecuador. Informed by comparative historical analysis, it explains how the Import-Substitution Industrialization policies promoted during the 1970s by military administrations unravelled over the following 30 years under the guidance of Washington Consensus policies. Positioning political economy in time, the research objectives were two-fold: first, understanding long-term policy reform patterns, including the variables that conditioned cyclical versus path-dependent dynamics of change; and second, investigating the direction and leverage of state institutions supporting the manufacturing sector at the dawn, peak, and consolidation of neoliberal discourse in both countries. Three interconnected causal mechanisms explain the divergence of trajectories: institutional legacies, coordination among actors, and the economic distribution of power. Peru’s long tradition of a minimal state contrasts with the embedded character of Ecuador’s long tradition of legal protectionism dating back to the Liberal Revolution. Peru’s close policy coordination among stakeholders (state technocrats and business elites) differs from Ecuador’s “winners-take-all” approach to policy-making. Peru’s economic dynamism, concentrated in Lima, departs sharply from Ecuador’s competing regional economic leaderships. This dissertation paid particular attention to methodology in order to understand the intersection between structure and agency in policy change. Tracing primary and secondary sources, as well as key pieces of legislation, was critical to understanding key turning points and long-term patterns of change. Open-ended interviews (N=58) with two stakeholder groups (business elites and bureaucrats) complemented the effort to knit together the motives, discourses, and interests behind this long transition. To make sense of this amount of data, this research built an index of policy intervention as a methodological contribution to assessing long-term patterns of policy change. These findings contribute to the current literature on state-market relations and varieties of capitalism, institutional change, and policy reform.
Abstract:
In Portugal, as in other countries, agricultural cooperatives play an important economic role in the food system. Like other economic organizations, agricultural cooperatives have witnessed structural changes in recent decades in terms of governance and management models. Portuguese agricultural cooperatives have been constrained by their context to adopt a traditional model of ownership and control. The main objective of this study was to analyse issues related to the management structure and financial performance of cooperatives, based on data collected from olive oil cooperatives located in the northern interior region of Portugal. Combining a qualitative analysis of structure and decision-making, a financial assessment, and the application of a multi-criteria approach (PROMETHEE II), the results are in line with expectations (for example, low levels of member participation, non-professional management, low profitability ratios, low leverage, and a capacity to meet financial commitments), except for the relationship between professional management and financial performance. The existence of professional management does not lead to better financial results. This finding reinforces the belief that cooperatives structured in different ways have interests that differ from, and conflict with, those of their stakeholders.
Abstract:
This talk explores how the runtime system and operating system can leverage metrics that express the significance and resilience of application components in order to reduce the energy footprint of parallel applications. We will explore in particular how software can tolerate and indeed exploit higher error rates in future processors and memory technologies that may operate outside their safe margins.