953 results for component isolation, system call interpositioning, hardware virtualization, application isolation
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The search for patterns or motifs in data represents an area of key interest to many researchers. In this paper we present the Motif Tracking Algorithm, a novel immune inspired pattern identification tool that is able to identify unknown motifs which repeat within time series data. The power of the algorithm is derived from its use of a small number of parameters with minimal assumptions. The algorithm searches from a completely neutral perspective that is independent of the data being analysed and the underlying motifs. In this paper the motif tracking algorithm is applied to the search for patterns within sequences of low level system calls between the Linux kernel and the operating system’s user space. The MTA is able to compress data found in large system call data sets to a limited number of motifs which summarise that data. The motifs provide a resource from which a profile of executed processes can be built. The potential for these profiles and new implications for security research are highlighted. A higher level system call language for measuring similarity between patterns of such calls is also suggested.
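As a rough illustration of the kind of compression described above (a minimal sketch only, not the immune-inspired MTA itself), the following C program counts fixed-length subsequences (k-grams) that repeat within a symbolic system call trace; the whitespace-separated trace format, the motif length K and the repeat threshold MIN_REP are illustrative assumptions, not parameters taken from the paper.

```c
/* Minimal sketch (not the MTA): report fixed-length subsequences (k-grams)
 * that repeat within a symbolic system call trace read from stdin. */
#include <stdio.h>
#include <string.h>

#define MAX_CALLS 4096
#define NAME_LEN  32
#define K         3      /* motif length, in system calls (illustrative) */
#define MIN_REP   2      /* report motifs occurring at least this often  */

int main(void)
{
    static char trace[MAX_CALLS][NAME_LEN];
    int n = 0;

    /* Read the system call names that make up the trace. */
    while (n < MAX_CALLS && scanf("%31s", trace[n]) == 1)
        n++;

    /* For every k-gram, count how often the same k-gram recurs later. */
    for (int i = 0; i + K <= n; i++) {
        /* Skip k-grams already reported from an earlier position. */
        int seen = 0;
        for (int p = 0; p < i && !seen; p++) {
            int same = 1;
            for (int j = 0; j < K; j++)
                if (strcmp(trace[p + j], trace[i + j]) != 0) { same = 0; break; }
            seen = same;
        }
        if (seen)
            continue;

        int count = 1;
        for (int q = i + 1; q + K <= n; q++) {
            int same = 1;
            for (int j = 0; j < K; j++)
                if (strcmp(trace[q + j], trace[i + j]) != 0) { same = 0; break; }
            count += same;
        }

        if (count >= MIN_REP) {
            printf("motif x%d:", count);
            for (int j = 0; j < K; j++)
                printf(" %s", trace[i + j]);
            printf("\n");
        }
    }
    return 0;
}
```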
Abstract:
Prediction of carbohydrate fractions using equations from the Cornell Net Carbohydrate and Protein System (CNCPS) is a valuable tool to assess the nutritional value of forages. In this paper these carbohydrate fractions were predicted using data from three sunflower (Helianthus annuus L.) cultivars, fresh or as silage. The CNCPS equations for fractions B2 and C include measurement of ash- and protein-free neutral detergent fibre (NDF) as one of their components. However, NDF lacks pectin and other non-starch polysaccharides that are found in the cell wall (CW) matrix, so this work compared the use of a crude CW preparation instead of NDF in the CNCPS equations. There were no differences in the estimates of fractions B1 and C when CW replaced NDF; however, there were differences in fractions A and B2. Some of the CNCPS equations could be simplified when using CW instead of NDF. Notably, lignin could be expressed as a proportion of DM, rather than on the basis of ash- and protein-free NDF, when predicting CNCPS fraction C. The CNCPS fraction B1 (starch + pectin) values were lower than the pectin content determined through wet chemistry. This finding, along with the results obtained by the substitution of CW for NDF in the CNCPS equations, suggests that pectin was not part of fraction B1 but present in fraction A. We suggest that pectin and other non-starch polysaccharides that are dissolved by the neutral detergent solution be allocated to a specific fraction (B2) and that another fraction (B3) be adopted for the digestible cell wall carbohydrates.
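For orientation, the commonly cited form of the CNCPS fraction-C equation referred to above is shown below, together with the simplification suggested in the abstract; the coefficient and exact formulation used in the paper are an assumption here, not taken from it.

```latex
% Commonly cited CNCPS form (assumed here; the paper's exact equations may differ):
% fraction C (unavailable cell wall) taken as 2.4 times lignin on an NDF basis.
\[
  C \;=\; 2.4 \times \mathrm{Lignin}\,(\%\ \text{of ash- and protein-free NDF})
        \times \frac{\mathrm{NDF}\,(\%\ \text{of DM})}{100}
  \qquad (\%\ \text{of DM})
\]
% Simplification discussed in the abstract: the two NDF terms cancel, leaving
\[
  C \;=\; 2.4 \times \mathrm{Lignin}\,(\%\ \text{of DM}).
\]
```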
Abstract:
Despite its widespread use, the Coale-Demeny model life table system does not capture the extensive variation in age-specific mortality patterns observed in contemporary populations, particularly those of the countries of Eastern Europe and populations affected by HIV/AIDS. Although relational mortality models, such as the Brass logit system, can identify these variations, these models show systematic bias in their predictive ability as mortality levels depart from the standard. We propose a modification of the two-parameter Brass relational model. The modified model incorporates two additional age-specific correction factors (gamma(x) and theta(x)) based on mortality levels among children and adults, relative to the standard. Tests of predictive validity show deviations in age-specific mortality rates predicted by the proposed system to be 30-50 per cent lower than those predicted by the Coale-Demeny system and 15-40 per cent lower than those predicted using the original Brass system. The modified logit system is a two-parameter system, parameterized using values of l(5) and l(60).
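For reference, the standard two-parameter Brass relational logit system mentioned above is shown below, together with a schematic form of the proposed modification; the functions f and g are hypothetical placeholders for the published child- and adult-mortality correction terms, not the authors' exact formulation.

```latex
% Brass logit transform of survivorship l(x) and the two-parameter relational
% model linking an observed schedule to a standard schedule l^s(x):
\[
  Y(x) \;=\; \tfrac{1}{2}\,\ln\!\left(\frac{1 - l(x)}{l(x)}\right),
  \qquad
  Y(x) \;=\; \alpha + \beta\, Y^{s}(x).
\]
% Schematic form of the modified system: age-specific correction factors
% gamma(x) and theta(x) weight deviations of child and adult survivorship
% (l(5), l(60)) from the standard; f and g are placeholders, not exact terms.
\[
  Y(x) \;=\; \alpha + \beta\, Y^{s}(x)
           + \gamma(x)\, f\!\big(l(5),\, l^{s}(5)\big)
           + \theta(x)\, g\!\big(l(60),\, l^{s}(60)\big).
\]
```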
Abstract:
Dissertation submitted to obtain the Master's degree in Informatics Engineering
Abstract:
Report for the scientific sojourn carried out at the Institute for Computational Molecular Science of Temple University, United States, from 2010 to 2012. Two-component systems (TCS) are used by pathogenic bacteria to sense the environment within a host and activate mechanisms related to virulence and antimicrobial resistance. A prototypical example is the PhoQ/PhoP system, which is the major regulator of virulence in Salmonella. Hence, PhoQ is an attractive target for the design of new antibiotics against foodborne diseases. Inhibition of PhoQ-mediated bacterial virulence does not result in growth inhibition, presenting less selective pressure for the generation of antibiotic resistance. Moreover, PhoQ is a histidine kinase (HK) and it is absent in animals. Nevertheless, the design of satisfactory HK inhibitors has proven to be a challenge. To compete with intracellular ATP concentrations, the affinity of an HK inhibitor must be in the micromolar-nanomolar range, whereas the current lead compounds have at best millimolar affinities. Moreover, drug selectivity depends on the conformation of a highly variable loop, referred to as the “ATP-lid”, which is difficult to study by X-ray crystallography due to its flexibility. I have investigated the binding of different HK inhibitors to PhoQ. In particular, all-atom molecular dynamics simulations have been combined with enhanced sampling techniques in order to provide structural and dynamic information on the conformation of the ATP-lid. Transient interactions between these drugs and the ATP-lid have been identified, and the free energy of the different binding modes has been estimated. The results obtained pinpoint the importance of protein flexibility in HK-inhibitor binding and constitute a first step towards developing more potent and selective drugs. The computational resources of the hosting institution, as well as the experience of the members of the group in drug binding and free energy methods, have been crucial to carry out this work.
Abstract:
A novel two-component system, CbrA-CbrB, was discovered in Pseudomonas aeruginosa; cbrA and cbrB mutants of strain PAO were found to be unable to use several amino acids (such as arginine, histidine and proline), polyamines and agmatine as sole carbon and nitrogen sources. These mutants were also unable to use, or used poorly, many other carbon sources, including mannitol, glucose, pyruvate and citrate. A 7 kb EcoRI fragment carrying the cbrA and cbrB genes was cloned and sequenced. The cbrA and cbrB genes encode a sensor/histidine kinase (Mr 108 379, 983 residues) and a cognate response regulator (Mr 52 254, 478 residues) respectively. The amino-terminal half (490 residues) of CbrA appears to be a sensor membrane domain, as predicted by 12 possible transmembrane helices, whereas the carboxy-terminal part shares homology with the histidine kinases of the NtrB family. The CbrB response regulator shows similarity to the NtrC family members. Complementation and primer extension experiments indicated that cbrA and cbrB are transcribed from separate promoters. In cbrA or cbrB mutants, as well as in the allelic argR9901 and argR9902 mutants, the aot-argR operon was not induced by arginine, indicating an essential role for this two-component system in the expression of the ArgR-dependent catabolic pathways, including the aruCFGDB operon specifying the major aerobic arginine catabolic pathway. The histidine catabolic enzyme histidase was not expressed in cbrAB mutants, even in the presence of histidine. In contrast, proline dehydrogenase, responsible for proline utilization (Pru), was expressed in a cbrB mutant at a level comparable with that of the wild-type strain. When succinate or other C4-dicarboxylates were added to proline medium at 1 mM, the cbrB mutant was restored to a Pru+ phenotype. Such a succinate-dependent Pru+ property was almost abolished by 20 mM ammonia. In conclusion, the CbrA-CbrB system controls the expression of several catabolic pathways and, perhaps together with the NtrB-NtrC system, appears to ensure the intracellular carbon: nitrogen balance in P. aeruginosa.
Abstract:
Modern computer systems are plagued with stability and security problems: applications lose data, web servers are hacked, and systems crash under heavy load. Many of these problems or anomalies arise from rare program behavior caused by attacks or errors. A substantial percentage of web-based attacks are due to buffer overflows. Many methods have been devised to detect and prevent the anomalous situations that arise from buffer overflows. The current state of the art in anomaly detection systems is relatively primitive and mainly depends on static code checking to take care of buffer overflow attacks. For protection, Stack Guards and Heap Guards are also widely used. This dissertation proposes an anomaly detection system based on the frequencies of system calls in the system call trace. System call traces represented as frequency sequences are profiled using sequence sets. A sequence set is identified by the starting sequence and the frequencies of specific system calls. The deviation of the current input sequence from the corresponding normal profile in the frequency pattern of system calls is computed and expressed as an anomaly score. A simple Bayesian model is used for accurate detection. Experimental results are reported which show that the frequency of system calls, represented using sequence sets, captures the normal behavior of programs under normal conditions of usage. This captured behavior allows the system to detect anomalies with a low rate of false positives. Data are presented which show that a Bayesian network over frequency variations responds effectively to induced buffer overflows. It can also help administrators detect deviations in program flow introduced by errors.
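To make the frequency-deviation idea concrete (a minimal sketch under assumed file formats, not the dissertation's sequence-set or Bayesian model), the following C program compares the system call frequencies of a trace against a previously recorded "normal" profile and reports an L1-distance anomaly score; the profile format, trace format and threshold policy are illustrative assumptions.

```c
/* Minimal sketch: score a system call trace by how far its call frequencies
 * deviate from a recorded normal profile (L1 distance). */
#include <stdio.h>
#include <string.h>

#define MAX_NAMES 256
#define NAME_LEN  32

static char   names[MAX_NAMES][NAME_LEN];
static double normal_freq[MAX_NAMES];  /* relative frequency in the normal profile */
static long   test_count[MAX_NAMES];   /* counts observed in the trace under test  */
static int    n_names;

/* Return the index of a system call name, registering it if unseen. */
static int name_index(const char *s)
{
    for (int i = 0; i < n_names; i++)
        if (strcmp(names[i], s) == 0)
            return i;
    if (n_names == MAX_NAMES)
        return -1;
    strncpy(names[n_names], s, NAME_LEN - 1);
    return n_names++;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s normal_profile.txt test_trace.txt\n", argv[0]);
        return 1;
    }

    /* Normal profile: one "<call_name> <relative_frequency>" pair per line. */
    FILE *prof = fopen(argv[1], "r");
    char name[NAME_LEN];
    double f;
    while (prof && fscanf(prof, "%31s %lf", name, &f) == 2) {
        int i = name_index(name);
        if (i >= 0)
            normal_freq[i] = f;
    }
    if (prof)
        fclose(prof);

    /* Trace under test: one system call name per line. */
    FILE *trace = fopen(argv[2], "r");
    long total = 0;
    while (trace && fscanf(trace, "%31s", name) == 1) {
        int i = name_index(name);
        if (i >= 0) {
            test_count[i]++;
            total++;
        }
    }
    if (trace)
        fclose(trace);

    /* Anomaly score: L1 distance between observed and normal frequencies. */
    double score = 0.0;
    for (int i = 0; i < n_names; i++) {
        double observed = total ? (double)test_count[i] / (double)total : 0.0;
        double diff = observed - normal_freq[i];
        score += diff < 0 ? -diff : diff;
    }
    printf("anomaly score = %.4f (flag the trace if this exceeds a chosen threshold)\n", score);
    return 0;
}
```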
Abstract:
This paper discusses our research in developing a generalized and systematic method for anomaly detection. The key ideas are to represent normal program behaviour using system call frequencies and to incorporate probabilistic techniques for classification to detect anomalies and intrusions. Using experiments on the sendmail system call data, we demonstrate that concise and accurate classifiers can be constructed to detect anomalies. An overview of the approach that we have implemented is provided.
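One simple instance of the probabilistic classification idea described here (the paper does not specify this exact model; a multinomial naive Bayes rule is used below purely as an illustration) is to label a trace by comparing class-conditional likelihoods of its system call counts:

```latex
% Multinomial naive Bayes over system call counts n_i (one illustrative choice
% of probabilistic classifier, not necessarily the paper's): pick the class
% (normal vs. intrusion) with the larger posterior.
\[
  \hat{c} \;=\; \arg\max_{c \in \{\text{normal},\, \text{intrusion}\}}
  \Bigl[\, \log P(c) \;+\; \sum_{i} n_i \log P(\mathrm{call}_i \mid c) \,\Bigr]
\]
% where n_i is the number of occurrences of system call i in the trace and the
% per-call probabilities P(call_i | c) are estimated from labelled training traces.
```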
Abstract:
The DcuS-DcuR system of Escherichia coli is a two-component sensor-regulator that controls gene expression in response to external C4-dicarboxylates and citrate. The DcuS protein is particularly interesting since it contains two PAS domains, namely a periplasmic C4-dicarboxylate-sensing PAS domain (PASp) and a cytosolic PAS domain (PASc) of uncertain function. For a study of the role of the PASc domain, three different fragments of DcuS were overproduced and examined: PASc-kinase, PASc, and kinase. The two kinase-domain-containing fragments were autophosphorylated by [γ-32P]ATP. The rate was not affected by fumarate or succinate, supporting the role of the PASp domain in C4-dicarboxylate sensing. Both of the phosphorylated DcuS constructs were able to rapidly pass their phosphoryl groups to DcuR, and after phosphorylation, DcuR dephosphorylated rapidly. No prosthetic group or significant quantity of metal was found associated with either of the PASc-containing proteins. The DNA-binding specificity of DcuR was studied by use of the pure protein. It was found to be converted from a monomer to a dimer upon acetylphosphate treatment, and native polyacrylamide gel electrophoresis suggested that it can oligomerize. DcuR specifically bound to the promoters of the three known DcuSR-regulated genes (dctA, dcuB, and frdA), with apparent KD values of 6 to 32 µM for untreated DcuR and less than or equal to 1 to 2 µM for the acetylphosphate-treated form. The binding sites were located by DNase I footprinting, allowing a putative DcuR-binding motif [tandemly repeated (T/A)(A/T)(T/C)(A/T)AA sequences] to be identified. The DcuR-binding sites of the dcuB, dctA, and frdA genes were located 27, 94, and 86 bp, respectively, upstream of the corresponding +1 sites, and a new promoter was identified for dcuB that responds to DcuR.
Abstract:
The simple gas ethylene affects numerous physiological processes in the growth and development of higher plants. With the use of molecular genetic approaches, we are beginning to learn how plants perceive ethylene and how this signal is transduced. Components of ethylene signal transduction are defined by ethylene response mutants in Arabidopsis thaliana. The genes corresponding to two of these mutants, etr1 and ctr1, have been cloned. The ETR1 gene encodes a homolog of two-component regulators that are known almost exclusively in prokaryotes. The two-component regulators in prokaryotes are involved in the perception and transduction of a wide range of environmental signals leading to adaptive responses. The CTR1 gene encodes a homolog of the Raf family of serine/threonine protein kinases. Raf is part of a mitogen-activated protein kinase cascade known to regulate cell growth and development in mammals, worms, and flies. The ethylene response pathway may, therefore, exemplify a conserved protein kinase cascade regulated by a two-component system. The dominance of all known mutant alleles of ETR1 may be due to either constitutive activation of the ETR1 protein or dominant interference with wild-type activity. The discovery of Arabidopsis genes encoding proteins related to ETR1 suggests that the failure to recover recessive etr1 mutant alleles may be due to the presence of redundant genes.
Abstract:
An international standard, ISO/DP 9459-4, has been proposed to establish a uniform standard of quality for small, factory-made solar heating systems. In this proposal, system components are tested separately and total system performance is calculated using system simulations based on component model parameter values validated using the results from the component tests. Another approach is to test the whole system in operation under representative conditions, where the results can be used as a measure of the general system performance. The advantage of system testing of this form is that it is not dependent on simulations and the possible inaccuracies of the models. Its disadvantage is that it is restricted to the boundary conditions for the test. Component testing and system simulation is flexible, but requires an accurate and reliable simulation model. The heat store is a key component concerning system performance. Thus, this work focuses on the storage system consisting of the store, electrical auxiliary heater, heat exchangers and tempering valve. Four different storage system configurations with a volume of 750 litres were tested in an indoor system test using a six-day test sequence. A store component test and system simulation were carried out on one of the four configurations, applying the proposed standard for stores, ISO/DP 9459-4A. Three newly developed test sequences for internal load-side heat exchangers, not in the proposed ISO standard, were also carried out. The MULTIPORT store model was used for this work. This paper discusses the results of the indoor system test, the store component test, the validation of the store model parameter values and the system simulations.
Abstract:
The technology of partial virtualization is a revolutionary approach to the world of virtualization. It lies directly in between full system virtual machines (like QEMU or Xen) and application-level virtual machines (like the JVM or the CLR). The ViewOS project is the flagship of this technique, developed by the Virtual Square laboratory, created to provide an abstract view of the underlying system resources on a per-process basis and to work against the principle of the Global View Assumption. Virtual Square provides several different methods to achieve partial virtualization within the ViewOS system, both at user and kernel levels. Each of these approaches has its own advantages and shortcomings. This paper provides an analysis of the different virtualization methods and of problems related to both the generic and partial virtualization worlds. This paper is the result of an in-depth study and research into a new technology to be employed to provide partial virtualization based on ELF dynamic binaries. It starts with a brief analysis of currently available virtualization alternatives and then goes on to describe the ViewOS system, highlighting its current shortcomings. The vloader project is then proposed as a possible solution to some of these inconveniences, with a working proof of concept and examples to outline the potential of this new virtualization technique. By injecting specific code and libraries in the middle of the binary loading mechanism provided by the ELF standard, the vloader project offers a streamlined and simplified approach to tracing system calls. With the advantages outlined in this paper, this method presents better performance and portability compared to the currently available ViewOS implementations. Furthermore, some of its disadvantages are also discussed, along with their possible solutions.
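As a point of comparison with the loader-level approach described above (this is not the vloader mechanism itself, only a minimal sketch of the related, well-known LD_PRELOAD technique, and it intercepts the libc wrapper rather than the raw system call), a shared object such as the following can be preloaded to log open() calls made by a dynamically linked program:

```c
/* Minimal sketch of library-level call interposition via LD_PRELOAD. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdarg.h>
#include <stdio.h>
#include <fcntl.h>
#include <sys/types.h>

typedef int (*open_fn)(const char *, int, ...);

/* Replacement for the libc open() wrapper: log the call, then forward it
 * to the real implementation looked up with RTLD_NEXT. */
int open(const char *path, int flags, ...)
{
    static open_fn real_open = NULL;
    if (!real_open)
        real_open = (open_fn)dlsym(RTLD_NEXT, "open");

    /* The third argument (mode) is only meaningful when O_CREAT is set. */
    mode_t mode = 0;
    if (flags & O_CREAT) {
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }

    fprintf(stderr, "[interpose] open(\"%s\", 0x%x)\n", path, flags);
    return real_open(path, flags, mode);
}
```

Built with something like `gcc -shared -fPIC -o interpose.so interpose.c -ldl` and run as `LD_PRELOAD=./interpose.so some_program` (file names here are illustrative), this sketch shows the kind of per-process call interception that ViewOS-style partial virtualization generalizes and that the vloader moves into the ELF loading mechanism itself.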
Abstract:
Mixed-criticality systems emerge as a suitable solution for dealing with the complexity, performance and cost of future embedded and dependable systems. However, this paradigm adds additional complexity to their development. This paper proposes an approach for dealing with this scenario that relies on hardware virtualization and Model-Driven Engineering (MDE). Hardware virtualization ensures isolation between subsystems with different criticality levels. MDE is intended to bridge the gap between design issues and partitioning concerns. MDE tooling will enhance the functional models by annotating them with partitioning and extra-functional properties. System partitioning and subsystem allocation will be generated with a high degree of automation. The system configuration will be validated to ensure that the resources assigned to a partition are sufficient for executing the allocated software components and that timing requirements are met.
Abstract:
With the ever-growing trend of smart phones and tablets, Android is becoming more and more popular every day. With more than one billion active users to date, Android is the leading technology in the smart phone arena. In addition, Android also runs on Android TV, Android smart watches and cars. Therefore, in recent years, Android applications have become one of the major development sectors in the software industry. As of mid 2013, the number of published applications on Google Play had exceeded one million and the cumulative number of downloads was more than 50 billion. A 2013 survey also revealed that 71% of mobile application developers work on developing Android applications. Considering this volume of Android applications, it is quite evident that people rely on these applications on a daily basis for the completion of simple tasks, like keeping track of the weather, as well as rather complex tasks, like managing one's bank accounts. Hence, like every other kind of code, Android code also needs to be verified in order to work properly and achieve a certain confidence level. Because of the gigantic number of applications, it becomes really hard to manually test Android applications, especially when they have to be verified for various versions of the OS and for various device configurations, such as different screen sizes and different hardware availability. Hence, recently there has been a lot of work in the computer science community on developing different testing methods for Android applications. The Android model attracts researchers because of its open-source nature: the whole research workflow is more streamlined when the code for both the application and the platform is readily available to analyze. Hence, there has been a great deal of research in testing and static analysis of Android applications, much of it focused on input test generation. As a result, there are now several testing tools available which focus on the automatic generation of test cases for Android applications. These tools differ from one another in the strategies and heuristics they use for generating test cases. But there is still very little work on the comparison of these testing tools and the strategies they use. Recently, some research work has been carried out in this regard that compared the performance of various available tools with respect to their code coverage, fault detection, ability to work on multiple platforms and ease of use. This was done by running these tools on a total of 60 real-world Android applications. The results of this research showed that, although effective, the strategies used by these tools also face limitations and hence have room for improvement. The purpose of this thesis is to extend this research in a more specific, attribute-oriented way. Attributes refer to the tasks that can be completed using the Android platform. An attribute can be anything, ranging from a basic system call for receiving an SMS to a more complex task like sending the user to another application from the current one. The idea is to develop a benchmark for Android testing tools which is based on performance related to these attributes. This will allow the comparison of these tools with respect to these attributes. For example, if there is an application that plays some audio file, will the testing tool be able to generate a test input that will warrant the execution of this audio file?
Using multiple applications that exercise different attributes, it can be seen which testing tool is more useful for which kinds of attributes. In this thesis, it was decided that 9 attributes covering the basic nature of tasks would be targeted for the assessment of three testing tools. Later, this can be done for many more attributes to compare even more testing tools. The aim of this work is to show that this approach is effective and can be used on a much larger scale. One of the flagship features of this work, which also differentiates it from the previous work, is that the applications used are all specially made for this research. The reason for doing that is to analyze, in isolation, just the specific attribute which the application is focused on, and not allow the tool to get bottlenecked by something trivial which is not the main attribute under test. This means 9 applications, each focused on one specific attribute. The main contributions of this thesis are:
• A summary of the three existing testing tools and their respective techniques for automatic test input generation for Android applications.
• A detailed study of the usage of these testing tools using the 9 applications specially designed and developed for this study.
• An analysis of the results of the study carried out, and a comparison of the performance of the selected tools.