962 results for System complexity
Abstract:
File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything takes the form of a file. To protect system files and other sensitive user files from unauthorized access, different organizations choose and deploy particular security schemes in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies which focus on one or more of the security principles: confidentiality, integrity and availability. A security policy is not only about "who" can access an object, but also about "how" a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid/seteuid mechanisms and the root account, there are other protection models, such as Capabilities, Domain Type Enforcement (DTE) and Role-Based Access Control (RBAC), supported and used in certain organizations. These models protect the confidentiality of the data directly; the integrity of the data is protected indirectly, by allowing only trusted users to operate on the objects. The access control decisions of these models depend either on the identity of the user, or on the attributes of the processes the user can execute and the attributes of the objects. Adoption of these sophisticated models has been slow, likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model: the file system firewall.
It adapts the familiar network firewall model, used to control the data that flows between networked computers, to file system protection. The model can base access control decisions on any system-generated attribute of an access request, e.g., the time of day. Access control decisions are not tied to a single entity, such as the account in traditional discretionary access control or the domain name in DTE; in the file system firewall, access decisions are made upon situations involving multiple entities. A situation is programmable, with predicates on the attributes of the subject, the object and the system, and the file system firewall specifies the appropriate action for each situation. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with Type Enforcement in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model. When the firewall is restricted to a specified part of the system, all other resources are unaffected, which enables a relatively smooth adoption. This, together with the model's familiarity to system administrators, should facilitate adoption and correct use. The user study we conducted on traditional UNIX access control, SELinux and the file system firewall confirmed this: beginner users found the firewall easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
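The rule structure described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not the prototype's code: a "situation" is a predicate over subject, object and system attributes, rules are evaluated first-match like a network firewall chain, and the attribute names (`op`, `path`, `hour`) are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Attrs = Dict[str, object]

@dataclass
class Rule:
    # A "situation" is a programmable predicate over subject, object
    # and system attributes; the rule maps it to an action.
    situation: Callable[[Attrs, Attrs, Attrs], bool]
    action: str  # "allow" or "deny"

def check(rules: List[Rule], subject: Attrs, obj: Attrs, system: Attrs,
          default: str = "deny") -> str:
    """First-match evaluation of an access request against the rule chain."""
    for rule in rules:
        if rule.situation(subject, obj, system):
            return rule.action
    return default

# Example policy: block writes under /etc outside working hours.
rules = [
    Rule(lambda s, o, sys: o["path"].startswith("/etc")
         and s["op"] == "write"
         and not 9 <= sys["hour"] < 17, "deny"),
    Rule(lambda s, o, sys: True, "allow"),  # fallback: allow everything else
]

print(check(rules, {"uid": 1000, "op": "write"},
            {"path": "/etc/passwd"}, {"hour": 23}))        # deny
print(check(rules, {"uid": 1000, "op": "read"},
            {"path": "/home/alice/notes"}, {"hour": 23}))  # allow
```

Note how the time-of-day condition lives inside the rule itself, which is what distinguishes situation-based decisions from identity-only models such as DTE domains.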
Abstract:
Tissue engineering and regenerative medicine have emerged in an effort to generate replacement tissues capable of restoring native tissue structure and function, but because of the complexity of biological systems this has proven much harder than originally anticipated. Silica-based bioactive glasses are popular as biomaterials because of their ability to enhance osteogenesis and angiogenesis. Sol-gel processing methods are popular for generating these materials because they offer: 1) mild processing conditions; 2) easily controlled structure and composition; 3) the ability to incorporate biological molecules; and 4) inherent biocompatibility. The goal of this work was to develop a bioactive vaporization system for the deposition of silica sol-gel particles as a means to modify the material properties of a substrate at the nano- and micro-level to better mimic the instructive conditions of native bone tissue, promoting appropriate osteoblast attachment, proliferation, and differentiation in support of bone tissue regeneration. The size distribution, morphology and degradation behavior of the vapor-deposited sol-gel particles developed here were found to depend on formulation (H2O:TMOS ratio, pH, Ca/P incorporation) and manufacturing (substrate surface character, deposition time). Additionally, deposition of these particles onto substrates can be used to modify overall substrate properties, including hydrophobicity, roughness, and topography. Deposition of Ca/P sol particles induced apatite-like mineral formation on both two- and three-dimensional materials when exposed to body fluids. Gene expression analysis suggests that Ca/P sol particles induce upregulation of osteoblast gene expression (Runx2, OPN, OCN) in preosteoblasts at early culture time points.
Upon further modification, specifically increased particle stability, these Ca/P sol particles have the potential to serve as a simple and unique means of modifying biomaterial surface properties to direct osteoblast differentiation.
Abstract:
Virtual machines (VMs) emulating hardware devices are generally implemented in low-level languages for performance reasons. This results in unmaintainable systems that are difficult to understand. In this paper we report on our experience using the PyPy toolchain to improve the portability and reduce the complexity of whole-system VM implementations. As a case study we implement a VM prototype for a Nintendo Game Boy, called PyGirl, in which the high-level model is separated from low-level VM implementation issues. We shed light on the process of refactoring from a low-level VM implementation in Java to a high-level model in RPython. We show that our whole-system VM written with PyPy is significantly less complex than standard implementations, without substantial loss in performance.
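The separation the paper advocates can be illustrated with a toy fetch-decode-execute loop: the CPU model stays plain, readable high-level Python (the style RPython translates), while performance is left to the PyPy toolchain instead of hand-written low-level code. The opcodes below are invented for illustration and are not the Game Boy instruction set.

```python
# Toy whole-system-VM core: a high-level CPU model with an explicit
# fetch-decode-execute step, free of low-level implementation concerns.
class CPU:
    def __init__(self, program):
        self.memory = list(program)
        self.pc = 0          # program counter
        self.a = 0           # 8-bit accumulator

    def step(self):
        opcode = self.memory[self.pc]
        self.pc += 1
        if opcode == 0x01:           # LDA imm: load immediate into A
            self.a = self.memory[self.pc]; self.pc += 1
        elif opcode == 0x02:         # ADD imm: add immediate to A, wrap at 8 bits
            self.a = (self.a + self.memory[self.pc]) & 0xFF; self.pc += 1
        elif opcode == 0x00:         # HALT
            return False
        return True

cpu = CPU([0x01, 40, 0x02, 2, 0x00])   # A = 40; A += 2; halt
while cpu.step():
    pass
print(cpu.a)  # 42
```

In a real RPython port, a loop like this is what the translation toolchain specializes and compiles, so the model can stay this simple.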
Abstract:
As long as global CO₂ emissions continue to increase annually, long-term committed Earth system changes grow much faster than the changes currently observed. A novel metric linking this future growth to policy decisions today is the mitigation delay sensitivity (MDS), but MDS estimates for Earth system variables other than peak temperature (ΔT max) are missing. Using an Earth System Model of Intermediate Complexity, we show that the current emission increase rate causes a ΔT max increase roughly 3–7.5 times as fast as observed warming, and a millennial steric sea level rise (SSLR) 7–25 times as fast as observed SSLR, depending on the achievable rate of emission reductions after the peak of emissions. These ranges are only slightly affected by the uncertainty range in equilibrium climate sensitivity, which is included in the above values. The extent of ocean acidification at the end of the century also depends strongly on the starting time and rate of emission reductions. The preservable surface ocean area with sufficient aragonite supersaturation for coral reef growth is diminished globally at an MDS of roughly 25%–80% per decade. A near-complete loss of this area becomes unavoidable if mitigation is delayed for a few years to decades. Also with respect to aragonite, 12%–18% of the Southern Ocean surface becomes undersaturated per decade if emission reductions are delayed beyond 2015–2040. We conclude that the consequences of delaying global emission reductions are much better captured if the MDS of relevant Earth system variables is communicated in addition to current trends and total projected future changes.
Abstract:
Vertical integration is grounded in economic theory as a corporate strategy for reducing cost and enhancing efficiency. This dissertation had three purposes. The first was to describe and understand vertical integration theory. The review of the economic theory established vertical integration as a corporate cost-reduction strategy in response to environmental, structural and performance dimensions of the market. The second purpose was to examine vertical integration in the context of the health care industry, which has greater complexity, higher instability, and more unstable demand than other industries, although many of the same dimensions of the market supported a vertical integration strategy. Evidence on the performance of health systems after integration revealed mixed results. Because the market continues to be turbulent, hybrid non-owned integration in the form of alliances has increased to over 40% of urban hospitals. The third purpose of the study was to examine the application of vertical integration in health care and evaluate its effects. The case studied was an alliance formed between a community hospital and a tertiary medical center to facilitate vertical integration of oncology services while maintaining effectiveness and preserving access. The economic benefits for 1934 patients were evaluated in the delivery system before and after integration, with a more detailed economic analysis of breast, lung, colon/rectal, and non-malignant cases. A regression analysis confirmed the relationship between the independent variables of age, sex, location of services, race, stage of disease, and diagnosis, and the dependent variable, cost. The results of the basic regression model, as well as the regression with first-order interaction terms, were statistically significant. The study shows that vertical integration at an intermediate health care system level has economic benefits.
If the pre-integration oncology group had been treated in the post-integration model, the expected cost savings from integration would have been 31.5%. The quality indicators used were access to health care services and research treatment protocols, and access was preserved in the integrated model. Using survival as a direct quality outcome measure, the survival of lung cancer patients was statistically the same before and after integration.
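The kind of cost regression described, ordinary least squares with first-order interaction terms, can be sketched on synthetic data. Everything below is made up for illustration (the covariates, coefficients and the stage-by-period interaction); it is not the dissertation's data or model specification.

```python
import numpy as np

# Synthetic illustration: OLS of cost on patient covariates plus a
# first-order interaction term (stage x post-integration period).
rng = np.random.default_rng(0)
n = 200
age = rng.uniform(30, 85, n)
sex = rng.integers(0, 2, n)            # 0 = male, 1 = female (invented coding)
stage = rng.integers(1, 5, n)          # disease stage 1-4
post = rng.integers(0, 2, n)           # 0 = pre-, 1 = post-integration

# "True" generating model for costs, in arbitrary units.
cost = (5000 + 40 * age + 1500 * stage - 1200 * post
        + 300 * stage * post + rng.normal(0, 500, n))

# Design matrix: intercept, main effects, and the interaction column.
X = np.column_stack([np.ones(n), age, sex, stage, post, stage * post])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
print(np.round(beta, 1))   # recovers roughly [5000, 40, 0, 1500, -1200, 300]
```

With enough observations the fitted coefficients land near the generating values, which is what a significant regression with interactions is testing for.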
Abstract:
The statistical distributions of different software properties have been thoroughly studied in the past, including software size, complexity and the number of defects. In the case of object-oriented systems, these distributions have been found to obey a power law, a common statistical distribution also found in many other fields. However, we have found that for some statistical properties the behavior does not entirely follow a power law, but rather a mixture of a lognormal and a power law distribution. Our study is based on the Qualitas Corpus, a large compendium of diverse Java-based software projects. We measured the Chidamber and Kemerer metrics suite for every file of every Java project in the corpus. Our results show that the range of high values for the different metrics follows a power law distribution, whereas the rest of the range follows a lognormal distribution. This is a pattern typical of so-called double Pareto distributions, also found in empirical studies of other software properties.
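The tail analysis behind such a study can be sketched as follows: draw a sample whose body is lognormal and whose upper tail is Pareto (a double-Pareto-like mixture), then estimate the tail exponent with the Hill maximum-likelihood estimator above a threshold. The sample sizes, parameters and threshold are illustrative, not the paper's.

```python
import numpy as np

# A double-Pareto-like mixture: lognormal body, Pareto tail above xmin = 10.
rng = np.random.default_rng(1)
body = rng.lognormal(mean=1.0, sigma=0.5, size=9000)
tail = (rng.pareto(a=2.0, size=1000) + 1) * 10   # Pareto, xmin = 10, pdf exponent 3
sample = np.concatenate([body, tail])

def hill_alpha(x, xmin):
    """Hill MLE of the pdf exponent alpha (p(x) ~ x^-alpha) for x >= xmin."""
    t = x[x >= xmin]
    return 1 + len(t) / np.sum(np.log(t / xmin))

print(round(hill_alpha(sample, xmin=10.0), 2))   # near 3, the Pareto pdf exponent
```

Below the threshold the same data would be fit with a lognormal, matching the split behavior reported for the Chidamber and Kemerer metrics.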
Abstract:
Diffusion controls the gaseous transport process in soils when advective transport is almost null. Knowledge of soil structure and pore connectivity is critical for understanding and modelling soil aeration, the sequestration or emission of greenhouse gases, and the volatilization of volatile organic chemicals, among other phenomena. In recent decades these issues have attracted increasing attention, as scientists have realized that soil is one of the most complex materials on Earth, within which many biological, physical and chemical processes that support life and affect climate change take place. A quantitative and explicit characterization of soil structure is difficult because of the complexity of the pore space, which is the main reason why most theoretical approaches to soil porosity are idealizations that simplify the system. In this work we propose a more realistic attempt to capture the complexity of the system, developing a model that considers the size and location of pores in order to relate them in a network. In the model we interpret porous soils as heterogeneous networks, where pores are represented by nodes characterized by their size and spatial location, and links represent flows between them. We perform an analysis of the community structure of soil porous media represented as networks. For different real soil samples, modelled as heterogeneous complex networks, spatial communities of pores have been detected depending on the parameter values of the porous soil model used. These models are known as Heterogeneous Preferential Attachment (HPA) models. Through an exhaustive analysis of the model, analytical solutions are obtained for the degree densities and degree distribution of the pore networks generated by the model in the thermodynamic limit, and it is shown that the networks exhibit properties similar to those observed in other complex networks.
With the aim of studying the topological properties of these networks in more detail, the presence of soil pore community structure is studied. The detection of communities of pores, as groups densely connected internally with only sparser connections between groups, could contribute to understanding the mechanisms of diffusion phenomena in soils.
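The generative idea can be sketched as follows. This is a hedged, illustrative variant of heterogeneous preferential attachment, not the paper's exact model: each pore has a size and a 2-D position, and a newly added pore links to existing pores with probability proportional to their size weighted by an inverse-distance kernel, so large nearby pores attract more connections. The kernel, parameters and sample size are all invented.

```python
import math, random

random.seed(42)
pores = []          # each pore: {"pos": (x, y), "size": s}
edges = []          # (i, j) links representing possible flows

def attach_weight(new_pos, pore, delta=1.0):
    """Illustrative HPA-style kernel: big, nearby pores win."""
    dist = math.hypot(new_pos[0] - pore["pos"][0],
                      new_pos[1] - pore["pos"][1])
    return pore["size"] / (delta + dist)

for i in range(200):
    pos = (random.random(), random.random())
    size = random.paretovariate(2.5)       # heterogeneous (heavy-tailed) sizes
    if pores:
        weights = [attach_weight(pos, p) for p in pores]
        total = sum(weights)
        for _ in range(2):                 # link each new pore to m = 2 others
            r, acc = random.uniform(0, total), 0.0
            for j, w in enumerate(weights):
                acc += w
                if acc >= r:
                    edges.append((i, j))
                    break
    pores.append({"pos": pos, "size": size})

degree = [0] * len(pores)
for i, j in edges:
    degree[i] += 1; degree[j] += 1
print(len(edges), max(degree))
```

The heavy-tailed size distribution plus preferential attachment is what produces the broad (power-law-like) degree distributions the analysis describes.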
Abstract:
Reducing accidents in highway and urban environments is the main motivation for the research and development of driver assistance systems, also called ADAS (Advanced Driver Assistance Systems). In recent years many applications of these systems have appeared in commercial vehicles: ABS, Cruise Control (CC), parking assistance and warning systems (including GPS), among others. However, the implementation of driver assistance systems acting on the steering wheel is more limited, because of its complexity and sensitivity. This paper focuses on the development, testing and implementation of a driver assistance system for controlling the steering wheel in curves. The system is divided into two levels: an inner control loop that executes the position and speed targets, softening the action on the steering wheel, and an outer control loop (based on fuzzy logic) that sends the reference to the inner loop according to the environment and vehicle conditions. Tests have been performed on different curves and at different speeds, and the system has been validated in a commercial vehicle with satisfactory results.
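The two-level architecture can be sketched in a few lines: an outer rule-based (fuzzy-style) loop turns curve curvature and vehicle speed into a steering-angle reference, and an inner loop tracks that reference with rate limiting, which is one way to "soften" the action on the wheel. All memberships, gains and limits below are invented for illustration and are not the paper's controller.

```python
def outer_loop(curvature, speed_kmh):
    """Fuzzy-style reference: sharper curves need more angle, high speed backs off."""
    sharp = min(abs(curvature) / 0.05, 1.0)          # membership: "curve is sharp"
    fast = min(speed_kmh / 100.0, 1.0)               # membership: "vehicle is fast"
    gain = 20.0 + 10.0 * sharp * (1.0 - 0.5 * fast)  # illustrative rule surface
    return gain * curvature                          # steering angle reference, deg

def inner_loop(angle, reference, max_rate=2.0):
    """Rate-limited tracking of the reference (one control tick)."""
    error = reference - angle
    return angle + max(-max_rate, min(max_rate, error))

angle = 0.0
ref = outer_loop(curvature=0.03, speed_kmh=60.0)
for _ in range(20):                                  # run the inner loop to settle
    angle = inner_loop(angle, ref)
print(round(ref, 3), round(angle, 3))                # inner loop reaches the reference
```

Splitting the problem this way keeps the safety-critical smoothing in the fast inner loop while the fuzzy logic only has to reason about the driving situation.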
Abstract:
Multichannel audio has advanced by leaps and bounds in recent years, not only in playback techniques but also in recording techniques. This project therefore brings together both ends of the audio chain: a microphone array, the EigenMike32 from MH Acoustics, and a playback system based on Wave Field Synthesis technology, installed by Iosono at Jade Hochschule Oldenburg. To link these two points of the audio chain, two different kinds of coding are proposed: reproduction of the EigenMike32's horizontal capsule ring, and third-order Ambisonics (Higher Order Ambisonics, HOA), a coding technique based on Spherical Harmonics through which the acoustic field itself is simulated instead of the individual sound sources. Both were developed in the Matlab environment, supported by the Isophonics script collection called Spatial Audio Matlab Toolbox. To evaluate them, a series of listening tests was carried out in which they were compared with recordings made simultaneously with a Dummy Head, assumed to be the method closest to human hearing. These tests also included recordings made with a Schoeps Double MS (DMS) setup, which are explained in the companion project "3D audio rendering through Ambisonics techniques: from multi-microphone recordings (DMS Schoeps) to a WFS system, through Matlab". Each test consisted of a battery of 4 audio excerpts repeated 4 times for each recorded situation (a conversation, a class, a street, and a university canteen). The results were unexpected: the third-order HOA coding fell below the "Good" rating, possibly because material intended for a three-dimensional array was reproduced on a two-dimensional one. On the other hand, the coding that consisted of extracting the horizontal-plane microphones maintained the "Good" rating in all situations. It is concluded that HOA should be tested further with deeper knowledge of Spherical Harmonics, while the other, much simpler coder can be used for situations without much spatial complexity.
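The Ambisonics idea, simulating the acoustic field rather than individual sources, reduces at its simplest to encoding a mono source's direction into spherical-harmonic channels. The sketch below shows classic first-order B-format (W, X, Y, Z) for brevity; the project used third order, which adds more harmonic channels but follows the same principle. The 1/√2 weight on W is the traditional B-format convention.

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode a mono sample at (azimuth, elevation) into B-format W, X, Y, Z."""
    w = sample / math.sqrt(2.0)                           # omnidirectional channel
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back figure-8
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right figure-8
    z = sample * math.sin(elevation)                      # up-down figure-8
    return w, x, y, z

# A source at 45 degrees to the front-left, in the horizontal plane:
w, x, y, z = encode_first_order(1.0, math.radians(45), 0.0)
print(round(w, 3), round(x, 3), round(y, 3), round(z, 3))  # 0.707 0.707 0.707 0.0
```

For a horizontal-only (2-D) playback rig like the WFS array here, the Z channel (and the other height harmonics at higher orders) carries information the system cannot reproduce, which is consistent with the degradation the tests observed.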
Abstract:
Soil is well recognized as a highly complex system. The interaction and coupling of physical, chemical, and biological processes and phenomena occurring in the soil environment at different spatial and temporal scales are the main reasons for such complexity, and appropriate interdisciplinary methodologies are needed to characterize soil porous systems. Four different real soil samples, presenting different textures, have been modeled as heterogeneous complex networks by applying a model known as heterogeneous preferential attachment. An analytical study of the degree distributions in the soil model shows multiscaling behavior in the connectivity degrees, leaving an empirically testable signature of heterogeneity in the topology of soil pore networks. We also show that the power-law scaling in the degree distribution is a robust trait of the soil model. Last, the detection of spatial pore communities, as densely connected groups with only sparser connections between them, has been studied for the first time in these soil networks. Our results show that the presence of these communities depends on the parameter values used to construct the network. These findings could contribute to understanding the mechanisms of diffusion phenomena in soils, such as gas and water diffusion and the development and dynamics of microorganisms, among others.
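The "densely connected groups with only sparser connections between them" criterion is usually scored with Newman's modularity Q. A minimal worked example, on an invented toy pore network rather than the paper's soil data: two 5-node cliques joined by a single bridge edge, with the clique partition compared against an arbitrary evens/odds split.

```python
# Toy pore network: two 5-node cliques (pores 0-4 and 5-9) plus one bridge (4, 5).
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)]
edges += [(i, j) for i in range(5, 10) for j in range(i + 1, 10)]
edges += [(4, 5)]

degree = {n: 0 for n in range(10)}
for i, j in edges:
    degree[i] += 1; degree[j] += 1
m = len(edges)

def modularity(partition):
    """Newman modularity Q = sum_c (l_c/m - (d_c/2m)^2): high when links
    fall inside groups more often than expected at random."""
    q = 0.0
    for group in partition:
        l_c = sum(1 for i, j in edges if i in group and j in group)
        d_c = sum(degree[n] for n in group)
        q += l_c / m - (d_c / (2 * m)) ** 2
    return q

cliques = [set(range(5)), set(range(5, 10))]          # the "true" communities
evens_odds = [set(range(0, 10, 2)), set(range(1, 10, 2))]
print(round(modularity(cliques), 3))     # 0.452: strong community structure
print(round(modularity(evens_odds), 3))  # -0.119: worse than random grouping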
Abstract:
Over the last decade, Grid computing paved the way for a new level of large-scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources belonging to several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large-scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large-scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of system behavior are needed. Large-scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one. When the same procedures are adapted to Grid and cloud computing, the vast complexity of these systems can make the task extremely complicated. But the complexity of large-scale distributed systems may be only a matter of perspective: it could be possible to understand the Grid or cloud behavior as a single entity, instead of as a set of resources. This abstraction can provide a different understanding of the system, describing large-scale behavior and global events that would probably not be detected by analyzing each resource separately.
In this work we define a theoretical framework that combines both ideas, multiple resources and a single entity, to develop large-scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
Abstract:
A unified low-complexity sign-bit-correlation-based symbol timing synchronization scheme for a Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) Ultra Wideband (UWB) receiver system is proposed. Using the time-domain sequence of the packet/frame synchronization preamble, the proposed scheme detects the upcoming MB-OFDM symbol and estimates the exact boundary of the start of the Fast Fourier Transform (FFT) window. The proposed algorithm is implemented using an efficient hardware-software co-simulation methodology, and the effectiveness of the proposed synchronization scheme and the optimization criteria is confirmed by hardware implementation results.
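The core of a sign-bit correlator can be sketched as follows: slide the known preamble's sign sequence over the sign bits of the received samples and pick the lag with the highest agreement count, which marks where the FFT window would start. Reducing both operands to signs is what removes the multipliers and keeps the hardware complexity low. The preamble and channel below are made up; the real MB-OFDM preamble is defined by the standard.

```python
import random

random.seed(3)
preamble = [random.choice([-1.0, 1.0]) for _ in range(32)]  # invented preamble

# Received stream: noise, then the (attenuated, noisy) preamble, then noise.
true_start = 17
noise = lambda: random.gauss(0.0, 1.0)
rx = [noise() for _ in range(true_start)]
rx += [0.5 * p + random.gauss(0.0, 0.1) for p in preamble]
rx += [noise() for _ in range(20)]

sign = lambda v: 1 if v >= 0 else -1
pre_signs = [sign(p) for p in preamble]

def sign_correlate(rx, pre_signs):
    """Return the lag maximizing sign-bit agreement (multiplier-free metric)."""
    best_lag, best_score = 0, -1
    for lag in range(len(rx) - len(pre_signs) + 1):
        score = sum(1 for k, s in enumerate(pre_signs)
                    if sign(rx[lag + k]) == s)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

print(sign_correlate(rx, pre_signs))  # 17, the true symbol boundary
```

In hardware the per-lag agreement count is just XNORs of sign bits feeding an adder tree, which is why the scheme is attractive for a UWB receiver.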
Abstract:
Today's motivation for autonomous systems research stems from the fact that networked environments have reached a level of complexity and heterogeneity that makes their control and management by human administrators alone more and more difficult. The optimisation of performance metrics for the air traffic management system, as in other networked systems, has become more complex with the increasing number of flights, capacity constraints, environmental factors and safety regulations. It is anticipated that a new structure of planning layers and the introduction of higher levels of automation will reduce complexity and optimise the performance metrics of the air traffic management system. This paper discusses the complexity of optimising air traffic management performance metrics and proposes a way forward based on higher levels of automation.
Abstract:
One medium-term strategy for helping in the management of complexity is the introduction of a conceptual complexity component at the very centre of university curricula. In very few areas is the growth of complexity as evident as in the information technologies (ITs), the focus of the work presented in this paper. We have therefore developed an integrated way of tackling the specific field of information technologies by means of an approach to complexity. This paper describes the guidelines of our research effort, placing an emphasis on informatics. Concepts of complexity based on the system metaphor have been substantially drawn upon in this exercise and are presented in some detail. Also described is a didactic experiment conducted by the author, designed to provide a new and integrating approach to university curricula for future professionals. The students' "discovery" of complexity is the focal point of the experiment. The findings of this effort are encouraging and call for the continuation and expansion of the experiment.