955 results for System-Level Models
Abstract:
Storage is a central part of computing. Driven by an exponentially increasing content generation rate and a widening performance gap between memory and secondary storage, researchers are on a perennial quest to push for further innovation. This has resulted in novel ways to "squeeze" more capacity and performance out of current and emerging storage technology. Adding intelligence and leveraging new types of storage devices has opened the door to a whole new class of optimizations to save cost, improve performance, and reduce energy consumption. In this dissertation, we first develop, analyze, and evaluate three storage extensions. Our first extension tracks application access patterns and writes data in the way individual applications most commonly access it, to benefit from the sequential throughput of disks. Our second extension uses a lower-power flash device as a cache to save energy and turn off the disk during idle periods. Our third extension is designed to leverage the characteristics of both disks and solid state devices by placing data on the most appropriate device to improve performance and save power. In developing these systems, we learned that extending the storage stack is a complex process. Implementing new ideas incurs a prolonged and cumbersome development process and requires developers to have advanced knowledge of the entire system to ensure that extensions accomplish their goal without compromising data recoverability. Furthermore, storage administrators are often reluctant to deploy specific storage extensions without understanding how they interact with other extensions and whether the extension ultimately achieves the intended goal. We address these challenges with a combination of approaches. First, we simplify the storage extension development process with system-level infrastructure that implements core functionality commonly needed for storage extension development.
Second, we develop a formal theory to help administrators deploy storage extensions while guaranteeing that the given high-level goals are satisfied. There are, however, some cases for which our theory is inconclusive. For such scenarios we present an experimental methodology that allows administrators to pick the extension that performs best for a given workload. Our evaluation demonstrates the benefits of both the infrastructure and the formal theory.
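To illustrate the idea behind the third extension (placing data on the device it suits best), here is a minimal toy sketch of a hybrid disk/SSD placement policy. It is our own illustration, not the dissertation's implementation: the class name `HybridPlacer`, the hotness threshold, and the sequentiality test are all hypothetical.

```python
# Toy placement policy (illustrative, not the dissertation's design):
# frequently or randomly accessed blocks go to the SSD; cold, sequential
# runs go to the disk to exploit its streaming throughput.

class HybridPlacer:
    def __init__(self, hot_threshold=4):
        self.access_count = {}   # block id -> accesses seen so far
        self.last_block = None   # previous block id, to detect sequential runs
        self.hot_threshold = hot_threshold

    def record(self, block):
        """Record one access and return the suggested target device."""
        self.access_count[block] = self.access_count.get(block, 0) + 1
        sequential = self.last_block is not None and block == self.last_block + 1
        self.last_block = block
        hot = self.access_count[block] >= self.hot_threshold
        # Hot or random blocks benefit from the SSD's negligible seek cost;
        # cold sequential runs are what disks serve well.
        return "ssd" if (hot or not sequential) else "disk"

placer = HybridPlacer()
devices = [placer.record(b) for b in [10, 11, 12, 50, 50, 50, 50, 51]]
```

The sequential run 10-12 lands on disk after its first (random) access, while the repeatedly hit block 50 becomes hot and stays on the SSD.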
Abstract:
Catering to society's demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increasing power consumption translates directly into high chip temperatures, which not only raise packaging/cooling costs, but also degrade the performance, reliability, and life span of computing systems. Moreover, high chip temperature also greatly increases the leakage power consumption, which is becoming more and more significant with the continuous scaling of transistor sizes. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems. In this dissertation, we address the power/thermal issues from the system-level perspective. Specifically, we seek to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We further proposed a novel real-time scheduling method, "M-Oscillations", to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We then extended our research from single-core to multi-core platforms. We investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method to calculate the energy consumption of a given voltage schedule on a multi-core platform.
Finally, we conclude the dissertation with an elaborated discussion of future extensions of our research.
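The intuition behind speed oscillation can be shown with a few lines of code. The sketch below is our own toy model, not the dissertation's: it uses a lumped RC thermal model dT/dt = a*P - b*T with dynamic power P = s**3 for speed s, and the constants a, b and the schedules are made-up illustrations. Splitting one long busy interval into many short bursts with the same total work lowers the peak temperature.

```python
# Minimal sketch (assumed RC thermal model, not the dissertation's):
# dT/dt = a * s**3 - b * T, integrated with forward Euler over a
# piecewise-constant speed schedule.

def simulate(speeds, dt=0.001, a=50.0, b=1.0, T0=0.0):
    """speeds: list of (speed, duration) segments. Returns peak temperature."""
    T, peak = T0, T0
    for s, dur in speeds:
        for _ in range(int(dur / dt)):
            T += dt * (a * s**3 - b * T)   # heating minus Newtonian cooling
            peak = max(peak, T)
    return peak

# Same total work (speed 1.0 for 2 of 4 time units) delivered two ways:
single_burst = [(1.0, 2.0), (0.0, 2.0)]        # run everything, then idle
oscillating = [(1.0, 0.25), (0.0, 0.25)] * 8   # many short run/idle cycles
peak_single = simulate(single_burst)
peak_osc = simulate(oscillating)
```

With these assumed constants, the oscillating schedule never lets the chip approach its steady-state hot temperature, so its peak is markedly lower than the single burst's.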
Abstract:
For the past several decades, we have experienced tremendous growth, in both scale and scope, of real-time embedded systems, thanks largely to advances in IC technology. However, the traditional approach of boosting performance by increasing CPU frequency is now a thing of the past. Researchers from both industry and academia are turning their focus to multi-core architectures for continuous improvement of computing performance. In our research, we seek to develop efficient scheduling algorithms and analysis methods for the design of real-time embedded systems on multi-core platforms. Real-time systems are those in which response time is as critical as the logical correctness of computational results. In addition, a variety of stringent constraints, such as power/energy consumption, peak temperature, and reliability, are also imposed on these systems. Therefore, real-time scheduling plays a critical role in the system-level design of such computing systems. We started our research by addressing timing constraints for real-time applications on multi-core platforms, and developed both partitioned and semi-partitioned scheduling algorithms to schedule fixed-priority, periodic, hard real-time tasks on multi-core platforms. We then extended our research by taking temperature constraints into consideration. We developed a closed-form solution to capture temperature dynamics for a given periodic voltage schedule on multi-core platforms, and also developed three methods to check the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research by incorporating power/energy constraints with thermal awareness into our research problem. We investigated the energy estimation problem on multi-core platforms, and developed a computationally efficient method to calculate the energy consumption of a given voltage schedule on a multi-core platform.
In this dissertation, we present our research in detail and demonstrate the effectiveness and efficiency of our approaches with extensive experimental results.
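As a concrete illustration of partitioned scheduling in general (not the dissertation's specific algorithms), the sketch below assigns periodic tasks to cores first-fit, admitting a task on a core only while the core passes the classical Liu & Layland rate-monotonic utilization test n*(2^(1/n) - 1). Task parameters are made up.

```python
# Hedged sketch of partitioned fixed-priority scheduling: first-fit
# bin-packing with the Liu & Layland RM schedulability bound per core.

def ll_bound(n):
    """Liu & Layland utilization bound for n tasks under rate-monotonic."""
    return n * (2 ** (1.0 / n) - 1)

def first_fit(tasks, num_cores):
    """tasks: list of (wcet, period). Returns per-core task lists, or None."""
    cores = [[] for _ in range(num_cores)]
    # Pack heaviest tasks first: a common heuristic for fewer failures.
    for wcet, period in sorted(tasks, key=lambda t: t[0] / t[1], reverse=True):
        for core in cores:
            n = len(core) + 1
            util = sum(c / p for c, p in core) + wcet / period
            if util <= ll_bound(n):        # core still passes the RM test
                core.append((wcet, period))
                break
        else:
            return None                    # no core can accept this task
    return cores

mapping = first_fit([(1, 4), (1, 5), (2, 8), (3, 10)], num_cores=2)
```

The bound is sufficient but not necessary, so this sketch may reject task sets that an exact response-time analysis would admit; that conservatism is the usual trade-off for a constant-time test.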
Abstract:
In this thesis, we propose the use of device-to-device (D2D) communications to extend the coverage area of active base stations for public safety communications with partial network coverage. A 3GPP standard-compliant D2D system-level simulator was developed for HetNets and public safety scenarios and used to evaluate the performance of D2D discovery and communications underlying cellular networks. For D2D discovery, the benefits of time-domain inter-cell interference coordination (ICIC) approaches using almost blank subframes were evaluated. The use of multi-hop is also proposed to further improve the performance of the D2D discovery process. Finally, the possibility of using multi-hop D2D communications to extend the coverage area of active base stations was evaluated. Improvements in energy and spectral efficiency, when compared with the case of direct UE-eNB communications, were demonstrated. Moreover, UE power control techniques were applied to reduce the effects of interference from neighboring D2D links.
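A simple link-budget calculation shows why multi-hop relaying can beat a long direct link. The sketch below is ours, not the thesis's simulator: the 23 dBm UE power, path-loss exponent 3.5, 30 dB offset, noise floor, and distances are all assumed illustrative values, and the relay penalty is modeled crudely as a factor of two (half-duplex relaying).

```python
# Toy comparison: direct UE-eNB link vs. a two-hop D2D relay path,
# using power-law path loss and the Shannon spectral efficiency formula.
import math

def snr_linear(dist_m, tx_dbm=23.0, exponent=3.5, noise_dbm=-100.0):
    """Received SNR for an assumed path loss 30 + 10*n*log10(d) dB."""
    path_loss_db = 30.0 + 10.0 * exponent * math.log10(dist_m)
    return 10 ** ((tx_dbm - path_loss_db - noise_dbm) / 10.0)

def spectral_eff(dist_m):
    return math.log2(1.0 + snr_linear(dist_m))        # bits/s/Hz

direct = spectral_eff(600.0)                          # one 600 m hop
# Two 300 m hops; end-to-end rate halves because the relay cannot
# transmit and receive on the same resources simultaneously.
two_hop = 0.5 * min(spectral_eff(300.0), spectral_eff(300.0))
```

With these assumptions the shorter hops more than compensate for the relaying penalty; at short direct distances the ordering can reverse, which is why coverage extension is the natural use case.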
Abstract:
Today, smartphones have revolutionized the wireless communication industry and ushered in an era of mobile data. To cater for the ever-increasing data traffic demand, it is of utmost importance to have more spectrum resources, and sharing under-utilized spectrum bands is an effective solution. In particular, the 4G broadband Long Term Evolution (LTE) technology and its foreseen 5G successor will benefit immensely if their operation can be extended to the under-utilized unlicensed spectrum. In this thesis, we first analyze WiFi 802.11n and LTE coexistence performance in the unlicensed spectrum, considering multi-layer cell layouts, through system-level simulations. We consider a time division duplexing (TDD) LTE system with an FTP traffic model for performance evaluation. Simulation results show that WiFi performance is far more vulnerable to LTE interference, while LTE performance is degraded only slightly. Based on these initial findings, we propose a Q-learning-based dynamic duty cycle selection technique for configuring LTE transmission gaps, so that a satisfactory throughput is maintained for both LTE and WiFi systems. Simulation results show that the proposed approach can enhance overall capacity by 19% and WiFi capacity by 77%, hence enabling effective coexistence of LTE and WiFi systems in the unlicensed band.
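The shape of such a duty-cycle learner can be sketched in a few lines. This is our own illustration, not the thesis's scheme: the reward function below is a made-up stand-in for combined LTE + WiFi throughput (chosen to peak at a moderate duty cycle), and the stateless bandit-style update is a simplification of full Q-learning.

```python
# Illustrative Q-learning-flavoured duty cycle selection (hypothetical
# reward model): the LTE cell repeatedly picks a duty cycle and learns
# which one maximizes the combined-throughput reward.
import random

random.seed(1)
actions = [0.2, 0.4, 0.6, 0.8]           # fraction of time LTE transmits

def reward(duty):
    lte = duty ** 0.5                     # diminishing returns in LTE airtime
    wifi = 0.9 * (1.0 - duty) ** 0.5      # WiFi degrades as it is starved
    return lte + wifi                     # maximized near duty ~ 0.55

Q = {a: 0.0 for a in actions}
alpha, epsilon = 0.1, 0.2
for episode in range(2000):
    if random.random() < epsilon:
        a = random.choice(actions)        # explore
    else:
        a = max(Q, key=Q.get)             # exploit current estimates
    Q[a] += alpha * (reward(a) - Q[a])    # stateless value update

best = max(Q, key=Q.get)
```

Under this assumed reward, the learner settles on 0.6, the available action closest to the optimum; in the thesis the reward would instead come from measured LTE and WiFi throughputs.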
Abstract:
The increasing demand for Internet data traffic in wireless broadband access networks requires both the development of efficient, novel wireless broadband access technologies and the allocation of new spectrum bands for that purpose. The introduction of a great number of small cells in cellular networks, allied to the complementary adoption of Wireless Local Area Network (WLAN) technologies in unlicensed spectrum, is one of the most promising concepts to meet this demand. One alternative is the aggregation of Industrial, Scientific and Medical (ISM) unlicensed spectrum to licensed bands, using wireless networks defined by the Institute of Electrical and Electronics Engineers (IEEE) and the Third Generation Partnership Project (3GPP). While IEEE 802.11 (Wi-Fi) networks are aggregated to Long Term Evolution (LTE) small cells via LTE/WLAN Aggregation (LWA), in proposals like LTE Unlicensed (LTE-U) and License Assisted Access (LAA) the LTE air interface itself is used for transmission on the unlicensed band. Wi-Fi technology is widespread and operates in the same 5 GHz ISM spectrum bands as the LTE proposals, so the coexistence of both technologies in the same spectrum bands may decrease performance. Besides, there is a need to improve Wi-Fi operation to support scenarios with a large number of neighboring Overlapping Basic Service Set (OBSS) networks, with a large number of Wi-Fi nodes (i.e. dense deployments). It has long been known that overall Wi-Fi performance falls sharply as the number of Wi-Fi nodes sharing the channel increases, so mechanisms are needed to increase its spectral efficiency. This work is dedicated to the study of coexistence between different wireless broadband access systems operating in the same unlicensed spectrum bands, and of how to solve the coexistence problems via distributed coordination mechanisms. The problem of coexistence between different networks (i.e.
LTE and Wi-Fi) and the problem of coexistence between different networks of the same technology (i.e. multiple Wi-Fi OBSSs) are analyzed both qualitatively and quantitatively via system-level simulations, and the main issues to be faced are identified from these results. From that, distributed coordination mechanisms are proposed and evaluated via system-level simulations, both for the inter-technology and the intra-technology coexistence problem. Results indicate that the proposed solutions provide significant gains when compared to the situation without distributed coordination.
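The claim that Wi-Fi performance collapses as contenders multiply can be seen in a toy slotted-contention model. This is an illustration of the general phenomenon, not the thesis's simulator: n nodes each transmit in a slot with a fixed probability p, and a slot is useful only when exactly one node transmits.

```python
# Toy contention model: aggregate efficiency n*p*(1-p)**(n-1), i.e. the
# probability that a slot carries exactly one (collision-free) transmission.

def slot_efficiency(n, p=0.1):
    return n * p * (1.0 - p) ** (n - 1)

few = slot_efficiency(5)           # lightly loaded channel
many = slot_efficiency(50)         # dense deployment, same fixed p
# A coordinated (distributed-schedule-like) choice p = 1/n keeps the
# efficiency near the optimum (1 - 1/n)**(n-1), roughly 1/e.
coordinated = slot_efficiency(50, p=1.0 / 50)
```

With an uncoordinated fixed p, efficiency falls by an order of magnitude between 5 and 50 nodes, while adapting p to the node count (a crude proxy for distributed coordination) keeps it roughly constant, which is the motivation for the coordination mechanisms studied here.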
Abstract:
Precise relative sea level (RSL) data are important for inferring regional ice sheet histories, as well as helping to validate numerical models of ice sheet evolution and glacial isostatic adjustment. Here we develop a new RSL curve for Fildes Peninsula, South Shetland Islands (SSIs), a sub-Antarctic archipelago peripheral to the northern Antarctic Peninsula ice sheet, by integrating sedimentary evidence from isolation basins with geomorphological evidence from raised beaches. This combined approach yields not only a Holocene RSL curve, but also the spatial pattern of how RSL change varied across the archipelago. The curve shows a mid-Holocene RSL highstand on Fildes Peninsula at 15.5 m above mean sea level between 8000 and 7000 cal a BP. Subsequently RSL gradually fell as a consequence of isostatic uplift in response to regional deglaciation. We propose that isostatic uplift occurred at a non-steady rate, with a temporary pause in ice retreat ca. 7200 cal a BP, leading to a short-lived RSL rise of ~1 m and forming a second peak to the mid-Holocene highstand. Two independent approaches were taken to constrain the long-term tectonic uplift rate of the SSIs at 0.22-0.48 m/ka, placing the tectonic contribution to the reconstructed RSL highstand between 1.4 and 2.9 m. Finally, we make comparisons to predictions from three global sea level models.
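As a back-of-envelope check of how the tectonic contribution scales, the arithmetic is simply uplift rate times elapsed time. The calculation below is ours, with an assumed ~6.5 ka elapsed since the mid-Holocene highstand rather than the paper's exact age model, so it brackets rather than reproduces the reported 1.4-2.9 m range.

```python
# Our arithmetic sketch (assumed elapsed time, not the paper's age model):
# steady tectonic uplift contributes rate * elapsed_time to the apparent
# RSL fall since the highstand.

def tectonic_contribution(rate_m_per_ka, elapsed_ka):
    return rate_m_per_ka * elapsed_ka

elapsed_ka = 6.5                           # assumed time since ~7000 cal a BP
low = tectonic_contribution(0.22, elapsed_ka)
high = tectonic_contribution(0.48, elapsed_ka)
```

The resulting ~1.4-3.1 m span is of the same magnitude as the paper's 1.4-2.9 m, the residual being attributable to glacial isostatic uplift.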
Abstract:
The Physical Internet (PI) is an initiative that identifies several symptoms of inefficiency and unsustainability in logistics systems and addresses them by proposing a new paradigm called hyperconnected logistics. Similar to the Digital Internet, which links thousands of personal and local computer networks, the PI will interconnect today's fragmented logistics systems. Its main goal is to improve the performance of logistics systems from economic, environmental, and social perspectives. Focusing specifically on distribution systems, this thesis questions the order of magnitude of the performance gains achievable by exploiting PI-enabled hyperconnected distribution. It also addresses the characterization of hyperconnected distribution planning. To answer the first question, an exploratory research approach based on optimization modeling is applied, in which current and prospective distribution systems are modeled. A set of realistic business samples is then created, and their economic and environmental performance is assessed while targeting multiple social performance goals. A conceptual planning framework, including mathematical modeling, is proposed to support decision-making in hyperconnected distribution systems. Based on the results of our study, we demonstrate that substantial gains can be obtained by migrating to hyperconnected distribution. We also show that the magnitude of the gain varies with the characteristics of the business activities and the targeted social performance. Since the Physical Internet is a new topic, Chapter 1 briefly introduces the PI and hyperconnectivity. Chapter 2 discusses the foundations, objective, and methodology of the research. The challenges encountered during this research are described and the type of contributions targeted is highlighted.
Chapter 3 presents the optimization models. Influenced by the characteristics of current and prospective distribution systems, three distribution-system-based models are developed. Chapter 4 covers the characterization of the business samples, as well as the modeling and calibration of the parameters used in the models. The results of the exploratory research are presented in Chapter 5. Chapter 6 describes the conceptual planning framework for hyperconnected distribution. Chapter 7 summarizes the thesis and highlights its main contributions; it also identifies the limitations of the research and potential avenues for future work.
Abstract:
Policymakers make many demands of our schools to produce academic success. At the same time, community organizations, government agencies, faith-based institutions, and other groups often provide supports to students and their families, especially those from high-poverty backgrounds, that are meant to improve education but are often insufficient, uncoordinated, or redundant. In many cases, these institutions lack access to schools and school leaders. What's missing from the dominant education reform discourse is a coordinated, education-focused approach that mobilizes community assets to effectively improve academic and developmental outcomes for students. This study explores how education-focused comprehensive community change initiatives (CCIs) that utilize a partnership approach are organized and sustained. In this study, I examine three research questions: 1. Why and how do school system-level community change initiative (CCI) partnerships form? 2. What are the organizational, financial, and political structures that support sustainable CCIs? What, in particular, are their connections to the school systems they seek to impact? 3. What are the leadership functions and structures found within CCIs? How are leadership functions distributed across schools and agencies within communities? To answer these questions, I used a cross-case study approach that employed a secondary analysis of data collected as part of a larger research study sponsored by a national organization. The original study design included site visits and extended interviews with educators, community leaders, and practitioners about community school initiatives, one type of CCI.
This study demonstrates that characteristics of sustained education-focused CCIs include leaders who are critical to starting the CCIs and are willing to collaborate across institutions, a focus on community problems, building on previous efforts, strategies to improve service delivery, a focus on education and schools in particular, organizational arrangements that create shared leadership and ownership of the CCI, an intermediary to support the initial vision and collaborative leadership groups, diversified funding approaches, and political support. These findings add to the literature on the growing number of education-focused CCIs. The study's primary recommendation, that institutions need to work across boundaries in order to sustain CCIs organizationally, financially, and politically, can help policymakers as they develop new collaborative approaches to achieving educational goals.
Abstract:
Network intrusion detection sensors are usually built around low-level models of network traffic. This means that their output is of a similarly low level and, as a consequence, is difficult to analyze. Intrusion alert correlation is the task of automating some of this analysis by grouping related alerts together. Attack graphs provide an intuitive model for such analysis. Unfortunately, alert flooding attacks can still cause a loss of service on sensors, and when performing attack graph correlation, a large number of extraneous alerts can be included in the output graph. This obscures the fine structure of genuine attacks and makes them more difficult for human operators to discern. This paper explores modified correlation algorithms which attempt to minimize the impact of such attacks.
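The core idea of attack graph correlation can be sketched concretely. The toy below is our own model, not the paper's algorithms: an alert is linked to an earlier alert only when the earlier alert plausibly "enables" it (here, crudely, when the earlier alert's target host is the later alert's source), so unrelated flood alerts stay isolated instead of bloating the graph.

```python
# Illustrative correlation sketch (hypothetical enabling rule): build
# attack-graph edges by linking each alert to its nearest enabling
# predecessor, if any.

def correlate(alerts):
    """alerts: time-sorted list of (time, src, dst) tuples.
    Returns (predecessor_index, alert_index) edges."""
    edges = []
    for i, (t, src, dst) in enumerate(alerts):
        for j in range(i - 1, -1, -1):      # scan backwards: nearest first
            pt, psrc, pdst = alerts[j]
            if pdst == src:                 # predecessor reached our source
                edges.append((j, i))
                break
    return edges

alerts = [
    (1, "attacker", "web"),    # 0: initial compromise of the web host
    (2, "noise1", "noise2"),   # 1: flood alert, correlates with nothing
    (3, "web", "db"),          # 2: pivot from the web host
    (4, "db", "backup"),       # 3: lateral movement
]
edges = correlate(alerts)
```

The flood alert at index 1 yields no edge, so the recovered chain 0 -> 2 -> 3 exposes the genuine attack path; real correlation systems use far richer pre/postcondition matching than this single-field rule.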
Abstract:
This portfolio thesis describes work undertaken by the author under the Engineering Doctorate program of the Institute for System Level Integration. It was carried out in conjunction with the sponsor company Teledyne Defence Limited. A radar warning receiver is a device used to detect and identify the emissions of radars. Such receivers were originally developed during the Second World War and are found today on a variety of military platforms as part of the platform's defensive systems. Teledyne Defence has designed and built components and electronic subsystems for the defence industry since the 1970s. This thesis documents part of the work carried out to create Phobos, Teledyne Defence's first complete radar warning receiver. Phobos was designed to be the first low-cost radar warning receiver. This was made possible by the reuse of existing Teledyne Defence products, commercial off-the-shelf hardware, and advanced UK government algorithms. The challenges of this integration are described and discussed, with detail given of the software architecture and the development of the embedded application. The performance of the embedded system as a whole is described and qualified within the context of a low-cost system.
Abstract:
Public-sector bodies are currently carrying out studies on Integrated Management Systems, driven by the need to adopt new management techniques capable of responding to new information requirements. In general terms, public bodies are at a turning point in the way they operate, facing the need for new information systems that can respond to the demands of the New Public Management. The New Public Management leads to higher motivation, improving the achievement of results and modernizing the link between the control of public expenditure and accountability within State bodies; the standardization of criteria emerges as one of the main requirements for implementing a public accounting system that serves as a support instrument for information users and, in particular, for command and senior management bodies. This work analyzes the case of the Portuguese Army, an example of a public body that took advantage of the mandatory adoption of the new State Financial Administration Regime to promote the implementation of an Integrated Management System capable of responding to the new information requirements. It also analyzes the operational strategies and organizational refocusing used by the Portuguese Army to enable the development and implementation of a system based on standardized criteria, ensuring the requirements and management techniques needed to develop a public accounting system that functions as a decision-support instrument.
Abstract:
The concept of patient activation has gained traction as the term referring to patients who understand their role in the care process and have "the knowledge, skills and confidence" necessary to manage their illness over time (Hibbard & Mahoney, 2010). Improving health outcomes for vulnerable and underserved populations who bear a disproportionate burden of health disparities presents unique challenges for nurse practitioners who provide primary care in nurse-managed health centers. Evidence that activation improves patient self-management is prompting the search for theory-based self-management support interventions to activate patients for self-management, improve health outcomes, and sustain long-term gains. Yet no previous studies have investigated the relationship between Self-Determination Theory (SDT; Deci & Ryan, 2000) and activation. The major purpose of this study, guided by the Triple Aim (Berwick, Nolan, & Whittington, 2008) and nested in the Chronic Care Model (Wagner et al., 2001), was to examine the degree to which two constructs, Autonomy Support and Autonomous Motivation, independently predicted Patient Activation, controlling for covariates. For this study, 130 nurse-managed health center patients completed a 38-item online survey onsite. The two independent measures were the 6-item Modified Health Care Climate Questionnaire (mHCCQ; Williams, McGregor, King, Nelson, & Glasgow, 2005; Cronbach's alpha = 0.89) and the 8-item adapted Treatment Self-Regulation Questionnaire (TSRQ; Williams, Freedman, & Deci, 1998; Cronbach's alpha = 0.80). The Patient Activation Measure (PAM-13; Hibbard, Mahoney, Stock, & Tusler, 2005; Cronbach's alpha = 0.89) was the dependent measure. Autonomy Support was the only significant predictor, explaining 19.1% of the variance in patient activation. Five of the six autonomy support survey items regressed on activation were significant, illustrating autonomy-supportive communication styles that contribute to activation.
These results suggest theory-based patient, provider, and system-level interventions to enhance self-management in primary care and in educational and professional development curricula. Future investigations should examine additional sources of autonomy support and different measurements of autonomous motivation to improve the predictive power of the model. Longitudinal analyses should be conducted to further understand the relationship of autonomy support and autonomous motivation with patient activation, based on the premise that patient activation will sustain behavior change.
Abstract:
Many tissue level models of neural networks are written in the language of nonlinear integro-differential equations. Analytical solutions have only been obtained for the special case that the nonlinearity is a Heaviside function. Thus the pursuit of even approximate solutions to such models is of interest to the broad mathematical neuroscience community. Here we develop one such scheme, for stationary and travelling wave solutions, that can deal with a certain class of smoothed Heaviside functions. The distribution that smoothes the Heaviside is viewed as a fundamental object, and all expressions describing the scheme are constructed in terms of integrals over this distribution. The comparison of our scheme and results from direct numerical simulations is used to highlight the very good levels of approximation that can be achieved by iterating the process only a small number of times.
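The flavor of such a scheme can be shown numerically. The sketch below is our own generic illustration, not the paper's distribution-based construction: it iterates the stationary Amari-type neural field equation u(x) = integral of w(x - y) f(u(y)) dy on a grid, with a steep sigmoid f standing in for a smoothed Heaviside and an assumed lateral-inhibition ("wizard hat") kernel, and converges to a localized bump from a localized seed.

```python
# Fixed-point iteration for a stationary neural field bump (illustrative
# kernel and firing-rate parameters, not taken from the paper).
import math

def kernel(x):
    """Wizard-hat connectivity: local excitation, lateral inhibition."""
    return (1.0 - abs(x)) * math.exp(-abs(x))

def firing(u, beta=20.0, h=0.3):
    """Steep sigmoid as a smoothed Heaviside with threshold h."""
    return 1.0 / (1.0 + math.exp(-beta * (u - h)))

dx = 0.1
xs = [-8.0 + dx * i for i in range(161)]
u = [1.0 if abs(x) < 2.0 else 0.0 for x in xs]   # localized seed activity

for _ in range(40):                              # u <- W * f(u)
    rates = [firing(v) for v in u]
    u = [dx * sum(kernel(x - y) * r for y, r in zip(xs, rates))
         for x in xs]

peak = max(u)                                    # bump amplitude
edge = u[0]                                      # activity far from the bump
```

The iterates contract onto a stationary bump: activity persists above threshold near the center while the field stays near zero far away, mirroring the kind of localized solution the analytical scheme targets.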