49 results for Computing and software systems
Abstract:
Both obesity and asthma are highly prevalent, complex diseases modified by multiple factors. Genetic, developmental, lung mechanical, immunological and behavioural factors have all been suggested as playing a causal role linking the two entities; however, their complex mechanistic interactions are still poorly understood, and evidence of causality in children remains scant. Equally lacking is evidence of effective treatment strategies, despite the fact that imbalances at vulnerable phases in childhood can impact long-term health. This review is targeted both at clinicians frequently faced with the dilemma of how to investigate and treat the obese asthmatic child, and at researchers interested in the topic. Highlighting the breadth of the spectrum of factors involved, this review collates evidence regarding the investigation and treatment of asthma in obese children, particularly in comparison with current approaches in 'difficult-to-treat' childhood asthma. Finally, the authors propose hypotheses for future research from a systems-based perspective.
Abstract:
Indoor positioning has attracted considerable attention for decades due to increasing demand for location-based services. Although numerous methods have been proposed, it remains challenging to find a convincing solution that combines high positioning accuracy with ease of deployment. Radio-based indoor positioning has emerged as a dominant approach because of its ubiquity, especially for WiFi. RSSI (Received Signal Strength Indicator) has been investigated for indoor positioning for decades; however, it is strongly affected by multipath propagation, and hence fingerprinting has become the most commonly used RSSI-based method. The drawback of fingerprinting is that it requires intensive labour to calibrate the radio map before the system can be used, which makes deployment very time consuming. Using time information instead is challenged by the need for time synchronization among anchor nodes and for high timestamp accuracy. Besides radio-based methods, intensive research has been conducted into inertial sensors for indoor tracking, driven by the rapid development of smartphones; however, such methods are prone to accumulated errors and may not be available for some applications, such as passive positioning. This thesis focuses on network-based indoor positioning and tracking systems, mainly for passive positioning, which does not require the participation of targets in the positioning process. To achieve high positioning accuracy, we exploit information obtained from physical-layer processing of radio signals, such as timestamps and channel information. The contributions of this thesis fall into two parts: time-based positioning and channel-information-based positioning. First, for time-based indoor positioning (especially with narrow-band signals), we address the challenges of compensating for synchronization offsets among anchor nodes, designing timestamps with high resolution, and developing accurate positioning methods. Second, we develop range-based positioning methods that use channel information to passively locate and track WiFi targets; range-based methods require much less calibration effort than fingerprinting and are therefore easier to deploy. By designing novel enhanced methods for both ranging and positioning (including trilateration for stationary targets and a particle filter for mobile targets), we are able to locate WiFi targets with high accuracy relying solely on radio signals, and our proposed enhanced particle filter significantly outperforms other commonly used range-based positioning algorithms, e.g., a traditional particle filter, an extended Kalman filter and trilateration. In addition to using radio signals for passive positioning, we propose a second enhanced particle filter for active positioning that fuses inertial-sensor and channel information to track indoor targets, achieving higher tracking accuracy than methods relying solely on either radio signals or inertial sensors.
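The range-based pipeline described above can be illustrated with a minimal sketch: ranges are first estimated from signal measurements (here a generic log-distance path-loss model stands in for the thesis's enhanced channel-information ranging), and a position is then solved by linearized least-squares trilateration. All anchor positions, parameter values and function names below are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

# Anchor node positions (metres); illustrative values.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

def rssi_to_range(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: a common stand-in for
    channel-information-based ranging (parameters are assumptions)."""
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(anchors, ranges):
    """Linearized least-squares trilateration: subtracting the first
    anchor's range equation from the others removes the quadratic
    terms, leaving a linear system in (x, y)."""
    x0, y0 = anchors[0]
    r0 = ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2.0 * (xi - x0), 2.0 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

rssi = np.array([-52.0, -60.0, -58.0, -66.0])  # example measurements
print(trilaterate(anchors, rssi_to_range(rssi)))  # estimated (x, y)
```

A particle filter, as used in the thesis for mobile targets, would replace the one-shot least-squares solve with a recursive estimate that also exploits motion continuity between measurements.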
Abstract:
Percentile shares provide an intuitive and easy-to-understand way of analyzing income or wealth distributions. A celebrated example is the top income shares featured in the works of Thomas Piketty and colleagues. Moreover, series of percentile shares, defined as differences between Lorenz ordinates, can be used to visualize whole distributions or changes in distributions. In this talk, I present a new command called pshare that computes and graphs percentile shares (or changes in percentile shares) from individual-level data. The command also provides confidence intervals and supports survey estimation.
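The definition the abstract relies on, percentile shares as differences between Lorenz ordinates, is easy to state concretely. The sketch below (in Python rather than Stata, and independent of the pshare command itself) computes decile income shares from individual-level data under that definition, using simple unweighted estimates on toy data.

```python
import numpy as np

def lorenz_ordinate(income, p):
    """L(p): cumulative share of total income held by the poorest
    fraction p of the population (simple unweighted estimate)."""
    x = np.sort(income)
    k = int(np.floor(p * len(x)))
    return x[:k].sum() / x.sum()

def percentile_shares(income, cuts):
    """Share of total income in each group between consecutive
    percentile cut-offs, i.e. L(p_hi) - L(p_lo)."""
    return [lorenz_ordinate(income, hi) - lorenz_ordinate(income, lo)
            for lo, hi in zip(cuts[:-1], cuts[1:])]

rng = np.random.default_rng(0)
income = rng.lognormal(mean=10, sigma=1, size=10_000)  # toy income data
deciles = np.linspace(0, 1, 11)
for lo, share in zip(deciles[:-1], percentile_shares(income, deciles)):
    print(f"{lo:.0%}-{lo + 0.1:.0%}: {share:.1%}")
```

The Stata command additionally handles sampling weights, survey estimation and confidence intervals, none of which this toy sketch attempts.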
Abstract:
Software systems need to change continuously to remain useful. Change appears in several forms and needs to be accommodated at different levels. We propose ChangeBoxes as a mechanism to encapsulate, manage, analyze and exploit changes to software systems. Our thesis is that only by making change explicit and manipulable can we enable software developers to manage software change more effectively than is currently possible. Furthermore, we argue that new insights are needed into assessing the impact of changes, along with new tools and techniques for managing them. We report on the results of some initial prototyping efforts, and we outline a series of research activities that we have started in order to explore the potential of ChangeBoxes.
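As a rough illustration of what "making change explicit and manipulable" could look like, the sketch below models a change as a first-class object that can be applied, undone and inspected after the fact. This is an assumption-laden analogue in Python, not the ChangeBoxes design itself; all names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ChangeBox:
    """A first-class, inspectable unit of change (illustrative analogue
    of the paper's idea, not its actual design)."""
    description: str
    apply: Callable[[dict], None]   # forward transformation
    revert: Callable[[dict], None]  # inverse transformation

class System:
    def __init__(self):
        self.state: dict = {}
        self.history: list[ChangeBox] = []

    def commit(self, change: ChangeBox):
        change.apply(self.state)
        self.history.append(change)  # the change stays manipulable

    def rollback(self):
        self.history.pop().revert(self.state)

system = System()
system.commit(ChangeBox(
    description="enable verbose logging",
    apply=lambda s: s.__setitem__("logging", "verbose"),
    revert=lambda s: s.pop("logging", None),
))
print([c.description for c in system.history])  # analyze recorded changes
system.rollback()
```

Because each change carries its own description and inverse, tooling can analyze, replay or selectively undo changes, which is the kind of exploitation the abstract argues for.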
Abstract:
The increasing amount of data available about software systems poses new challenges for re- and reverse-engineering research, as the proposed approaches need to scale. In this context, concerns about meta-modeling and analysis techniques need to be augmented by technical concerns about how to reuse and build upon the efforts of previous research. Moose is an extensive reverse-engineering infrastructure that has evolved over more than 10 years and promotes the reuse of engineering efforts in research. Moose accommodates various types of data modeled in the FAMIX family of meta-models. The goal of this half-day workshop is to strengthen the community of researchers and practitioners working in re- and reverse engineering by providing a forum for building future research on Moose and FAMIX as a shared infrastructure.
Abstract:
The goal of this roadmap paper is to summarize the state of the art and identify research challenges in developing, deploying and managing self-adaptive software systems. Instead of dealing with the wide range of topics associated with the field, we focus on four essential topics of self-adaptation: the design space for self-adaptive solutions, software engineering processes for self-adaptive systems, the move from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009; it covers a different set of topics and in part reflects on that earlier paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.
Abstract:
Cloud computing enables the provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints makes it challenging to maintain optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system (CMS), increasing the efficiency of resource allocation as well as of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud-infrastructure management operations. Second, we describe a multi-objective VM-allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications, and for using these relations to build scaling rules that a CMS can apply for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All of the presented research was implemented and tested using enterprise distributed applications.
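The kind of SLA-driven scaling rule such a CMS automates can be sketched very simply. The thresholds, metric names and rule shape below are assumptions for illustration; the dissertation's actual algorithms are optimisation-based rather than this fixed-threshold stand-in.

```python
from dataclasses import dataclass

@dataclass
class SLA:
    """Illustrative SLA: a latency bound plus a safety margin, so the
    system scales out before the bound is actually violated."""
    max_latency_ms: float
    margin: float = 0.8

def scaling_decision(observed_latency_ms: float, vm_count: int,
                     sla: SLA, min_vms: int = 1, max_vms: int = 20) -> int:
    """Return the new VM count under a simple threshold rule:
    scale out when latency nears the SLA bound, scale in when
    there is ample headroom."""
    if observed_latency_ms > sla.margin * sla.max_latency_ms:
        return min(vm_count + 1, max_vms)   # approaching violation
    if observed_latency_ms < 0.4 * sla.max_latency_ms:
        return max(vm_count - 1, min_vms)   # over-provisioned
    return vm_count

sla = SLA(max_latency_ms=200.0)
print(scaling_decision(170.0, vm_count=4, sla=sla))  # -> 5 (scale out)
```

Discovering relations between service-level performance indicators, as the third contribution describes, is what would let a CMS derive such rules automatically instead of having an operator hand-tune the thresholds.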
Abstract:
Software evolution research has focused mostly on analyzing the evolution of single software systems. However, it is rarely the case that a project exists standalone, independent of others. Rather, projects exist in parallel within larger contexts in companies, research groups or even open-source communities. We call these contexts software ecosystems, and in this paper we present The Small Project Observatory, a prototype tool which aims to support the analysis of project ecosystems through interactive visualization and exploration. We present a case study of exploring an ecosystem using our tool, we describe the architecture of the tool, and we distill the lessons learned during the tool-building experience.
Abstract:
The biggest challenge facing software developers today is how to gracefully evolve complex software systems in the face of changing requirements. We clearly need software systems to be more dynamic, compositional and model-centric, but instead we continue to build systems that are static, baroque and inflexible. How can we better build change-enabled systems in the future? To answer this question, we propose to look back to one of the most successful systems to support change, namely Smalltalk. We briefly introduce Smalltalk with a few simple examples, and draw some lessons for software evolution. Smalltalk's simplicity, its reflective design, and its highly dynamic nature all go a long way towards enabling change in Smalltalk applications. We then illustrate how these lessons work in practice by reviewing a number of research projects that support software evolution by exploiting Smalltalk's design. We conclude by summarizing open issues and challenges for change-enabled systems of the future.
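The paper's own examples are in Smalltalk, but the flavour of its argument, that a live, reflective system lets behaviour be changed while the system runs, can be approximated in Python as an analogue only. The class and method below are hypothetical and Smalltalk's reflective model is considerably richer than what this shows.

```python
# Smalltalk-style "change the running system" dynamism, approximated
# in Python: classes are ordinary, mutable objects at run time.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

# Add behaviour to a live class, much as one would in a Smalltalk image:
def deposit(self, amount):
    self.balance += amount
    return self.balance

Account.deposit = deposit  # existing instances pick up the new method

acct = Account()
print(acct.deposit(100))  # -> 100: the system changed without restarting
```

In Smalltalk this kind of modification is the normal mode of development inside a running image, which is precisely why the abstract holds it up as a lesson for change-enabled systems.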
Abstract:
We propose an innovative, integrated, cost-effective health system to combat major non-communicable diseases (NCDs), including cardiovascular, chronic respiratory, metabolic, rheumatologic and neurologic disorders and cancers, which together are the predominant health problem of the 21st century. This proposed holistic strategy involves comprehensive patient-centered integrated care and multi-scale, multi-modal and multi-level systems approaches to tackle NCDs as a common group of diseases. Rather than studying each disease individually, it will take into account their intertwined gene-environment, socio-economic interactions and co-morbidities that lead to individual-specific complex phenotypes. It will implement a road map for predictive, preventive, personalized and participatory (P4) medicine based on a robust and extensive knowledge management infrastructure that contains individual patient information. It will be supported by strategic partnerships involving all stakeholders, including general practitioners associated with patient-centered care. This systems medicine strategy, which will take a holistic approach to disease, is designed to allow the results to be used globally, taking into account the needs and specificities of local economies and health systems.
Abstract:
Resilience research has been applied to socioeconomic as well as agroecological studies over the last 20 years. It provides a conceptual and methodological approach for better understanding the interrelations between the performance of ecological and social systems. In the research area of Alto Beni, Bolivia, the production of cocoa (Theobroma cacao L.) is one of the main sources of income. Since the 1980s, farmers in the region have formed producers' associations to promote organic cocoa cultivation and obtain fair prices. In cooperation with the long-term system comparisons run by the Research Institute of Organic Agriculture (FiBL) in Alto Beni, aspects of the field trial are adapted for use in on-farm research: a comparison of soil fertility, biomass and crop diversity is combined with qualitative interviews and participatory observation methods. Fieldwork is carried out together with Bolivian students through the Swiss KFPE programme Echanges Universitaires. For the system comparisons, four different land-use types were classified according to their ecological complexity during a preliminary study in 2009: successional agroforestry systems, simple agroforestry systems (both organically managed and certified), traditional systems and conventional monocultures. The study focuses on the interrelations between different ways of cultivating cocoa, livelihoods and the related socio-cultural rationales behind them. This second aspect in particular is innovative, as it broadens the biophysical perspective into a more comprehensive socio-ecological evaluation, thereby increasing the relevance of the agronomic field studies for development policy and practice. Moreover, such a socio-ecological baseline makes it possible to assess the potential of organic agriculture for building resilience in the face of socio-environmental stress factors. Among other things, the results of the preliminary study illustrate local farmers' perceptions of climate change and its consequences for the different crop systems: all interviewees mentioned rising temperatures and/or an extended dry season, citing negative impacts more on their own working conditions than on their crops. This was particularly the case for conventional monocultures and for plots where slash-and-burn cultivation was practised, whereas for organic agroforestry systems the advantage of working in the shade was stressed, indicating that their relevance rises in the context of climate change.
Abstract:
The evolution of next-generation networks, especially wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of "all-IP" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, in which end-users expect services to be accessible anytime and anywhere. Service availability is also tied to the end-user device, where one of the major constraints is battery lifetime. It is therefore necessary to assess and minimize the energy consumed by end-user devices, given its significance for the user-perceived quality of cloud computing services. In this paper, an empirical methodology for measuring the energy consumption of network interfaces is proposed. Using this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results show the impact of accurate management of network-interface states and of application-level network design on energy consumption. Additionally, the outcomes can be used in further software-based models to optimize energy consumption and increase the Quality of Experience (QoE) perceived by end-users.
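A common way to turn such measurements into a software-usable model, in line with the paper's emphasis on network-interface state management, is a per-state power model: total energy is the sum, over interface power states, of each state's power draw times the time spent in it. The state names and power values below are illustrative placeholders, not the paper's measured results.

```python
# Per-state power model: E = sum over states of P_state * t_state.
# Power values (watts) are illustrative placeholders, not measurements.
POWER_W = {"sleep": 0.01, "idle": 0.05, "rx": 0.90, "tx": 1.30}

def interface_energy_joules(state_durations_s: dict[str, float]) -> float:
    """Energy consumed by a network interface given how long it spent
    in each power state (the kind of model that empirical measurements
    such as the paper's could parameterize)."""
    return sum(POWER_W[state] * t for state, t in state_durations_s.items())

# Example: a transfer with aggressive sleep between traffic bursts
trace = {"sleep": 50.0, "idle": 5.0, "rx": 3.0, "tx": 2.0}
print(f"{interface_energy_joules(trace):.2f} J")  # -> 6.05 J
```

Under such a model, application-level design choices that keep the interface in low-power states longer (batching traffic, avoiding chatty keep-alives) translate directly into the energy savings the paper attributes to accurate state management.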