2 results for Healthcare, Pervasive Mobile Computing, Wearable AR-Glasses, Context-Awareness, Google Android

at Duke University


Relevance:

40.00%

Publisher:

Abstract:

As the burden of non-communicable diseases increases worldwide, it is imperative that health systems adopt delivery approaches that will enable them to provide accessible, high-quality, and low-cost care to patients who need consistent management of their lifelong conditions. This is especially true in low- and middle-income country settings, such as India, where the disease burden is high and the health sector resources to address it are limited. The subscription-based, managed care model that SughaVazhvu Healthcare—a non-profit social enterprise operating in rural Thanjavur, Tamil Nadu—has deployed demonstrates potential for ensuring continuity of care among chronic care patients in resource-strained areas. However, its effectiveness and sustainability will depend on its ability to positively impact patient health status and patient satisfaction with the care management they are receiving. Therefore, this study is not only a program appraisal to aid operational quality improvement of the SughaVazhvu Healthcare model, but also an attempt to identify the factors that affect patient satisfaction among individuals with chronic conditions who are actively availing themselves of its services.

Relevance:

30.00%

Publisher:

Abstract:

Distributed computing frameworks belong to a class of programming models that let developers launch workloads on large clusters of machines. Due to the dramatic increase in the volume of data gathered by ubiquitous computing devices, data-analytic workloads have become a common case among distributed computing applications, making data science an entire field of computer science. We argue that a data scientist's concerns lie in three main components: a dataset, a sequence of operations they wish to apply to this dataset, and some constraints related to their work (performance, QoS, budget, etc.). However, without domain expertise it is extremely difficult to perform data science: one needs to select the right amount and type of resources, pick a framework, and configure it. Moreover, users often run their applications in shared environments, ruled by schedulers that expect them to specify their resource needs precisely. Inherent to the distributed and concurrent nature of these frameworks, monitoring and profiling are hard, high-dimensional problems that prevent users from making the right configuration choices and from determining the right amount of resources they need. Paradoxically, the system gathers a large amount of monitoring data at runtime, which remains unused.
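Spark does in fact expose much of this runtime monitoring data, for example through the driver's REST API (`/api/v1/applications/<app-id>/executors`). As a minimal sketch of how such data could be turned into workload features rather than left unused, the snippet below aggregates a per-executor payload; the field names follow Spark's executor summary, but the payload itself is fabricated for illustration:

```python
# Sketch: summarize per-executor monitoring data into workload features.
# The JSON shape mirrors Spark's /api/v1/applications/<app-id>/executors
# endpoint, but this payload is fabricated for illustration.
import json

payload = json.loads("""[
  {"id": "1", "totalInputBytes": 5368709120, "totalShuffleRead": 1073741824,
   "totalGCTime": 1200, "totalDuration": 60000},
  {"id": "2", "totalInputBytes": 4294967296, "totalShuffleRead": 2147483648,
   "totalGCTime": 4500, "totalDuration": 58000}
]""")

def workload_features(executors):
    """Aggregate raw executor metrics into a small feature vector."""
    input_gb = sum(e["totalInputBytes"] for e in executors) / 2**30
    shuffle_gb = sum(e["totalShuffleRead"] for e in executors) / 2**30
    # The fraction of executor time spent in GC hints at memory pressure.
    gc_ratio = (sum(e["totalGCTime"] for e in executors)
                / sum(e["totalDuration"] for e in executors))
    return {"input_gb": input_gb, "shuffle_gb": shuffle_gb, "gc_ratio": gc_ratio}

print(workload_features(payload))
```

Feature vectors like this one are what an adaptive system could learn from across runs, instead of discarding the metrics when the application finishes.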

In the ideal abstraction we envision for data scientists, the system is adaptive, able to exploit monitoring data to learn about workloads and to turn user requests into a tailored execution context. In this work, we study the different techniques that have been used to take steps toward such system awareness, and we explore a new way to do so by applying machine learning techniques to recommend a specific subset of system configurations for Apache Spark applications. Furthermore, we present an in-depth study of Apache Spark executor configuration, which highlights the complexity of choosing the best one for a given workload.
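The recommendation idea can be illustrated with a minimal nearest-neighbour sketch: match a new workload's monitoring features against past runs and copy the executor configuration that worked best for the closest one. The features, configurations, and history below are entirely hypothetical; the abstract does not specify which learning technique the thesis actually uses.

```python
# Minimal sketch of configuration recommendation for Spark workloads.
# Hypothetical history: each past run is described by monitoring features
# (input GB, shuffle GB) and the executor configuration (cores, memory GB)
# that performed best for it.
import math

history = [
    ((10.0, 1.0), (2, 4)),     # small job: few cores, little memory
    ((50.0, 8.0), (4, 8)),
    ((200.0, 40.0), (4, 16)),  # shuffle-heavy job: more memory
    ((500.0, 60.0), (8, 16)),
]

def recommend(features):
    """Return the executor configuration of the nearest past run."""
    closest = min(history, key=lambda run: math.dist(run[0], features))
    return closest[1]

cores, memory_gb = recommend((180.0, 35.0))
print(f"suggested: --executor-cores {cores} --executor-memory {memory_gb}g")
```

Even this toy version hints at the difficulty the abstract describes: the "best" configuration depends on interacting dimensions (cores, memory, number of executors), so a real recommender needs far richer features and history than a two-dimensional lookup.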