881 results for epidemic routing
Abstract:
The Border Gateway Protocol (BGP) is the current inter-domain routing protocol used to exchange reachability information between Autonomous Systems (ASes) in the Internet. BGP supports policy-based routing, which allows each AS to independently define local policies governing which routes it accepts from and advertises to other networks, as well as which route it prefers when more than one becomes available. However, independently chosen local policies may cause global conflicts, which result in protocol divergence. In this paper, we propose a new algorithm, called the Adaptive Policy Management Scheme (APMS), to resolve policy conflicts in a distributed manner. Akin to distributed feedback control systems, each AS independently classifies the state of the network as either conflict-free or potentially conflicting by observing only its local history (namely, route flaps). Based on the degree of measured conflicts, each AS dynamically adjusts its own path preferences, increasing its preference for observably stable paths over flapping paths. APMS also includes a mechanism to distinguish route flaps due to topology changes, so as not to confuse them with those due to policy conflicts. We present a correctness and convergence analysis of APMS based on the sub-stability property of chosen paths. APMS is implemented in the SSF network simulator, and simulation results for different performance metrics are presented. The metrics capture the dynamic performance (in terms of instantaneous throughput, delay, etc.) of APMS and other competing solutions, thus exposing often neglected aspects of performance.
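The feedback loop described above lends itself to a compact sketch. The following is a hypothetical illustration of the idea only; the class, threshold, and preference decrement are our assumptions, not the APMS specification:

```python
from collections import defaultdict

class APMSNode:
    """Sketch of one AS running an APMS-style feedback loop: route flaps
    observed locally are treated as evidence of a policy conflict, and
    path preferences drift toward observably stable paths."""

    def __init__(self, flap_threshold=3):
        self.flap_threshold = flap_threshold   # flaps tolerated per window
        self.flap_count = defaultdict(int)     # path -> flaps this window
        self.preference = defaultdict(float)   # path -> local preference

    def observe_update(self, path, withdrawn):
        # A withdrawal of a previously announced path counts as a flap.
        if withdrawn:
            self.flap_count[path] += 1

    def adjust_preferences(self):
        # Feedback step: demote paths that flapped beyond the threshold.
        for path, flaps in self.flap_count.items():
            if flaps >= self.flap_threshold:
                self.preference[path] -= 1.0
        self.flap_count.clear()  # start a fresh observation window

    def best_path(self, candidates):
        # Route selection: prefer the path with the highest local preference.
        return max(candidates, key=lambda p: self.preference[p])
```

A real deployment would also need the topology-change filter the paper describes, so that flaps caused by link failures do not trigger preference changes.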
Abstract:
In a typical overlay network for routing or content sharing, each node must select a fixed number of immediate overlay neighbors for routing traffic or content queries. A selfish node entering such a network would select neighbors so as to minimize the weighted sum of expected access costs to all its destinations. Previous work on selfish neighbor selection has built intuition with simple models where edges are undirected, access costs are modeled by hop-counts, and nodes have potentially unbounded degrees. However, in practice, important constraints not captured by these models lead to richer games with substantively and fundamentally different outcomes. Our work models neighbor selection as a game involving directed links, constraints on the number of allowed neighbors, and costs reflecting both network latency and node preference. We express a node's "best response" wiring strategy as a k-median problem on asymmetric distances, and use this formulation to obtain pure Nash equilibria. We experimentally examine the properties of such stable wirings on synthetic topologies, as well as on real topologies and maps constructed from PlanetLab and AS-level Internet measurements. Our results indicate that selfish nodes can reap substantial performance benefits when connecting to overlay networks composed of non-selfish nodes. On the other hand, in overlays that are dominated by selfish nodes, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naive wiring strategies.
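The "best response" computation can be made concrete with a brute-force sketch. The paper solves it as a k-median instance on asymmetric distances; the exhaustive search below is an illustrative stand-in that is only feasible for tiny networks:

```python
from itertools import combinations

def best_response_wiring(u, dist, k):
    """Pick k outgoing neighbors for node u minimizing the total access
    cost to all other nodes, where reaching v through neighbor w costs
    dist[u][w] + dist[w][v] and dist may be asymmetric (directed links).
    Brute force over all neighbor sets; illustrative only."""
    others = [v for v in range(len(dist)) if v != u]
    best_cost, best_set = float("inf"), None
    for neighbors in combinations(others, k):
        cost = sum(min(dist[u][w] + dist[w][v] for w in neighbors)
                   for v in others)
        if cost < best_cost:
            best_cost, best_set = cost, set(neighbors)
    return best_set, best_cost
```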
Abstract:
Overlay networks have become popular in recent times for content distribution and end-system multicasting of media streams. In the latter case, the motivation is based on the lack of widespread deployment of IP multicast and the ability to perform end-host processing. However, constructing routes between various end-hosts, so that data can be streamed from content publishers to many thousands of subscribers, each having their own QoS constraints, is still a challenging problem. First, routes between end-hosts using trees built on top of overlay networks can increase stress on the underlying physical network, due to multiple instances of the same data traversing a given physical link. Second, because overlay routes between end-hosts may traverse physical network links more than once, they increase the end-to-end latency compared to IP-level routing. Third, algorithms for constructing efficient, large-scale trees that reduce link stress and latency are typically more complex. This paper therefore compares various methods of constructing multicast trees between end-systems that vary in terms of implementation costs and their ability to support per-subscriber QoS constraints. We describe several algorithms that make trade-offs between algorithmic complexity, physical link stress and latency. While no algorithm is best in all three respects, we show how it is possible to efficiently build trees for several thousand subscribers with latencies within a factor of two of the optimal, and link stresses comparable to, or better than, existing technologies.
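One way to see the stress/latency trade-off is a degree-capped greedy tree, sketched below under our own simplifying assumptions (complete latency knowledge, fanout as a proxy for link stress); it is not one of the paper's specific algorithms:

```python
import heapq

def degree_bounded_tree(latency, root, max_degree):
    """Grow a multicast tree Prim-style over end-host latencies, but cap
    each node's fanout so no end-host (and hence no nearby physical link)
    is overloaded. Lower caps reduce link stress at the cost of deeper
    trees and higher end-to-end latency. Returns child -> parent."""
    n = len(latency)
    parent = {root: None}
    degree = {v: 0 for v in range(n)}
    heap = [(latency[root][v], root, v) for v in range(n) if v != root]
    heapq.heapify(heap)
    while heap and len(parent) < n:
        d, p, v = heapq.heappop(heap)
        if v in parent or degree[p] >= max_degree:
            continue  # already attached, or parent's fanout is full
        parent[v] = p
        degree[p] += 1
        for w in range(n):
            if w not in parent:
                heapq.heappush(heap, (latency[v][w], v, w))
    return parent
```

With `max_degree=1` the tree degenerates into a chain (minimal stress, maximal depth); raising the cap moves it toward a shallow star.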
Abstract:
Contemporary Irish data on the prevalence of major cardiovascular disease (CVD) risk factors are sparse. The primary aims of this study were (1) to estimate the prevalence of major cardiovascular disease risk factors, including Type 2 Diabetes Mellitus, in the general population of men and women between the ages of 50 and 69 years; and (2) to estimate the proportion of individuals in this age group at high absolute risk of cardiovascular disease events on the basis of pre-existing cardiovascular disease or as defined by the Framingham equation. Participants were drawn from the practice lists of 17 general practices in Cork and Kerry using stratified random sampling. A total of 1018 people attended for screening (490 men, 48%) from 1473 who were invited, a response rate of 69.1%. Cardiovascular disease risk factors and glucose intolerance are common in the population of men and women aged between 50 and 69 years. Almost half the participants were overweight and a further quarter met current international criteria for obesity, one of the highest recorded prevalence rates for obesity in a European population sample. Forty per cent of the population reported minimal levels of physical activity and 19% were current cigarette smokers. Approximately half the sample had blood pressure readings consistent with international criteria for the diagnosis of hypertension, but only 38% of these individuals were known to be hypertensive. Eighty per cent of the population sample had a cholesterol concentration in excess of 5 mmol/l. Almost 4% of the population had Type 2 Diabetes Mellitus, of whom 30% were previously undiagnosed. A total of 137 participants (13.5%) had a history of, or ECG findings consistent with, established cardiovascular disease.
Of the remaining 881 individuals in the primary prevention population, a total of 20 high-risk individuals (19 male) had a risk of a coronary heart disease event of 30% or more over ten years according to the Framingham risk equation, giving an overall population prevalence of 2.0% (95% CI 1.3-3.0). At a risk threshold of 20% or more over ten years, an additional 91 individuals (8.9%) were identified. Thus a total of 24.4% of the population were at high risk, either through pre-existing CVD (13.5%) or an estimated 10-year risk exceeding 20% according to the Framingham risk equation (10.9%). A substantial proportion of middle-aged men are therefore at high risk of CVD. The findings emphasise the scale of the CVD epidemic in Ireland, the need for ongoing monitoring of risk factors at the population level, and the need to develop preventive strategies at both the clinical and societal levels.
Abstract:
In this research we focus on energy-aware topology management for the Tyndall 25mm and 10mm nodes, to extend sensor network lifespan and optimise node power consumption. The two-tiered Tyndall Heterogeneous Automated Wireless Sensors (THAWS) tool is used to quickly create and configure application-specific sensor networks. To this end, we propose to implement a distributed route discovery algorithm and a practical energy-aware reaction model on the 25mm nodes. Triggered by energy-warning events, the miniaturised Tyndall 10mm data collector nodes adaptively and periodically change their association to 25mm base station nodes, while the 25mm nodes also change the interconnections between themselves, resulting in a reconfiguration of the 25mm node tier topology. The distributed routing protocol uses combined weight functions to balance the sensor network traffic. A system-level simulation is used to quantify the benefit of the route management framework, compared with other state-of-the-art approaches, in terms of system power saving.
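A combined weight function of this general shape can be sketched as follows; the coefficients and inputs are illustrative assumptions, not the actual THAWS weights:

```python
def link_weight(hops, residual_energy, traffic_load,
                alpha=0.5, beta=0.3, gamma=0.2):
    """Illustrative combined weight: penalise long paths, low residual
    energy, and already-loaded base stations. residual_energy and
    traffic_load are normalised to [0, 1]; coefficients are made up
    for this sketch."""
    return alpha * hops + beta * (1.0 - residual_energy) + gamma * traffic_load

def choose_base_station(candidates):
    """A 10mm collector node re-associates with the 25mm base station of
    minimum combined weight when an energy-warning event fires.
    candidates: list of (name, (hops, residual_energy, traffic_load))."""
    return min(candidates, key=lambda c: link_weight(*c[1]))
```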
Abstract:
Schizophrenia represents one of the world’s most devastating illnesses due to its often lifelong course and debilitating nature. The treatment of schizophrenia has vastly improved over recent decades with the discovery of several antipsychotic compounds; however, these drugs are not without adverse effects that must be addressed to maximize their therapeutic value. Newer, atypical antipsychotics are associated with a compilation of serious metabolic side effects including weight gain, insulin resistance, fat deposition, glucose dysregulation and ensuing co-morbidities such as type II diabetes mellitus. The mechanisms underlying these side effects remain to be fully elucidated and adequate interventions are lacking. Further understanding of the factors that contribute to these side effects is therefore required in order to develop effective adjunctive therapies and to potentially design future antipsychotic drugs with reduced impact on the metabolic health of patients. We investigated whether the gut microbiota represents a novel mechanism contributing to the metabolic dysfunction associated with atypical antipsychotics. The gut microbiota comprises the bacteria that exist symbiotically within the gastrointestinal tract, and has been shown in recent years to be involved in several aspects of energy balance and metabolism. We have demonstrated that administration of certain antipsychotics in the rat results in an altered microbiota profile and, moreover, that the microbiota is required for the full scale of metabolic dysfunction to occur. We have further shown that specific antibiotics can attenuate certain aspects of olanzapine- and risperidone-induced metabolic dysfunction, in particular fat deposition and adipose tissue inflammation. Mechanisms underlying this novel link appear to involve energy utilization via the expression of lipogenic genes as well as reduced inflammatory tone.
Taken together, these data indicate that the gut microbiota is an important factor in the myriad metabolic complications associated with antipsychotic therapy. Furthermore, these data support the future investigation of microbial-based therapeutics, not only for antipsychotic-induced weight gain but also for tackling the global obesity epidemic.
Abstract:
A wireless sensor network can become partitioned due to node failure, requiring the deployment of additional relay nodes in order to restore network connectivity. This introduces an optimisation problem involving a tradeoff between the number of additional nodes that are required and the costs of moving through the sensor field for the purpose of node placement. This tradeoff is application-dependent, influenced for example by the relative urgency of network restoration. In addition, minimising the number of relay nodes might lead to long routing paths to the sink, which may cause problems of data latency. This latency is critical in wireless sensor network applications such as battlefield surveillance, intrusion detection, disaster rescue and highway traffic coordination, where real-time constraints must not be violated. Therefore, we also consider the problem of deploying multiple sinks in order to improve network performance. Previous research has considered only parts of this problem in isolation, and has not properly considered the problems of moving through a constrained environment, of discovering changes to that environment during the repair, or of network quality after the restoration. In this thesis, we first consider a base problem in which we assume the exploration tasks have already been completed, so our aim is to optimise the use of resources in the static, fully observed problem. In the real world, we would not know the radio and physical environments after damage, which creates a dynamic problem in which damage must be discovered. We therefore extend to the dynamic problem, in which network repair involves both exploration and restoration. We then add a hop-count constraint for network quality, requiring that the desired locations can talk to a sink within a hop-count limit after the network is restored.
For each new variant of the network repair problem, we propose different solutions (heuristics and/or complete algorithms) which prioritise different objectives. We evaluate our solutions in simulation, assessing the quality of solutions (node cost, movement cost, computation time, and total restoration time) while varying the problem types and the capability of the agent that makes the repair. We show that the relative importance of the objectives influences the choice of algorithm, and that different movement speeds of the repairing agent have a significant impact on performance and must be taken into account when selecting an algorithm. In particular, the node-based approaches are best in terms of node cost, and the path-based approaches are best in terms of movement cost. For total restoration time, the node-based approaches are best with a fast-moving agent, while the path-based approaches are best with a slow-moving agent. For an agent of intermediate speed, the total restoration times of the node-based and path-based approaches are nearly equal.
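The node-cost side of the tradeoff, and how agent speed shifts the balance, can be illustrated with a simplified free-space sketch (straight-line deployment and uniform radio range are our assumptions, not the thesis model):

```python
import math

def relays_needed(p, q, radio_range):
    """Node cost of reconnecting two partition endpoints p and q in free
    space: relays placed evenly along the straight line, one hop of at
    most radio_range between consecutive nodes."""
    d = math.dist(p, q)
    return max(0, math.ceil(d / radio_range) - 1)

def restoration_cost(p, q, radio_range, agent_speed, node_cost=1.0):
    """Total-restoration-time flavoured objective: deploying relays
    requires travelling the path, so a slow agent pushes the best
    trade-off toward plans with less movement, a fast one toward
    plans that save nodes."""
    n = relays_needed(p, q, radio_range)
    travel_time = math.dist(p, q) / agent_speed
    return n * node_cost + travel_time
```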
Abstract:
My original contribution to knowledge is the creation of a WSN system that further improves the functionality of existing technology, whilst achieving improved power consumption and reliability. This thesis concerns the development of industrially applicable wireless sensor networks that are low-power, reliable and latency-aware. This work aims to improve upon the state of the art in networking protocols for low-rate multi-hop wireless sensor networks. Presented is an application-driven co-design approach to the development of such a system. Starting with the physical layer, hardware was designed to meet industry-specified requirements. The end system required further investigation of communications protocols that could achieve the derived application-level system performance specifications. A CSMA/TDMA hybrid MAC protocol was developed, leveraging numerous techniques from the literature as well as novel optimisations. It extends the current art in terms of power consumption for radio duty-cycled applications, and reliability in dense wireless sensor networks, whilst respecting latency bounds. Specifically, it provides 100% packet delivery for 11 concurrent senders transmitting towards a single radio duty-cycled sink node. This represents an order-of-magnitude improvement over the comparable art, considering MAC-only mechanisms. A novel latency-aware routing protocol was developed to exploit the developed hardware and MAC protocol. It is based on a new weighted objective function with multiple fail-safe mechanisms to ensure extremely high reliability and robustness. The system was empirically evaluated on two hardware platforms: the application-specific custom 868 MHz node and the de facto community-standard TelosB. Extensive empirical comparative performance analyses were conducted against the relevant art to demonstrate the advances made.
The resultant system is capable of exceeding 10-year battery life, and exhibits reliability performance in excess of 99.9%.
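The frame structure of such a CSMA/TDMA hybrid can be sketched as follows; the slot layout and round-robin assignment are our illustrative assumptions, not the protocol's actual design:

```python
def hybrid_frame(senders, frame_slots, contention_slots=2):
    """Sketch of a CSMA/TDMA hybrid frame: a few contention (CSMA) slots
    let new senders register, and the remaining slots are dedicated TDMA
    slots assigned round-robin so registered senders never collide at the
    duty-cycled sink. Returns a list of (mode, owner) pairs."""
    schedule = [("csma", None)] * contention_slots
    for i in range(frame_slots - contention_slots):
        owner = senders[i % len(senders)] if senders else None
        schedule.append(("tdma", owner))
    return schedule
```

Dedicated slots are what makes 100% delivery from many concurrent senders plausible, since collisions are confined to the small contention window.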
Abstract:
Background: Childhood obesity is a global epidemic posing a significant threat to the health and wellbeing of children. To reverse this epidemic, it is essential that we gain a deeper understanding of the complex array of driving factors at an individual, family and wider ecological level. Using a social-ecological framework, this thesis investigates the direction, magnitude and contribution of risk factors for childhood overweight and obesity at multiple levels of influence, with a particular focus on diet and physical activity. Methods: A systematic review was conducted to describe recent trends (2002 to 2012) in childhood overweight and obesity prevalence in school children from the Republic of Ireland. Two datasets (the Cork Children’s Lifestyle [CCLaS] Study and the Growing Up in Ireland [GUI] Study) were used to explore determinants of childhood overweight and obesity. Individual lifestyle factors examined were diet, physical activity and sedentary behaviour. The determinants of physical activity were also explored. Family factors examined were parental weight status and household socio-economic status. The impact of food access in the local area on diet quality and body mass index (BMI) was investigated as an environmental level risk factor. Results: Between 2002 and 2012, the prevalence of childhood overweight and obesity in Ireland remained stable. There was some evidence to suggest that childhood obesity rates may have decreased slightly, though one in four Irish children remained either overweight or obese. In the CCLaS study, overweight and obese children consumed more unhealthy foods than normal weight children. A diet quality score was constructed based on a previously validated adult diet score. Each one unit increase in diet quality was significantly associated with a decreased risk of childhood overweight and obesity.
Individual level factors (including gender, being a member of a sports team, and weight status) were more strongly associated with physical activity levels than family or environmental factors. Overweight and obese children were more sedentary and less active than normal weight children. There was a dose-response relationship between time spent at moderate to vigorous physical activity (MVPA) and the risk of childhood obesity, independent of sedentary time. In contrast, total sedentary time was not associated with the risk of childhood obesity independent of MVPA, though screen time was associated with childhood overweight and obesity. In the GUI Study, only one in five children had two normal-weight parents (or one normal-weight parent in the case of single parent families). Having overweight and obese parents was a significant risk factor for overweight and obesity regardless of the socio-economic characteristics of the household. Family income was not associated with the odds of childhood obesity, but social class and parental education were important risk factors. Access to food stores in the local environment did not impact the dietary quality or BMI of Irish children. However, there was some evidence to suggest that the economic resources of the family influenced diet and BMI. Discussion: Though childhood overweight and obesity rates appear to have stabilised over the previous decade, prevalence rates are unacceptably high. As expected, overweight and obesity were associated with a high energy intake and poor dietary quality. The findings also highlight strong associations between physical inactivity and the risk of overweight and obesity, with effect sizes greater than those typically found in adults. Important family level determinants of childhood overweight and obesity were also identified. The findings highlight the need for a multifaceted approach, targeting a range of modifiable determinants to tackle the problem.
In particular, policies and interventions at the shared family environment or community level may be an effective means of tackling this current epidemic.
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of data transiting that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With the above formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). In one application of our mathematical model based on vector fields, we offer a scheme for energy-efficient routing. Our routing scheme is based on raising the permittivity coefficient in the places of the network where nodes have high residual energy, and lowering it where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest path and weighted shortest path schemes. Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case of multiple destinations.
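A discrete analogue of the permittivity idea can be sketched with a weighted shortest-path search, where the cost of entering a node is the reciprocal of its residual energy (our simplification of the continuous vector-field formulation, not the paper's PDE machinery):

```python
import heapq

def energy_aware_route(neighbors, energy, src, dst):
    """Dijkstra with per-node cost 1/energy[v]: nodes with high residual
    energy behave like high-permittivity regions that attract routes,
    and depleted nodes are avoided. neighbors maps node -> adjacent nodes,
    energy maps node -> residual energy in (0, 1]."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in neighbors[u]:
            nd = d + 1.0 / energy[v]   # low energy = low permittivity = costly
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```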
In the case of multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case is how to define the regions of attraction of the destinations and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that in the optimal assignment of the communication load of the network to the destinations, the value of that potential field should be equal at the locations of all the destinations. Another application of our vector field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments. In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates, and suggest two methods for determining the values of these quantities.
The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets sent at the start of the TCP 3-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to find suitable signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods.
As a result of this orthogonality, performance does not degrade due to cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
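The orthogonal-signature idea borrowed from CDMA can be sketched directly. Walsh-Hadamard codes are the standard CDMA construction, and the correlation step shows why simultaneous tests at different routers do not interfere; the functions below are our illustration, not the CAPM implementation:

```python
def walsh_codes(order):
    """Generate 2**order orthogonal Walsh-Hadamard codes (entries ±1)
    by the recursive Hadamard doubling construction. Each router would
    perturb the aggregate with its own code as a drop-rate signature."""
    H = [[1]]
    for _ in range(order):
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

def recover_response(observed, signature):
    """Correlate the observed rate change with one router's signature.
    Orthogonality cancels the perturbations injected by other routers,
    leaving only this router's measurement."""
    return sum(o * s for o, s in zip(observed, signature)) / len(signature)
```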
Abstract:
Droplet-based digital microfluidics technology has now come of age, and software-controlled biochips for healthcare applications are starting to emerge. However, today's digital microfluidic biochips suffer from the drawback that there is no feedback to the control software from the underlying hardware platform. Due to the lack of precision inherent in biochemical experiments, errors are likely during droplet manipulation; error recovery based on the repetition of experiments leads to wastage of expensive reagents and hard-to-prepare samples. By exploiting recent advances in the integration of optical detectors (sensors) into a digital microfluidics biochip, we present a physical-aware system reconfiguration technique that uses sensor data at intermediate checkpoints to dynamically reconfigure the biochip. A cyberphysical resynthesis technique is used to recompute electrode-actuation sequences, thereby deriving new schedules, module placement, and droplet routing pathways, with minimum impact on the time-to-response. © 2012 IEEE.
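The checkpoint-driven control loop can be sketched as follows; the callbacks stand in for the real optical detector and resynthesis engine, and the structure is our assumption about the overall flow, not the paper's algorithm:

```python
def run_bioassay(steps, sensor_ok, resynthesize):
    """Cyberphysical loop sketch: after each step, an on-chip optical
    sensor checks the droplet at a checkpoint; on error, the remaining
    steps are resynthesized (new schedule, placement, and routes)
    instead of repeating the whole experiment. Returns the number of
    recoveries that were needed."""
    i, recoveries = 0, 0
    while i < len(steps):
        step = steps[i]             # actuate electrodes for this step
        if sensor_ok(step):         # optical detector at the checkpoint
            i += 1
        else:                       # error: recompute only the remainder
            steps = steps[:i] + resynthesize(steps[i:])
            recoveries += 1
    return recoveries
```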
Abstract:
BACKGROUND: In the face of the HIV/AIDS epidemic that has contributed to the dramatic increase in orphans and abandoned children (OAC) worldwide, caregiver attitudes about HIV and HIV-related stigma are two attributes that may affect caregiving. Little research has considered the relationship between caregiver attributes and caregiver-reported HIV-related stigma. In light of the paucity of this literature, this paper describes HIV-related stigma among caregivers of OAC in five less wealthy nations. METHODS: Baseline data were collected between May 2006 and February 2008. The sample included 1,480 community-based and 192 institution-based caregivers. Characteristics of the community-based and institution-based caregivers are described using means and standard deviations for continuous variables or counts and percentages for categorical variables. We fit logistic regression models, both for the full sample and separately for community-based and institution-based caregivers, to explore predictors of acceptance of HIV. RESULTS: Approximately 80% of both community-based and institution-based caregivers were female, and 84% of institution-based caregivers, compared to 66% of community-based caregivers, said that they would be willing to care for a relative with HIV. Similar proportions were reported when caregivers were asked if they were willing to let their child play with an HIV-infected child. In a multivariable model predicting willingness to care for an HIV-infected relative, adjusted for site fixed effects, being an institution-based caregiver was associated with greater willingness (less stigma) than being a community-based caregiver. Decreased willingness was reported by older respondents, while willingness increased with greater formal education. In the adjusted models predicting willingness to allow one's child to play with an HIV-infected child, female gender and older age were associated with less willingness.
However, willingness was positively associated with years of formal education. CONCLUSIONS: The caregiver-child relationship is central to a child's development. OAC already face stigma as a result of their orphaned or abandoned status; the addition of HIV-related stigma represents a double burden for these children. Further research on the prevalence of HIV-related acceptance and stigma among caregivers and implications of such stigma for child development will be critical as the policy community responds to the global HIV/AIDS orphan crisis.
Abstract:
Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges of scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.
The technically interesting aspect of our work is the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.
The main contributions of the thesis can be placed in one of the following categories.
1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces an iterated rounding technique for offline flow-time optimization, and gives the first framework for analyzing non-clairvoyant algorithms for unrelated machines.
2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP and its variants for the objectives of minimizing the flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, between fairness and non-clairvoyant scheduling, and between the queuing-theoretic notion of stability and resource augmentation analysis.
3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.
4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called {\em bounded stretch}, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first framework based on linear/convex programming duality for bounding the price of anarchy under general equilibrium concepts such as coarse correlated equilibrium.
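For category 4, the stretch of a job (its flow-time divided by its processing time) is the standard fairness measure underlying a bounded-stretch index. The tiny sketch below (our own illustration, not the thesis's construction) shows how an unbounded stretch arises under FIFO when a small job queues behind a large one:

```python
# Stretch of a job = flow-time / processing time (a fairness measure).
# Sketch: per-job stretches under FIFO on one machine, all jobs
# released at time 0, so flow-time equals completion time.

def fifo_stretches(proc_times):
    t, stretches = 0.0, []
    for p in proc_times:
        t += p                   # completion time of this job
        stretches.append(t / p)  # its stretch
    return stretches

# A unit-size job queued behind a size-10 job suffers stretch 11,
# while the large job's stretch is only 1:
print(fifo_stretches([10.0, 1.0]))  # [1.0, 11.0]
```

A policy with bounded stretch guarantees that no job is delayed by more than a fixed factor of its own size, which is the property shown above to yield equilibria with a small price of anarchy.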
Abstract:
BACKGROUND: The obesity epidemic has spread to young adults, leading to significant public health implications later in adulthood. Intervention in early adulthood may be an effective public health strategy for reducing the long-term health impact of the epidemic. Few weight loss trials have been conducted in young adults, and it is unclear which weight loss strategies are beneficial in this population. PURPOSE: To describe the design and rationale of the NHLBI-sponsored Cell Phone Intervention for You (CITY) study, a single-center, randomized three-arm trial that compares the impact on weight loss of 1) a behavioral intervention delivered almost entirely via cell phone technology (Cell Phone group); and 2) a behavioral intervention delivered mainly through monthly personal coaching calls enhanced by self-monitoring via cell phone (Personal Coaching group); each compared with 3) a usual-care, advice-only control condition. METHODS: A total of 365 community-dwelling overweight/obese adults aged 18-35 years were randomized to receive one of these three interventions for 24 months in a parallel-group design. Study personnel assessing outcomes were blinded to group assignment. The primary outcome is weight change at 24 months. We hypothesize that each active intervention will cause more weight loss than the usual care condition. Study completion is anticipated in 2014. CONCLUSIONS: If effective, implementation of the CITY interventions could mitigate the alarming rates of obesity in young adults through promotion of weight loss. ClinicalTrials.gov: NCT01092364.
Abstract:
BACKGROUND/AIMS: The obesity epidemic has spread to young adults, and obesity is a significant risk factor for cardiovascular disease. The prominence and increasing functionality of mobile phones may provide an opportunity to deliver longitudinal and scalable weight management interventions to young adults. The aim of this article is to describe the design and development of the intervention tested in the Cell Phone Intervention for You study and to highlight the adaptive intervention design that made it possible. The Cell Phone Intervention for You study was a National Heart, Lung, and Blood Institute-sponsored, controlled, 24-month randomized clinical trial comparing two active interventions to a usual-care control group. Participants were 365 overweight or obese (body mass index ≥ 25 kg/m²) young adults. METHODS: Both active interventions were designed based on social cognitive theory and incorporated techniques for behavioral self-management and motivational enhancement. Initial intervention development occurred during a 1-year formative phase utilizing focus groups and iterative, participatory design. During intervention testing, an adaptive intervention design, in which an intervention is updated or extended throughout a trial while assuring delivery of exactly the same intervention to each cohort, was employed. This strategy distributed technical work and allowed the introduction of novel components in phases intended to promote and sustain participant engagement. Adaptive intervention design was made possible by exploiting the mobile phone's remote data capabilities, so that adoption of particular application components could be continuously monitored and components subsequently added or updated remotely.
RESULTS: The cell phone intervention was delivered almost entirely via cell phone and was always-present, proactive, and interactive, providing passive and active reminders, frequent opportunities for knowledge dissemination, and multiple tools for self-tracking and receiving tailored feedback. The intervention changed over 2 years to promote and sustain engagement. The personal coaching intervention, by contrast, consisted primarily of personal coaching by trained coaches based on a proven intervention, enhanced with a mobile application, but with all interactions with the technology initiated by the participant. CONCLUSION: The complexity and length of the technology-based randomized clinical trial created challenges in engagement and technology adaptation, which were generally discovered using novel remote monitoring technology and addressed using the adaptive intervention design. Investigators should plan to develop tools and procedures that explicitly support continuous remote monitoring of interventions in long-term, technology-based studies, in addition to developing the interventions themselves.