34 results for Reinforcement Learning, resource-constrained devices, iOS devices, on-device machine learning

at Université de Lausanne, Switzerland


Relevance:

100.00%

Publisher:

Abstract:

This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application that runs on top of P2P networks; typical examples are video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they run. Indeed, defining an application on top of a P2P network often means defining an application where peers contribute resources in exchange for their ability to use the application. For example, in a P2P file-sharing application, while the user is downloading a file, the application is in parallel serving that file to other users. Such peers may have limited hardware resources (e.g., CPU, bandwidth and memory), or the end user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network typically operates in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks: a set of adaptive broadcast solutions and an adaptive data replication solution that can serve as the basis of several P2P applications. Our data replication solution increases availability and reduces communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. They typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment, and each protocol is evaluated through a set of simulations. The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that provides an approximated view of the system or part of it, including the topology and the reliability of components expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays so as to maximize broadcast reliability, expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled as quotas of messages that reflect the receiving and sending capacities of each node. To allow deployment in a large-scale system, we account for the available memory at processes by limiting the view they have to maintain of the system. Using this partial view, we propose three scalable broadcast algorithms based on a propagation overlay that tends towards the global tree overlay while adapting to constraints of the underlying system.
At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes communication cost.
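
The routing idea above, maximizing broadcast reliability over a tree overlay given per-link reliabilities, can be illustrated with a short sketch. This is not the thesis's protocol (which also handles message quotas and partial views); it is a minimal Python example assuming known, independent link delivery probabilities, and it builds the maximum-reliability tree with a Dijkstra-style search on -log(p).

```python
import heapq
import math

def max_reliability_tree(links, root):
    """Choose, for each node, the parent that maximizes the probability
    that a broadcast from `root` reaches it (product of link reliabilities).
    Maximizing a product of probabilities equals minimizing the sum of
    -log(p), so a Dijkstra-style search yields the optimal tree.
    """
    graph = {}
    for (u, v), p in links.items():
        graph.setdefault(u, []).append((v, p))
        graph.setdefault(v, []).append((u, p))

    dist = {root: 0.0}              # -log of the best path reliability
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue                # stale queue entry
        for v, p in graph.get(u, []):
            nd = d - math.log(p)
            if nd < dist.get(v, math.inf):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return parent, {v: math.exp(-d) for v, d in dist.items()}

# Four peers with lossy links: node d is best reached via a-b-c-d.
links = {("a", "b"): 0.9, ("a", "c"): 0.5, ("b", "c"): 0.8, ("c", "d"): 0.95}
tree, reliability = max_reliability_tree(links, root="a")
print(tree)         # {'a': None, 'b': 'a', 'c': 'b', 'd': 'c'}
print(reliability)  # d: 0.9 * 0.8 * 0.95 = 0.684
```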

Relevance:

100.00%

Publisher:

Abstract:

Many species are able to learn to associate behaviours with rewards, as this gives fitness advantages in changing environments. Social interactions between population members may, however, require more cognitive abilities than simple trial-and-error learning, in particular the capacity to make accurate hypotheses about the material payoff consequences of alternative action combinations. It is unclear in this context whether natural selection necessarily favours individuals that use information about payoffs associated with non-tried actions (hypothetical payoffs), as opposed to simple reinforcement of realized payoffs. Here, we develop an evolutionary model in which individuals are genetically determined to use either trial-and-error learning or learning based on hypothetical reinforcements, and ask which learning rule is evolutionarily stable in pairwise symmetric two-action stochastic repeated games played over the individual's lifetime. Using stochastic approximation theory and simulations, we analyse the learning dynamics on the behavioural timescale and derive conditions under which trial-and-error learning outcompetes hypothetical reinforcement learning on the evolutionary timescale. This occurs in particular under repeated cooperative interactions with the same partner. By contrast, we find that hypothetical reinforcement learners tend to be favoured under random interactions, although stable polymorphisms can also arise in which trial-and-error learners are maintained at a low frequency. We conclude that specific game structures can select for trial-and-error learning even in the absence of costs of cognition, which illustrates that cost-free increased cognition can be counterselected under social interactions.
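
A minimal sketch of the two learning rules being contrasted, assuming a Prisoner's Dilemma payoff matrix and a softmax choice rule (both illustrative assumptions; the paper's games and parameters differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(values, temp=1.0):
    """Stochastic choice rule: higher-valued actions are chosen more often."""
    v = np.asarray(values, float) / temp
    e = np.exp(v - v.max())
    return e / e.sum()

def simulate(n_rounds=10_000, rate=0.05, hypothetical=False):
    """Two learners in a symmetric 2-action repeated game.

    A trial-and-error learner reinforces only the payoff of the action it
    actually played; a hypothetical-reinforcement learner also updates the
    non-tried action with the payoff it would have received.
    """
    payoff = np.array([[3.0, 0.0],   # rows: my action (C, D)
                       [5.0, 1.0]])  # cols: partner's action (C, D)
    values = np.zeros((2, 2))        # one row of action values per player
    for _ in range(n_rounds):
        acts = [rng.choice(2, p=softmax(values[i])) for i in range(2)]
        for i in range(2):
            own, other = acts[i], acts[1 - i]
            updated = range(2) if hypothetical else (own,)
            for a in updated:
                values[i, a] += rate * (payoff[a, other] - values[i, a])
    return values

print(simulate(hypothetical=False))  # reinforcement of realized payoffs only
print(simulate(hypothetical=True))   # non-tried actions reinforced as well
```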

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Gastric banding still represents one of the most widely used bariatric procedures. It provides acceptable weight loss in many patients but has frequent long-term complications. Because different types of bands may lead to different results, we designed a randomized study comparing the Lapband® with the SAGB®, and we hereby report the long-term results. METHODS: Between December 1998 and June 2002, 180 morbidly obese patients were randomized to the Lapband® or the SAGB®. Weight loss, long-term morbidity, and the need for reoperation were evaluated. RESULTS: Long-term weight loss did not differ between the two bands. Patients who retained their band had an acceptable long-term weight loss of between 50 and 60% EBMIL. In both groups, about half of the patients developed long-term complications, with about 50% requiring major redo surgery. There was no difference in the overall rates of long-term complications or failures between the two groups, but patients with a Lapband® were significantly more prone to band slippage/pouch dilatation (13.3 versus 0%, p < 0.001). CONCLUSIONS: Although gastric banding leads to acceptable weight loss in the absence of complications, the long-term complication and major reoperation rates are very high, independently of the type of band used or of the operative technique. Gastric banding leads to relatively poor overall long-term results and therefore should not be considered the procedure of choice for the treatment of morbid obesity. Patients should be informed of the limited overall weight loss and the very high complication rates.

Relevance:

100.00%

Publisher:

Abstract:

In order to understand the development of non-genetically encoded actions during an animal's lifespan, it is necessary to analyze the dynamics and evolution of learning rules producing behavior. Owing to the intrinsic stochastic and frequency-dependent nature of learning dynamics, these rules are often studied in evolutionary biology via agent-based computer simulations. In this paper, we show that stochastic approximation theory can help to qualitatively understand learning dynamics and formulate analytical models for the evolution of learning rules. We consider a population of individuals repeatedly interacting during their lifespan, and where the stage game faced by the individuals fluctuates according to an environmental stochastic process. Individuals adjust their behavioral actions according to learning rules belonging to the class of experience-weighted attraction learning mechanisms, which includes standard reinforcement and Bayesian learning as special cases. We use stochastic approximation theory in order to derive differential equations governing action play probabilities, which turn out to have qualitative features of mutator-selection equations. We then perform agent-based simulations to find the conditions where the deterministic approximation is closest to the original stochastic learning process for standard 2-action 2-player fluctuating games, where interaction between learning rules and preference reversal may occur. Finally, we analyze a simplified model for the evolution of learning in a producer-scrounger game, which shows that the exploration rate can interact in a non-intuitive way with other features of co-evolving learning rules. Overall, our analyses illustrate the usefulness of applying stochastic approximation theory in the study of animal learning.
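
The experience-weighted attraction (EWA) class mentioned here has a compact update rule. The sketch below follows Camerer and Ho's standard formulation rather than this paper's specific variant, with illustrative parameter values: delta = 0 recovers cumulative reinforcement learning and delta = 1 recovers belief-based (fictitious-play-like) learning, with the logit choice rule on top.

```python
import numpy as np

def ewa_update(A, N, played, payoffs, phi=0.9, rho=0.9, delta=0.5):
    """One experience-weighted attraction step (Camerer & Ho 1999).

    A: attractions over actions; N: experience weight; played: index of the
    chosen action; payoffs[a]: payoff action a would have earned this round.
    delta = 0 gives cumulative reinforcement learning, delta = 1 gives
    belief-based (fictitious-play-like) learning.
    """
    N_new = rho * N + 1.0
    weight = np.full(len(A), delta)   # forgone actions weighted by delta
    weight[played] = 1.0              # the realized action gets full weight
    A_new = (phi * N * A + weight * np.asarray(payoffs, float)) / N_new
    return A_new, N_new

def logit_probs(A, lam=2.0):
    """Logit (softmax) choice rule over attractions."""
    z = lam * np.asarray(A, float)
    e = np.exp(z - z.max())
    return e / e.sum()

# One illustrative round: action 0 played, counterfactual payoffs (2.0, 3.0).
A, N = ewa_update(np.zeros(2), 1.0, played=0, payoffs=[2.0, 3.0])
print(A, logit_probs(A))
```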

Relevance:

100.00%

Publisher:

Abstract:

To compare the hip fracture risk prediction of several quantitative bone ultrasound devices (QUS), 7062 Swiss women ≥70 years of age were measured with three QUS devices (two of the heel, one of the phalanges). Both heel QUS devices were predictive of hip fracture risk, whereas the phalanges QUS was not. INTRODUCTION: As the number of hip fractures is expected to increase in the coming decades, it is important to develop strategies to detect subjects at risk. Quantitative bone ultrasound (QUS), a transportable method free of ionizing radiation, could be useful for this purpose. MATERIALS AND METHODS: The Swiss Evaluation of the Methods of Measurement of Osteoporotic Fracture Risk (SEMOF) study is a multicenter cohort study that compared three QUS devices for the assessment of hip fracture risk in a sample of 7609 elderly ambulatory women ≥70 years of age. Two devices measured the heel (Achilles+, GE-Lunar; Sahara, Hologic), and one measured the phalanges (DBM Sonic 1200, IGEA). Cox proportional hazards regression was used to estimate the hazard of a first hip fracture, adjusted for age, BMI, and center, and areas under the ROC curves were calculated to compare the devices and their parameters. RESULTS: Of the 7609 women included in the study, 7062 women 75.2 ± 3.1 (SD) years of age were prospectively followed for 2.9 ± 0.8 years. Eighty women reported a hip fracture. A decrease of 1 SD in the QUS variables corresponded to an increase in hip fracture risk from 2.3 (95% CI, 1.7, 3.1) to 2.6 (95% CI, 1.9, 3.4) for the three variables of the Achilles+ and from 2.2 (95% CI, 1.7, 3.0) to 2.4 (95% CI, 1.8, 3.2) for the three variables of the Sahara. Risk gradients did not differ significantly among the variables of the two heel QUS devices. By contrast, the phalanges QUS (DBM Sonic 1200) was not predictive of hip fracture risk, with an adjusted hazard ratio of 1.2 (95% CI, 0.9, 1.5), even after reanalysis of the digitized data using different cut-off levels (1700 or 1570 m/s). CONCLUSIONS: In this population of elderly women, both heel QUS devices were predictive of hip fracture risk, whereas the phalanges QUS device was not.
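
In outline, the hazard-per-SD analysis is a Cox regression; the sketch below uses the lifelines library on entirely synthetic data (the SEMOF data are not reproduced here) and omits the center adjustment.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort: standardized QUS value (qus_z), age and BMI per
# woman, with follow-up time in years and a hip-fracture indicator.
rng = np.random.default_rng(4)
n = 500
qus_z = rng.normal(size=n)
age = rng.normal(75, 3, size=n)
bmi = rng.normal(26, 4, size=n)
# Lower QUS -> higher hazard: log-HR of -0.85 per SD, i.e. HR ~2.3 per SD decrease.
hazard = np.exp(-0.85 * qus_z + 0.05 * (age - 75))
latent = rng.exponential(30.0 / hazard)     # latent event times (years)
df = pd.DataFrame({
    "time": np.minimum(latent, 3.0),        # ~3 years of follow-up
    "fracture": (latent < 3.0).astype(int),
    "qus_z": qus_z, "age": age, "bmi": bmi,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="fracture")
cph.print_summary()  # 1/exp(coef) on qus_z ~ hazard ratio per SD *decrease*
```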

Relevance:

100.00%

Publisher:

Abstract:

When individuals learn by trial and error, they perform randomly chosen actions and then reinforce those actions that led to a high payoff. However, individuals do not always have to physically perform an action in order to evaluate its consequences; they may instead mentally simulate actions and their consequences without actually performing them. Such fictitious learners can select actions with high payoffs without long chains of trial-and-error learning. Here, we analyze the evolution of an n-dimensional cultural trait (or artifact) by learning, in a payoff landscape with a single optimum. We derive the stochastic learning dynamics of the distance to the optimum in trait space when choice between alternative artifacts follows the standard logit choice rule. We show that for both trial-and-error and fictitious learners, the learning dynamics stabilize at an approximate distance of √(n/(2λ_e)) away from the optimum, where λ_e is an effective learning performance parameter that depends on the learning rule under scrutiny. Individual learners are thus unlikely to reach the optimum when traits are complex (n large), and so face a barrier to further improvement of the artifact. We show, however, that this barrier can be significantly reduced in a large population of learners performing payoff-biased social learning, in which case λ_e becomes proportional to population size. Overall, our results illustrate the effects of errors in learning, levels of cognition, and population size on the evolution of complex cultural traits.
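
A rough numerical check of the √(n/(2λ_e)) scaling is possible under two explicit assumptions the abstract does not pin down: a quadratic payoff landscape and a Barker-style logit acceptance rule, for which the stationary density is proportional to exp(λ · payoff).

```python
import numpy as np

rng = np.random.default_rng(1)

def rms_distance(n=10, lam=5.0, steps=40_000, step=0.1):
    """Logit-choice learning on a payoff landscape with a single optimum.

    Each round a mutated artifact is kept with logit probability
    1 / (1 + exp(-lam * payoff_difference)). With the assumed quadratic
    payoff -|x|^2, the stationary density is proportional to
    exp(-lam * |x|^2), so the RMS distance to the optimum is sqrt(n/(2*lam)).
    """
    payoff = lambda x: -x @ x
    x = rng.normal(size=n)
    sq = []
    for t in range(steps):
        prop = x + step * rng.normal(size=n)
        if rng.random() < 1.0 / (1.0 + np.exp(-lam * (payoff(prop) - payoff(x)))):
            x = prop
        if t > steps // 2:        # discard burn-in
            sq.append(x @ x)
    return float(np.sqrt(np.mean(sq)))

print(rms_distance())            # simulated RMS distance, close to...
print(np.sqrt(10 / (2 * 5.0)))   # ...the predicted sqrt(n/(2*lam)) = 1.0
```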

Relevance:

100.00%

Publisher:

Abstract:

We examine the relationship between structural social capital, resource assembly, and firm performance among entrepreneurs in Africa. We posit that social capital composed primarily of kinship or family ties helps the entrepreneur to raise resources, but it does so at a cost. Using data drawn from small firms in Kampala, Uganda, we explore how shared identity within the entrepreneur's social network moderates this relationship. A large network contributed a greater quantity of resources, but at a higher cost when shared identity was high. We discuss the implications of these findings for the role of family ties and social capital in resource assembly, with an emphasis on developing economies.

Relevance:

100.00%

Publisher:

Abstract:

1. Niche theory predicts that the stable coexistence of species within a guild should be associated, if resources are limited, with a mechanism of resource partitioning. Using extensive data on diets, the present study attempts: (i) to test the hypothesis that, in sympatry, the interspecific overlap between the trophic niches of the sibling bat species Myotis myotis and M. blythii, which coexist intimately in their roosts, is effectively lower than the two intraspecific overlaps; and (ii) to assess the role played by interspecific competition in resource partitioning through the study of trophic niche displacement between several sympatric and allopatric populations. 2. Diets were determined by the analysis of faecal samples collected in the field from individual bats captured in various geographical areas. Trophic niche overlaps were calculated monthly for all possible intraspecific and interspecific pairs of individuals from sympatric populations. Niche breadth was estimated from: (i) every faecal sample; and (ii) all the faecal samples collected per month in a given population (geographical area). 3. In every population, the bulk of the diets of M. myotis and M. blythii consisted of, respectively, terrestrial prey (e.g. carabid beetles) and grass-dwelling prey (mostly bush crickets). All intraspecific trophic niche overlaps were significantly greater than the interspecific one, except in Switzerland in May, when both species exploited mass concentrations of cockchafers, a non-limiting food source. This clear-cut partitioning of resources may allow the stable, intimate coexistence observed under sympatric conditions. 4. Relative proportions of ground- and grass-dwelling prey, as well as niche breadths (either individual or population-level), did not differ significantly between sympatry and allopatry, showing that, under allopatric conditions, niche expansion does not take place. This suggests that active interspecific competition is not the underlying mechanism responsible for the niche partitioning currently observed between M. myotis and M. blythii.
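
The abstract does not name the overlap metric used; a standard choice for diet-proportion data is Pianka's symmetric index, sketched here as an illustrative assumption.

```python
import numpy as np

def pianka_overlap(p, q):
    """Pianka's symmetric niche-overlap index between two diet profiles.

    p, q: proportions of each prey category in the diets of the two
    species (e.g. from faecal-sample analysis); returns a value in [0, 1].
    The study's exact metric is not specified in the abstract, so this
    standard index is an illustrative stand-in.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * q) / np.sqrt(np.sum(p**2) * np.sum(q**2)))

# Hypothetical prey-category proportions (beetles, bush crickets, other):
myotis  = [0.7, 0.1, 0.2]
blythii = [0.1, 0.8, 0.1]
print(pianka_overlap(myotis, blythii))  # low overlap -> resource partitioning
```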

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Combination highly active antiretroviral therapy (HAART) has significantly decreased HIV-1-related morbidity and mortality globally, transforming HIV into a controllable condition. HAART has a number of limitations, though, including limited access in resource-constrained countries, which have driven the search for simpler, affordable HIV-1 treatment modalities. Therapeutic HIV-1 vaccines aim to provide immunological support to slow disease progression and decrease transmission. We evaluated the safety, immunogenicity and clinical effect of a novel recombinant plasmid DNA therapeutic HIV-1 vaccine, GTU®-multi-HIVB, containing six different genes derived from an HIV-1 subtype B isolate. METHODS: 63 untreated, healthy, HIV-1-infected adults between 18 and 40 years were enrolled in a single-blinded, placebo-controlled Phase II trial in South Africa. Subjects were HIV-1 subtype C infected, had never received antiretrovirals, and had CD4 ≥ 350 cells/mm³ and pHIV-RNA ≥ 50 copies/mL at screening. Subjects were allocated to vaccine or placebo groups in a 2:1 ratio, administered either intradermally (ID, 0.5 mg/dose) or intramuscularly (IM, 1 mg/dose) at 0, 4 and 12 weeks, and boosted at 76 and 80 weeks with 1 mg/dose (ID) or 2 mg/dose (IM), respectively. Safety was assessed by adverse event monitoring, and immunogenicity by HIV-1-specific CD4+ and CD8+ T-cells using intracellular cytokine staining (ICS), pHIV-RNA and CD4 counts. RESULTS: The vaccine was safe and well tolerated, with no vaccine-related serious adverse events. Significant declines in log pHIV-RNA (p=0.012) and increases in CD4+ T-cell counts (p=0.066) were observed in the vaccine group compared with placebo, more pronounced after IM administration and in some HLA haplotypes (B*5703), and were maintained for 17 months after the final immunisation. CONCLUSIONS: The GTU®-multi-HIVB plasmid recombinant DNA therapeutic HIV-1 vaccine is safe, well tolerated and favourably affects pHIV-RNA and CD4 counts in untreated HIV-1-infected individuals after IM administration in subjects with HLA B*57, B*8101 and B*5801 haplotypes.

Relevance:

100.00%

Publisher:

Abstract:

Learning what to approach, and what to avoid, involves assigning value to environmental cues that predict positive and negative events. Studies in animals indicate that the lateral habenula encodes the previously learned negative motivational value of stimuli. However, involvement of the habenula in dynamic trial-by-trial aversive learning has not been assessed, and the functional role of this structure in humans remains poorly characterized, in part due to its small size. Using high-resolution functional neuroimaging and computational modeling of reinforcement learning, we demonstrate positive habenula responses to the dynamically changing values of cues signaling painful electric shocks, which predict behavioral suppression of responses to those cues across individuals. By contrast, negative habenula responses to monetary reward cue values predict behavioral invigoration. Our findings show that the habenula plays a key role in an online aversive learning system and in generating associated motivated behavior in humans.
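
The abstract does not detail the reinforcement learning model used; a common choice for trial-by-trial cue values is a Rescorla-Wagner (delta-rule) update, sketched below as an assumption rather than the study's exact model.

```python
import numpy as np

def rescorla_wagner(outcomes, alpha=0.2):
    """Trial-by-trial cue-value updating (Rescorla-Wagner delta rule).

    outcomes: 1.0 if the cue was followed by shock on that trial, else 0.0.
    Returns the learned aversive value of the cue before each trial, the
    kind of dynamically changing value the habenula response tracked.
    """
    v, values = 0.0, []
    for o in outcomes:
        values.append(v)
        v += alpha * (o - v)   # learning rate * prediction error
    return np.array(values)

# Cue paired with shock 75% of the time: value climbs toward 0.75.
trials = np.random.default_rng(2).random(40) < 0.75
print(rescorla_wagner(trials.astype(float)).round(2))
```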

Relevance:

100.00%

Publisher:

Abstract:

Spatial data analysis, mapping and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations; methods of geostatistics, i.e. the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996); etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, often of unknown nature. That is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function to be interpolated. Kriging estimators treat the measurements as the realization of some spatial random process. To obtain an estimate with kriging, one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if the number of measurements is insufficient, and the variogram is sensitive to outliers and extremes. ANNs are a powerful tool, but they also suffer from a number of shortcomings. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear, robust to noise in the measurements, able to deal with small empirical datasets, and equipped with a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression. SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks, and the results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR to the spatial mapping of physical quantities were obtained by the authors for mapping of medium porosity (Kanevski et al., 1999) and for mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for nonlinear modeling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping on a real case study: soil pollution by the Cs137 radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
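
As a concrete, simplified illustration of the approach (not the paper's data or tuned parameters), an epsilon-SVR with an RBF kernel can be fitted to scattered measurements and evaluated on a regular grid; scikit-learn's SVR is used here for convenience.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical measurements: (x, y) coordinates with noisy observed values,
# standing in for scattered soil-contamination samples.
rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, size=(200, 2))
values = np.sin(coords[:, 0]) * np.cos(coords[:, 1]) + rng.normal(0, 0.1, 200)

# epsilon sets the noise-tolerant tube around the regression function;
# C trades off flatness against fitting the measurements.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(coords, values)

# Predict on a regular grid to produce the map.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
prediction = model.predict(grid).reshape(gx.shape)  # 50 x 50 predicted surface
```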

Relevance:

100.00%

Publisher:

Abstract:

Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, to highlight links between documents produced by the same modus operandi or the same source, and thus to support forensic intelligence efforts. Inspired by previous research on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that the use of Hue and Edge filters, or their combination, to extract profiles from images, followed by the comparison of profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can easily be operated from remote locations and shared amongst different organisations, which makes it very convenient for future operational applications. The method could serve as a first, fast triage step that helps target more resource-intensive profiling methods (based, for instance, on a visual, physical or chemical examination of documents). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (Part II).
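
A minimal sketch of the profile-and-distance idea, with hypothetical file names and heavily simplified preprocessing (the article's acquisition and filtering pipeline is far more controlled):

```python
import numpy as np
from PIL import Image
from scipy.spatial.distance import canberra

def hue_profile(path, bins=64, size=(256, 256)):
    """Extract a hue-histogram 'profile' from a region-of-interest image.

    A stand-in for the article's Hue-filter profiles: the exact cropping,
    resolution and filter parameters are not reproduced here.
    """
    img = Image.open(path).convert("HSV").resize(size)
    hue = np.asarray(img)[:, :, 0].ravel()
    hist, _ = np.histogram(hue, bins=bins, range=(0, 255), density=True)
    return hist

# Lower Canberra distance -> more similar profiles (candidate link between
# documents). "doc_a.png" and "doc_b.png" are hypothetical scan files.
d = canberra(hue_profile("doc_a.png"), hue_profile("doc_b.png"))
print(d)
```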

Relevance:

60.00%

Publisher:

Abstract:

The majority of transcatheter aortic valve implantations, structural heart procedures and the newly developed transcatheter mitral valve repair and replacement procedures are traditionally performed through either a transfemoral or a transapical access site, depending on the presence of severe peripheral vascular disease or anatomic limitations. The transapical approach, which carries specific advantages related to its antegrade nature and the short distance between the introduction site and the cardiac target, is traditionally performed through a left anterolateral mini-thoracotomy and requires rib retractors, soft-tissue retractors and reinforced apical sutures to secure, at first, the left ventricular apex for the introduction of the stent-valve delivery systems and then to seal the access site at the end of the procedure. However, despite the advent of low-profile apical sheaths and newly designed delivery systems, the apical approach remains a challenge for the surgeon, as it carries the risk of apical tear, life-threatening apical bleeding, myocardial damage, coronary damage and infections. Last but not least, the use of large-calibre stent-valve delivery systems and devices through standard mini-thoracotomies compromises any attempt to perform transapical transcatheter structural heart procedures entirely percutaneously, as happens with the transfemoral access site, or via a thoracoscopic or a miniaturised video-assisted percutaneous technique. During the past few years, prototypes of apical access and closure devices for transapical heart valve procedures have been developed and tested to make this standardised, successful procedure easier. Some of them represent an important step towards the development of truly percutaneous transcatheter transapical heart valve procedures in the clinical setting.

Relevance:

50.00%

Publisher:

Abstract:

Hypoglycemia, if recurrent, may have severe consequences for the cognitive and psychomotor development of neonates. Screening for hypoglycemia is therefore a daily routine in every facility caring for newborn infants. Point-of-care testing (POCT) devices are attractive for neonatal use: handling is easy, measurements can be performed at the bedside, the required blood volume is small, and results are readily available. However, such whole-blood measurements are challenged by the wide variation of hematocrit in neonates and by a spectrum of normal glucose concentrations at the lower end of the test range. We conducted a prospective trial to assess the precision and accuracy of the most suitable POCT device for neonatal use from each of three leading companies in Europe. Of the three devices tested (Precision Xceed, Abbott; Elite XL, Bayer; Aviva Nano, Roche), the Aviva Nano exhibited the best precision. None completely fulfilled the ISO 15197 accuracy criteria (2003 or 2011 versions): the Aviva Nano met them in 92% of cases, while the others were below 87%. The Precision Xceed reached the 95% limit of the 2003 ISO criteria for values ≤4.2 mmol/L, but not for the higher range (71%). Although validated for adults, new POCT devices need to be specifically evaluated on newborn infants before being adopted for routine use in neonatology.
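
The 2003 accuracy check can be expressed in a few lines; the thresholds below are the commonly cited ISO 15197:2003 limits, and the paired readings are invented for illustration.

```python
def iso15197_2003_pass_rate(reference, device):
    """Fraction of device readings meeting the ISO 15197:2003 accuracy
    limits: within ±0.83 mmol/L of the reference when the reference is
    below 4.2 mmol/L, within ±20% otherwise; the standard requires ≥95%
    of readings to comply. (Thresholds as commonly cited; a sketch, not
    the study's exact evaluation protocol.)
    """
    ok = 0
    for ref, dev in zip(reference, device):
        limit = 0.83 if ref < 4.2 else 0.20 * ref
        ok += abs(dev - ref) <= limit
    return ok / len(reference)

# Example: pass rate for a handful of paired measurements (mmol/L).
print(iso15197_2003_pass_rate([3.1, 4.5, 6.0], [3.5, 4.0, 6.8]))  # 1.0
```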

Relevance:

50.00%

Publisher:

Abstract:

While mobile technologies can provide great personalized services for mobile users, they also threaten users' privacy. This personalization-privacy paradox is particularly salient for mobile applications based on context-aware technology, where a user's behaviour, movements and habits can be associated with their personal identity. In this thesis, I studied privacy issues in the mobile context, focusing in particular on the design of an adaptive privacy management system for context-aware mobile devices, and explored the role of personalization and of control over the user's personal data. This allowed me to make multiple contributions, both theoretical and practical. On the theoretical side, I propose and prototype an adaptive single sign-on solution that uses the user's context information to protect private information on smartphones. To validate this solution, I first showed that a user's context can serve as a unique user identifier and that context-awareness technology can increase the user's perceived ease of use of the system and the service provider's authentication security. I then followed a design-science research paradigm and implemented this solution in a mobile application called "Privacy Manager". I evaluated its utility through several focus-group interviews; overall, the proposed solution fulfilled the expected functions, and users expressed an intention to use the application. To better understand the personalization-privacy paradox, I built on the theoretical foundations of privacy calculus and the technology acceptance model to conceptualize a theory of users' mobile privacy management. I also examined the role of personalization and of control ability in my model, and how these two elements interact with the privacy calculus and mobile technology models. In the practical realm, this thesis contributes to the understanding of the trade-off between the benefits of personalized services and the privacy concerns they may cause. By pointing out new opportunities to rethink how the user's context information can protect private data, it also suggests new elements for privacy-related business models.