979 results for Machines à vecteurs de support
Abstract:
This paper describes an optimized model to support QoS by means of congestion minimization on LSPs (Label Switched Paths). To build this model, we start from a CFA (Capacity and Flow Allocation) model. As this model does not consider the buffer size when calculating the capacity cost, our model, named BCA (Buffer Capacity Allocation), takes this issue into account and improves on the CFA performance. To test our proposal, we performed several simulations; the results show that the BCA model minimizes LSP congestion and distributes flows uniformly over the network.
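As a rough, generic illustration of this kind of capacity-and-flow allocation problem (not the CFA or BCA model itself), the sketch below distributes a fixed capacity budget over a few LSPs so as to minimize a Kleinrock-style congestion cost; the flows, budget and cost function are invented for the example.

```python
# Hypothetical sketch of a capacity-and-flow allocation problem; NOT the CFA/BCA model.
import numpy as np
from scipy.optimize import minimize

flows = np.array([2.0, 5.0, 3.0])        # offered load per LSP (e.g. Mb/s), illustrative
budget = 20.0                            # total capacity to distribute across links

def congestion(capacity):
    # Kleinrock-style congestion cost: grows sharply as utilization approaches 1
    return np.sum(flows / (capacity - flows))

# Feasible starting point: each LSP gets its flow plus an equal share of the slack
x0 = flows + (budget - flows.sum()) / len(flows)
res = minimize(
    congestion, x0, method="SLSQP",
    bounds=[(f + 1e-3, None) for f in flows],
    constraints=[{"type": "eq", "fun": lambda c: np.sum(c) - budget}],
)
print(np.round(res.x, 2))                # more capacity goes to the heavier flows
```

In this toy formulation the buffer dimension that distinguishes BCA from CFA is not modelled; the sketch only shows the shape of the underlying optimization.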
Abstract:
This work proposes an original contribution to the understanding of fishermen's spatial behavior, based on the behavioral ecology and movement ecology paradigms. Through the analysis of Vessel Monitoring System (VMS) data, we characterized the spatial behavior of Peruvian anchovy fishermen at different scales: (1) the behavioral modes within fishing trips (i.e., searching, fishing and cruising); (2) the behavioral patterns among fishing trips; (3) the behavioral patterns by fishing season conditioned by ecosystem scenarios; and (4) the computation of maps of an anchovy presence proxy from the spatial patterns of behavioral mode positions. At the first scale considered, we compared several Markovian models (hidden Markov and semi-Markov models) and discriminative models (random forests, support vector machines and artificial neural networks) for inferring the behavioral modes associated with VMS tracks. The models were trained under a supervised setting and validated using tracks for which behavioral modes were known (from on-board observers' records). Hidden semi-Markov models performed better, and were retained for inferring the behavioral modes on the entire VMS dataset. At the second scale considered, each fishing trip was characterized by several features, including the time spent within each behavioral mode. Using a clustering analysis, fishing trip patterns were classified into groups associated with management zones, fleet segments and skippers' personalities. At the third scale considered, we analyzed how ecological conditions shaped fishermen's behavior. By means of co-inertia analyses, we found significant associations between fishermen, anchovy and environmental spatial dynamics, and fishermen's behavioral responses were characterized according to contrasted environmental scenarios. At the fourth scale considered, we investigated whether the spatial behavior of fishermen reflected to some extent the spatial distribution of anchovy. Finally, this work provides a wider view of fishermen's behavior: fishermen are not only economic agents, but they are also foragers, constrained by ecosystem variability. To conclude, we discuss how these findings may be of importance for fisheries management, collective behavior analyses and end-to-end models.
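As a rough sketch of the discriminative side of this comparison (random forests and support vector machines; the hidden semi-Markov models retained by the authors have no equally standard off-the-shelf implementation), the snippet below classifies synthetic VMS positions into the three behavioral modes from two hypothetical movement features, speed and turning angle.

```python
# Minimal sketch, not the authors' code: discriminative classification of
# VMS positions into behavioral modes. Features, labels and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
speed = rng.uniform(0, 12, n)            # hypothetical speed feature (knots)
turn = rng.uniform(0, np.pi, n)          # hypothetical absolute turning angle (rad)
X = np.column_stack([speed, turn])
# Hypothetical observer labels: 0 = cruising, 1 = searching, 2 = fishing
y = np.where(speed > 8, 0, np.where(turn > 1.5, 2, 1))

for name, clf in [("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("SVM (RBF)", SVC(kernel="rbf", gamma="scale"))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```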
Abstract:
In this paper, we propose two active learning algorithms for semiautomatic definition of training samples in remote sensing image classification. Based on predefined heuristics, the classifier ranks the unlabeled pixels and automatically chooses those that are considered the most valuable for its improvement. Once the pixels have been selected, the analyst labels them manually and the process is iterated. Starting with a small and nonoptimal training set, the model itself builds the optimal set of samples which minimizes the classification error. We have applied the proposed algorithms to a variety of remote sensing data, including very high resolution and hyperspectral images, using support vector machines. Experimental results confirm the consistency of the methods. The required number of training samples can be reduced to 10% using the methods proposed, reaching the same level of accuracy as larger data sets. A comparison with a state-of-the-art active learning method, margin sampling, is provided, highlighting advantages of the methods proposed. The effect of spatial resolution and separability of the classes on the quality of the selection of pixels is also discussed.
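The margin-sampling baseline mentioned in the abstract can be sketched in a few lines with scikit-learn; the data, query batch size and stopping rule below are placeholders, and the analyst's manual labelling is simulated with the known ground truth.

```python
# Sketch of an SVM active-learning loop using margin sampling (the baseline
# discussed in the abstract); the paper's own heuristics are not reproduced here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=10, n_informative=5, random_state=0)
labeled = list(range(20))                      # small, non-optimal initial training set
unlabeled = [i for i in range(len(y)) if i not in labeled]

clf = SVC(kernel="rbf", gamma="scale")
for it in range(10):
    clf.fit(X[labeled], y[labeled])
    # Margin sampling: rank unlabeled samples by distance to the decision boundary
    margins = np.abs(clf.decision_function(X[unlabeled]))
    query = [unlabeled[i] for i in np.argsort(margins)[:10]]
    # In practice the analyst labels the queried pixels; here the true labels are reused
    labeled.extend(query)
    unlabeled = [i for i in unlabeled if i not in query]
    print(f"iteration {it}: training set size = {len(labeled)}")
```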
Abstract:
The Internet and new communication technologies are deeply affecting healthcare systems and the provision of care. The purpose of this article is to evaluate the possibility that cyberhealth, via the development of widespread easy access to wireless personal computers, tablets and smartphones, can effectively influence intake of medication and long-term medication adherence, which is a complex, difficult and dynamic behaviour to adopt and to sustain over time. Because of its novelty, the impact of cyberhealth on drug intake has not yet been well explored. Initial results have provided some evidence, but more research is needed to determine the impact of cyberhealth resources on long-term adherence and health outcomes, its user-friendliness and its adequacy in meeting e-patient needs. The purpose of such Internet-based interventions, which provide different levels of customisation, is not to take over the roles of healthcare providers; on the contrary, cyberhealth platforms should reinforce the alliance between healthcare providers and patients by filling time-gaps between visits and allowing patients to upload and/or share feedback material to be used during the visits. This shift, however, is not easily endorsed by healthcare providers, who must master new eHealth skills, but healthcare systems have a unique opportunity to invest in the Internet and to use this powerful tool to design the future of integrated care. Before this can occur, however, important issues must be addressed and resolved, for example ethical considerations, the scientific quality of programmes, reimbursement of activity, data security and the ownership of uploaded data.
Abstract:
The development of statistical models for forensic fingerprint identification purposes has been the subject of increasing research attention in recent years. This can be partly seen as a response to a number of commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. In addition, key forensic identification bodies such as ENFSI [1] and IAI [2] have recently endorsed and acknowledged the potential benefits of using statistical models as an important tool in support of the fingerprint identification process within the ACE-V framework. In this paper, we introduce a new Likelihood Ratio (LR) model based on Support Vector Machines (SVMs) trained with features discovered via morphometric and spatial analyses of corresponding minutiae configurations for both match and close non-match populations often found in AFIS candidate lists. Computed LR values are derived from a probabilistic framework based on SVMs that discover the intrinsic spatial differences of match and close non-match populations. Lastly, we present experiments performed on a set of over 120,000 publicly available fingerprint images (mostly sourced from the National Institute of Standards and Technology (NIST) datasets) and a distortion set of approximately 40,000 images, illustrating that the proposed LR model reliably points towards the correct proposition in the identification assessment of match and close non-match populations. Results further indicate that the proposed model is a promising tool for fingerprint practitioners to use for analysing the spatial consistency of corresponding minutiae configurations.
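A heavily simplified sketch of how SVM scores can be turned into likelihood ratios is given below; it is not the authors' model, and the features, data and equal-prior assumption are illustrative only.

```python
# Hypothetical sketch: calibrated SVM probabilities converted to likelihood
# ratios for match vs. close non-match comparisons. NOT the authors' model.
import numpy as np
from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV

rng = np.random.default_rng(1)
# Invented morphometric/spatial features for match (1) and close non-match (0) pairs
X_match = rng.normal(1.0, 1.0, size=(500, 6))
X_cnm = rng.normal(-1.0, 1.0, size=(500, 6))
X = np.vstack([X_match, X_cnm])
y = np.concatenate([np.ones(500), np.zeros(500)])

# Calibrated SVM gives P(match | features); with equal priors, LR = p / (1 - p)
model = CalibratedClassifierCV(SVC(kernel="rbf", gamma="scale"), method="sigmoid", cv=5)
model.fit(X, y)
p = model.predict_proba(X[:5])[:, 1]
lr = p / (1.0 - p)
print(np.round(lr, 2))
```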
Abstract:
This PhD thesis addresses the issue of alleviating the burden of developing ad hoc applications. Such applications have the particularity of running on mobile devices, communicating in a peer-to-peer manner and implementing proximity-based semantics. A typical example of such an application is a radar application where users see their avatar, as well as the avatars of their friends, on a map on their mobile phone. Such applications have become increasingly popular with the advent of the latest generation of smart phones, with their impressive computational power, peer-to-peer communication capabilities and location detection technology. Unfortunately, the existing programming support for such applications is limited, hence the need to address this issue in order to alleviate their development burden. This thesis specifically tackles this problem by providing several tools for application development support. First, it provides the location-based publish/subscribe service (LPSS), a communication abstraction which elegantly captures recurrent communication issues and thus dramatically reduces code complexity. LPSS is implemented in a modular manner in order to target two different network architectures. One pragmatic implementation is aimed at mainstream infrastructure-based mobile networks, where mobile devices communicate through fixed antennas. The other, fully decentralized, implementation targets emerging mobile ad hoc networks (MANETs), where no fixed infrastructure is available and communication can only occur in a peer-to-peer fashion. For each of these architectures, various implementation strategies, tailored for different application scenarios, can be parametrized at deployment time. Second, this thesis provides two location-based message diffusion protocols, namely 6Shot broadcast and 6Shot multicast, specifically aimed at MANETs and fine-tuned to be used as building blocks for LPSS. Finally, this thesis proposes Phomo, a phone motion testing tool that allows the proximity semantics of ad hoc applications to be tested without having to move around with mobile devices. These different development support tools have been packaged in a coherent middleware framework called Pervaho.
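The proximity-scoped delivery semantics captured by LPSS can be illustrated with a toy publish/subscribe abstraction; the sketch below is not the Pervaho/LPSS API, just a hypothetical single-process stand-in where a publication is delivered only to subscribers within their declared range.

```python
# Toy illustration of proximity-scoped publish/subscribe; NOT the LPSS/Pervaho API.
import math
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Position = Tuple[float, float]

@dataclass
class Subscription:
    topic: str
    position: Position
    radius: float                      # proximity range, same units as positions
    callback: Callable[[str, object], None]

@dataclass
class LocationPubSub:
    subscriptions: List[Subscription] = field(default_factory=list)

    def subscribe(self, topic, position, radius, callback):
        self.subscriptions.append(Subscription(topic, position, radius, callback))

    def publish(self, topic, position, payload):
        # Deliver only to subscribers of the topic that are within range
        for sub in self.subscriptions:
            if sub.topic == topic and math.dist(sub.position, position) <= sub.radius:
                sub.callback(topic, payload)

bus = LocationPubSub()
bus.subscribe("radar", (0.0, 0.0), 5.0, lambda t, msg: print("received:", msg))
bus.publish("radar", (3.0, 4.0), "friend avatar at (3, 4)")   # distance 5.0 -> delivered
bus.publish("radar", (6.0, 8.0), "too far away")              # distance 10.0 -> dropped
```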
Batch effect confounding leads to strong bias in performance estimates obtained by cross-validation.
Abstract:
BACKGROUND: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. FOCUS: The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. DATA: We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. METHODS: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, are performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
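The bias studied here can be reproduced with a few lines of simulation; the sketch below is a simplified version (plain rather than nested cross-validation, a single fully confounded batch effect, and no true group signal) in which cross-validation reports near-perfect accuracy while independent data reveal chance-level performance.

```python
# Simplified sketch of batch-effect confounding inflating cross-validated
# accuracy estimates; simulation parameters are illustrative, not the paper's.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 200, 50

def simulate(batch_shift):
    X = rng.normal(size=(n, p))
    y = np.repeat([0, 1], n // 2)
    # Full confounding: all of group 2 comes from the shifted batch;
    # no feature is truly differentially expressed between groups.
    X[y == 1] += batch_shift
    return X, y

X_conf, y_conf = simulate(batch_shift=1.0)   # confounded training data
X_ind, y_ind = simulate(batch_shift=0.0)     # independent data without batch effect

clf = SVC(kernel="linear")
cv_acc = cross_val_score(clf, X_conf, y_conf, cv=5).mean()
clf.fit(X_conf, y_conf)
ind_acc = clf.score(X_ind, y_ind)
print(f"cross-validated accuracy: {cv_acc:.2f}")   # inflated by the batch effect
print(f"independent-data accuracy: {ind_acc:.2f}") # close to chance here
```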
Abstract:
This paper presents a case study that explores the advantages that can be derived from the use of a design support system during the design of wastewater treatment plants (WWTP). With this objective in mind, a simplified but plausible WWTP design case study has been generated with KBDS, a computer-based support system that maintains a historical record of the design process. The study shows how, by employing such a historical record, it is possible to: (1) rank different design proposals responding to a design problem; (2) study the influence of changing the weight of the arguments used in the selection of the most adequate proposal; (3) take advantage of keywords to assist the designer in the search of specific items within the historical records; (4) evaluate automatically the compliance of alternative design proposals with respect to the design objectives; (5) verify the validity of previous decisions after the modification of the current constraints or specifications; (6) re-use the design records when upgrading an existing WWTP or when designing similar facilities; (7) generate documentation of the decision making process; and (8) associate a variety of documents as annotations to any component in the design history. The paper also shows one possible future role of design support systems as they outgrow their current reactive role as repositories of historical information and start to proactively support the generation of new knowledge during the design process.
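Points (1) and (2) above (ranking design proposals by weighted arguments and re-ranking them when the weights change) can be illustrated with a toy score table; the proposals, arguments and weights below are invented and bear no relation to KBDS itself.

```python
# Toy illustration of weighted-argument ranking of design proposals; all values invented.
proposals = {
    "activated sludge":    {"effluent quality": 8, "capital cost": 4, "operability": 7},
    "trickling filter":    {"effluent quality": 6, "capital cost": 7, "operability": 7},
    "membrane bioreactor": {"effluent quality": 9, "capital cost": 2, "operability": 5},
}

def rank(weights):
    # Weighted sum of argument scores, highest first
    scores = {name: sum(weights[a] * v for a, v in args.items())
              for name, args in proposals.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank({"effluent quality": 3, "capital cost": 1, "operability": 1}))
print(rank({"effluent quality": 1, "capital cost": 3, "operability": 1}))  # cost-driven weights change the winner
```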
Abstract:
Uncertainty quantification of petroleum reservoir models is one of the present challenges, which is usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. The paper considers a data-driven approach to modelling uncertainty in spatial predictions. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic features and to describe the stochastic variability and non-uniqueness of spatial properties. It is able to capture and preserve key spatial dependencies such as connectivity, which is often difficult to achieve with two-point geostatistical models. Semi-supervised SVR is designed to integrate various kinds of conditioning data and to learn dependencies from them. A stochastic semi-supervised SVR model is integrated into a Bayesian framework to quantify uncertainty with multiple models fitted to dynamic observations. The developed approach is illustrated with a reservoir case study. The resulting probabilistic production forecasts are described by uncertainty envelopes.
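A heavily simplified sketch of the supervised core of this approach is given below: SVR interpolating a reservoir property from well locations, with an ensemble of differently parametrized models providing a crude uncertainty envelope. The semi-supervised extension and the Bayesian weighting against dynamic observations are not reproduced, and the data are synthetic.

```python
# Simplified sketch: SVR as a spatial predictor plus a crude ensemble envelope.
# NOT the paper's semi-supervised or Bayesian machinery; data are synthetic.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
wells = rng.uniform(0, 10, size=(30, 2))                 # (x, y) well locations
porosity = 0.2 + 0.05 * np.sin(wells[:, 0]) + rng.normal(0, 0.01, 30)

grid = np.array([[x, y] for x in np.linspace(0, 10, 20) for y in np.linspace(0, 10, 20)])

# Ensemble of SVR models (varying hyper-parameters) as a stand-in for multiple realizations
predictions = []
for C in (1.0, 10.0, 100.0):
    for eps in (0.005, 0.01):
        model = SVR(kernel="rbf", C=C, epsilon=eps, gamma="scale")
        model.fit(wells, porosity)
        predictions.append(model.predict(grid))

predictions = np.array(predictions)
lower, upper = predictions.min(axis=0), predictions.max(axis=0)   # crude uncertainty envelope
print(f"mean envelope width over the grid: {np.mean(upper - lower):.4f}")
```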
Abstract:
Lectio praecursoria
Abstract:
Background: Since the rate of histologically 'negative' appendices still ranges between 15 and 20%, appendicitis in 'borderline' cases remains a challenging disease. As previously described, cell adhesion molecule expression correlates with different stages of appendicitis. Therefore, it was of interest to determine whether the 'negative' appendix correlated with the absence of E-selectin or vascular cell adhesion molecule-1 (VCAM-1). Methods: Nineteen grossly normal appendices from a series of 120 appendectomy specimens from patients with suspected appendicitis were analysed in frozen sections for the expression of E-selectin and VCAM-1. As control, 5 normal appendices were stained. Results: This study showed a coexpression of E-selectin and VCAM-1 in endothelial cells in early and recurrent appendicitis. In patients with symptoms for less than 6 h, only E-selectin was detected. Cases with fibrosis and luminal obliteration were only positive for VCAM-1. In cases of early appendicitis with symptoms of less than 6 h duration, a discordance between histological and immunohistochemical results was found. Conclusions: This report indicates that E-selectin and VCAM-1 expression could be useful parameters in the diagnosis of appendicitis in borderline cases.
Abstract:
Introduction: Individuals with poor social determinants of health are more likely to receive improper healthcare. Frequent Users (FUs) of Emergency Departments (ED) (defined as >4 visits in the previous 12 months) represent a subgroup of vulnerable patients presenting with specific medical and social needs. They usually account for high healthcare costs by overusing the healthcare system. In 2008-2009, FUs accounted for 4% of our ED patients but 17% of all our ED visits. Methods: We conducted a prospective cohort of patients admitted to our ED with vulnerabilities in ≥3 specific domains (somatic or mental diseases, risk behaviors, social determinants of health, and healthcare use). Patients were either directly identified by a multidisciplinary team (two nurses, one social worker, one physician) or referred to that team by the ED staff during opening hours from July 1st 2010 to April 30th 2011. Results: 127 patients were included (67% males), aged 43 years (SD 15); 65% were migrants. They had a median of 6 ED visits (interquartile range (IQR) 8-1) in the previous 12 months, representing a total of 697 visits. The most frequently affected domains during the index visit were: 71% somatic, 61% psychiatric, 75% risk behaviors, 97% social and 84% healthcare use issues. Each case required a median of 234 minutes (IQR 300-90) dedicated to assess their outpatient network (99% of the patients), to set up an ambulatory medical follow-up (43%) or a meeting with social services (40%). Conclusions: Vulnerability affected ED patients in more than one domain. Vulnerable patients have complex needs that were difficult to address in the time-pressured ED setting. Although ED consultation offers immediate access to medical care, EDs are dedicated more for acute short-term somatic care. Caring for a growing number of vulnerable patients requires a different type of management. Limited evidence shows that multidisciplinary case-management interventions have demonstrated positive outcomes in terms of reducing ED use and costs, and improvement of patients' medical and social outcomes. A randomized trial of case-management is underway to confirm the results of observational studies.