982 results for Interior point algorithm
Abstract:
POCT (point-of-care tests) have great potential in ambulatory infectious disease medicine owing to their rapid turnaround, their impact on antibiotic prescribing, and their contribution to the diagnosis and prevention of communicable diseases. Some tests have been in use for many years (detection of Streptococcus pyogenes in pharyngitis, anti-HIV antibodies), others more recently (S. pneumoniae urinary antigen, Plasmodium falciparum antigen). The main new indications concern community-acquired lower respiratory tract infections, infectious diarrhoea in children (rotavirus, enterohaemorrhagic E. coli) and, it is hoped, sexually transmitted infections. Easy to use and based on antigen-antibody reactions, these tests allow a diagnosis in less than one hour. A new generation of POCT relying on nucleic acid detection has just been introduced into practice (detection of group B streptococcus in pregnant women before delivery and of methicillin-resistant Staphylococcus aureus carriage) and will be extended to many more pathogens.
Abstract:
A stochastic nonlinear partial differential equation is constructed for two different models exhibiting self-organized criticality: the Bak-Tang-Wiesenfeld (BTW) sandpile model [Phys. Rev. Lett. 59, 381 (1987); Phys. Rev. A 38, 364 (1988)] and the Zhang model [Phys. Rev. Lett. 63, 470 (1989)]. The dynamic renormalization group (DRG) enables one to compute the critical exponents. However, the nontrivial stable fixed point of the DRG transformation is unreachable for the original parameters of the models. We introduce an alternative regularization of the step function involved in the threshold condition, which breaks the symmetry of the BTW model. Although the symmetry properties of the two models are different, it is shown that they both belong to the same universality class. In this case the DRG procedure leads to a symmetric behavior for both models, restoring the broken symmetry, and makes accessible the nontrivial fixed point. This technique could also be applied to other problems with threshold dynamics.
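The "alternative regularization of the step function" mentioned above can be illustrated with a smooth, differentiable stand-in for the sharp threshold. The sketch below uses a generic logistic regularization under assumed parameter names (eps, threshold); it is not claimed to be the specific regularization introduced in the paper.

```python
import numpy as np

def step_regularized(z, eps=0.1):
    """Smooth stand-in for the Heaviside step function theta(z).

    As eps -> 0 this approaches a sharp threshold; a finite eps keeps the
    function differentiable, which a renormalization-group treatment of the
    toppling rule requires.
    """
    return 1.0 / (1.0 + np.exp(-z / eps))

def topple_rate(energy, threshold=4.0, eps=0.1):
    """Rate at which a site relaxes once its local energy exceeds the threshold
    (threshold=4.0 is only an illustrative BTW-like value)."""
    return step_regularized(energy - threshold, eps)

# Example: sites just below, at, and just above the threshold
print(topple_rate(np.array([3.5, 4.0, 4.5])))
```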
Abstract:
Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that remain identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as depth, length or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). Curvature is, however, not straightforward to interpret, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface area. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm that quantifies local gyrification with high spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, named the local Gyrification Index (lGI(1)), we measure the amount of cortex buried within the sulcal folds compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development. In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then used to create an outer surface that serves as the basis for the lGI calculation. A circular region of interest is delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm as described in our validation study(1). This process is iterated with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), in which the folding index at each point is computed as the ratio of the cortical area contained in a sphere to the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
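As a rough illustration of the ratio underlying the lGI (not the FreeSurfer implementation itself), the sketch below computes, for one region of interest, the ratio of the cortical (pial) patch area to the area of the matching patch on the outer surface; the function names and triangulated inputs are assumptions made for this example.

```python
import numpy as np

def patch_area(vertices, faces):
    """Total area of a triangulated surface patch (vertices: Nx3, faces: Mx3 indices)."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()

def local_gyrification_index(pial_vertices, pial_faces, hull_vertices, hull_faces):
    """lGI-like ratio: area of the cortical surface patch (visible plus buried in
    sulci) divided by the area of the matching region of interest on the outer
    surface. A value near 1 means an almost flat cortex; higher values mean
    deeper folding."""
    return patch_area(pial_vertices, pial_faces) / patch_area(hull_vertices, hull_faces)
```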
Abstract:
Data sheet produced by the Iowa Department of Natural Resources describing the different types of animals, insects, snakes, birds, fish, butterflies, etc. that can be found in Iowa.
Abstract:
The front form and the point form of dynamics are studied in the framework of predictive relativistic mechanics. The non-interaction theorem is proved when a Poincaré-invariant Hamiltonian formulation with canonical position coordinates is required.
Abstract:
The liquid-liquid critical point scenario of water hypothesizes the existence of two metastable liquid phases, low-density liquid (LDL) and high-density liquid (HDL), deep within the supercooled region. The hypothesis originates from computer simulations of the ST2 water model, but the stability of the LDL phase with respect to the crystal is still being debated. We simulate supercooled ST2 water at constant pressure, constant temperature, and constant number of molecules N for N ≤ 729 and times up to 1 μs. We observe clear differences between the two liquids, both structural and dynamical. Using several methods, including finite-size scaling, we confirm the presence of a liquid-liquid phase transition ending in a critical point. We find that the LDL is stable with respect to the crystal in 98% of our runs (we perform 372 runs for LDL or LDL-like states), and in 100% of our runs for the two largest system sizes (N = 512 and 729, for which we perform 136 runs for LDL or LDL-like states). In all these runs, tiny crystallites grow and then melt within 1 μs. Only for N ≤ 343 do we observe six events (over 236 runs for LDL or LDL-like states) of spontaneous crystallization after crystallites reach an estimated critical size of about 70 ± 10 molecules.
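One way to exhibit the "clear differences between the two liquids" in such constant-pressure runs is to histogram the instantaneous density and look for bimodality. The sketch below assumes a pre-computed NumPy array of densities from a trajectory; the bin count and the peak-splitting rule are illustrative choices, not the authors' analysis.

```python
import numpy as np

def classify_liquid_states(density, n_bins=60):
    """Histogram instantaneous densities (e.g., in g/cm^3) from an NPT trajectory
    and report the fraction of low-density (LDL-like) and high-density (HDL-like)
    configurations, split at the least-populated bin between the two main peaks."""
    counts, edges = np.histogram(density, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    i_lo, i_hi = sorted(np.argsort(counts)[-2:])          # two most populated bins
    split = centers[i_lo + np.argmin(counts[i_lo:i_hi + 1])]  # valley between them
    return {"split_density": float(split),
            "ldl_fraction": float(np.mean(density < split)),
            "hdl_fraction": float(np.mean(density >= split))}
```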
Abstract:
Iowa has the same problem that confronts most states in the United States: many bridges constructed more than 20 years ago either have deteriorated to the point that they are inadequate for their original design loads or have been rendered inadequate by changes in design/maintenance standards or design loads. Inadequate bridges require either strengthening or posting for reduced loads. A sizeable number of single-span, composite concrete deck - steel I-beam bridges in Iowa currently cannot be rated to carry today's design loads. Various methods for strengthening the unsafe bridges have been proposed and some have been tried. No method appears to be as economical and promising as strengthening by post-tensioning the steel beams. At the time this research study was begun, the feasibility of post-tensioning existing composite bridges was unknown. As one would expect, the design of a bridge-strengthening scheme utilizing post-tensioning is quite complex. The design involves composite construction stressed in an abnormal manner (possible tension in the deck slab), consideration of different sizes of exterior and interior beams, cover-plated beams already designed for maximum moment at midspan and at plate cut-off points, complex live-load distribution, and distribution of post-tensioning forces and moments among the bridge beams. Although information is available on many of these topics, there is minimal information on several of them and no information available on the total design problem. This study, therefore, is an effort to gather some of the missing information, primarily by testing a half-size bridge model, and thus to determine the feasibility of strengthening composite bridges by post-tensioning. Based on the results of this study, the authors anticipate that a second phase of the study will be undertaken and directed toward strengthening one or more prototype bridges in Iowa.
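The mechanics behind the strengthening scheme can be sketched with a one-beam calculation: an eccentric post-tensioning force adds axial compression and a hogging moment that offset part of the live-load tension at the bottom flange. All section properties and forces below are illustrative numbers, not data from the half-size model.

```python
def bottom_flange_stress(M_live, P, e, A, S_bottom):
    """Net stress (ksi, tension positive) at the bottom flange of a composite beam.

    M_live   : live-load moment at midspan (kip-in)
    P        : post-tensioning force applied below the neutral axis (kip)
    e        : eccentricity of P below the section neutral axis (in)
    A        : transformed section area (in^2)
    S_bottom : section modulus to the bottom flange (in^3)
    """
    live_load = M_live / S_bottom          # tension from traffic loads
    axial = -P / A                         # uniform compression from the tendons
    prestress_moment = -P * e / S_bottom   # hogging moment relieves bottom-flange tension
    return live_load + axial + prestress_moment

# Illustrative: a 3000 kip-in live-load moment partially offset by a 100 kip tendon force
print(bottom_flange_stress(M_live=3000, P=100, e=12, A=60, S_bottom=250))  # ~5.5 ksi
```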
Abstract:
This work is concerned with the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared with a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases and very-high-resolution images, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems.
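As a minimal sketch of clustering fitted by stochastic gradient descent, the snippet below implements plain online k-means on raw features rather than the joint feature-extraction network described in the thesis; the cluster count, learning-rate schedule, and toy data are arbitrary assumptions.

```python
import numpy as np

def sgd_kmeans(X, k=5, epochs=3, lr0=0.5, seed=0):
    """Online k-means: each sample nudges its nearest centroid, so the method
    streams through arbitrarily large datasets without storing pairwise distances."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    counts = np.ones(k)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x = X[i]
            j = np.argmin(((centroids - x) ** 2).sum(axis=1))  # nearest centroid
            counts[j] += 1
            centroids[j] += (lr0 / counts[j]) * (x - centroids[j])  # stochastic update
    labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(axis=2), axis=1)
    return centroids, labels

# Example on toy data standing in for pixel spectra of a hyperspectral image
X = np.random.default_rng(1).normal(size=(1000, 10))
centroids, labels = sgd_kmeans(X, k=4)
```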
Abstract:
Children under 10 passively smoking 14 cigarettes! From April 2010 to April 2011, the exposure of 148 children (81 boys and 67 girls) was tested: 10 children under one year of age, 25 aged 1 to 5, 19 aged 5 to 10, 30 aged 10 to 15 and 64 aged 15 to 18. Ten of them are smokers, and the youngest, aged 14, smokes 10 cigarettes per day. Their parents, or sometimes the young people themselves, voluntarily ordered a free MoNIC badge through the websites of the CIPRET Valais, Vaud and Geneva. The results on these children's exposure are striking and deserve attention. For the children as a whole, the mean nicotine concentration in their indoor environment measured with the MoNIC badges was 0.5 mg/m3, with maxima of up to 21 mg/m3. For the group of children under 10 years of age (26 boys and 28 girls, all non-smokers), the nicotine concentration is not negligible (mean 0.069 mg/m3, min 0, max 0.583 mg/m3). Converting this result into the equivalent number of passively inhaled cigarettes yields figures ranging from 0 to 14 cigarettes per day*, with an average of 1.6 cig/day. Even more surprisingly, children under one year of age (4 boys and 6 girls) passively inhale, within the family setting, an average of 1 cigarette (min 0, max 2.2). For the two other groups, 10-15 years and 15-18 years, the maximum values approach 22 cigarettes. Note, however, that this result is influenced, unlike for the younger children, by the fact that these adolescents are themselves sometimes active smokers. *When the exposure duration exceeded one day (8 hours), the number of hours was always divided by 8 hours; the result gives the equivalent number of cigarettes passively smoked over eight hours. It is therefore an average, which means that during this period the children may have been exposed irregularly to values above or below this mean. [Authors]
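The footnote's normalization (averaging exposures longer than 8 hours back to 8-hour blocks) can be written as a short calculation. The conversion factor from a mean nicotine concentration to "cigarettes passively smoked" is not given in the abstract; the constant below is a hypothetical value chosen only so that the reported average (0.069 mg/m3, about 1.6 cig/day) is roughly reproduced.

```python
def passive_cig_per_8h(mean_nicotine_mg_m3, exposure_hours, cig_per_mg_m3_8h=23.0):
    """Cigarette equivalents passively smoked per 8-hour block.

    cig_per_mg_m3_8h is a HYPOTHETICAL calibration constant; the actual MoNIC
    conversion factor is not stated in the text. Exposures longer than 8 hours
    are averaged over 8-hour blocks, as in the study's footnote.
    """
    total_equivalents = mean_nicotine_mg_m3 * cig_per_mg_m3_8h * (exposure_hours / 8.0)
    blocks = max(exposure_hours / 8.0, 1.0)
    return total_equivalents / blocks

print(passive_cig_per_8h(0.069, 24))  # ~1.6 cigarettes per 8-hour block
```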
Abstract:
The aim of this article is to seek elements for a better understanding of the current lower academic status of teacher-training (licenciatura) programs in Brazilian universities and of the consequent difficulties these programs face in implementing significant changes. We analyze the results of a socio-historical investigation carried out in the Biological Sciences program of the Universidade Federal de Minas Gerais - UFMG. The history of this field, used here as a case study, reveals points that may contribute to a better understanding of the lower academic prestige of teacher-education programs in Brazilian higher education institutions.
Abstract:
This dissertation is concerned with the development of algorithmic methods for the unsupervised learning of natural language morphology, using a symbolically transcribed wordlist. It focuses on the case of languages approaching the introflectional type, such as Arabic or Hebrew. The morphology of such languages is traditionally described in terms of discontinuous units: consonantal roots and vocalic patterns. Inferring this kind of structure is a challenging task for current unsupervised learning systems, which generally operate with continuous units. In this study, the problem of learning root-and-pattern morphology is divided into a phonological and a morphological subproblem. The phonological component of the analysis seeks to partition the symbols of a corpus (phonemes, letters) into two subsets that correspond well with the phonetic definition of consonants and vowels; building on this result, the morphological component attempts to establish the list of roots and patterns in the corpus, and to infer the rules that govern their combinations. We assess the extent to which this can be done on the basis of two hypotheses: (i) the distinction between consonants and vowels can be learned by observing their tendency to alternate in speech; (ii) roots and patterns can be identified as sequences of the previously discovered consonants and vowels, respectively. The proposed algorithm uses a purely distributional method for partitioning symbols. It then applies analogical principles to identify a preliminary set of reliable roots and patterns, and gradually enlarges it. This extension process is guided by an evaluation procedure based on the minimum description length principle, in line with the approach to morphological learning embodied in LINGUISTICA (Goldsmith, 2001). The algorithm is implemented as a computer program named ARABICA; it is evaluated with regard to its ability to account for the system of plural formation in a corpus of Arabic nouns. This thesis shows that complex linguistic structures can be discovered without recourse to a rich set of a priori hypotheses about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and attempts to determine where and why such cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structure is crucial for understanding the strengths and weaknesses of this approach.
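The "purely distributional method for partitioning symbols" can be illustrated with a classic alternation-based heuristic (a Sukhotin-style procedure); this is a generic sketch, not necessarily the exact algorithm implemented in ARABICA, and the toy wordlist is invented.

```python
from collections import defaultdict

def sukhotin_vowels(words):
    """Sukhotin-style distributional split of symbols into 'vowels' and 'consonants',
    based on the tendency of the two classes to alternate in the written chain."""
    # Symmetric adjacency counts between neighbouring symbols; self-adjacencies ignored.
    adj = defaultdict(int)
    symbols = set()
    for w in words:
        symbols.update(w)
        for a, b in zip(w, w[1:]):
            if a != b:
                adj[(a, b)] += 1
                adj[(b, a)] += 1
    sums = {s: sum(adj[(s, t)] for t in symbols) for s in symbols}
    vowels = set()
    while True:
        candidate = max((s for s in symbols if s not in vowels),
                        key=lambda s: sums[s], default=None)
        if candidate is None or sums[candidate] <= 0:
            break
        vowels.add(candidate)  # reclassify the most 'alternating' symbol as a vowel
        for s in symbols:
            if s not in vowels:
                sums[s] -= 2 * adj[(candidate, s)]
    return vowels, symbols - vowels

# Toy wordlist loosely inspired by Arabic transcriptions
vowels, consonants = sukhotin_vowels(["kataba", "kutiba", "maktab", "kitaab", "kaatib"])
print(vowels, consonants)
```

On this toy list the heuristic returns {a, i, u} as vowels, after which the morphological step described above would read candidate roots such as k-t-b off the remaining consonant sequences.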
Abstract:
Chronic hepatitis C is a major healthcare problem. The response to antiviral therapy for patients with chronic hepatitis C has previously been defined biochemically and by PCR. However, changes in the hepatic venous pressure gradient (HVPG) may be considered an adjunctive end point for the therapeutic evaluation of antiviral therapy in chronic hepatitis C. HVPG measurement is a validated technique that is safe, well tolerated, well established, and reproducible. Serial HVPG measurements may be the best way to evaluate the response to therapy in chronic hepatitis C.
Abstract:
Vascular access-related infections are one of the main causes of nosocomial infections. They include colonization of the device by microorganisms, insertion-site infections, and the bacteremias and fungemias attributed to them. A bacteremia complicates on average 3 to 5 of every 100 venous lines, or 2 to 14 episodes per 1000 catheter-days. This proportion is only the visible part of the iceberg, since most episodes of clinical sepsis without an apparent associated infectious focus are currently considered secondary to vascular access devices. The principles of treatment are presented after a brief review of their pathophysiology. Several preventive approaches are then discussed, including recent data on the use of catheters impregnated with disinfectants or antibiotics.