980 results for Proximal Point Algorithm


Relevance:

20.00%

Publisher:

Abstract:

Objective: To implement a carotid-sparing protocol using helical tomotherapy (HT) in T1N0 squamous-cell laryngeal carcinoma. Materials/Methods: Between July and August 2010, 7 men with stage T1N0 laryngeal carcinoma were included in this study. Ages ranged from 47 to 74 years. Staging included endoscopic examination, CT scan and MRI when indicated. The planned irradiation dose was 70 Gy in 35 fractions over 7 weeks. A simple treatment planning algorithm for carotid sparing was used: maximum point dose to the carotids 35 Gy, to the spinal cord 30 Gy, and 100% of the PTV volume to be covered with 95% of the prescribed dose. The carotid volume of interest extended to 1 cm above and below the PTV. Doses to the carotid arteries, critical organs, and planning target volume (PTV) were compared with our standard laryngeal irradiation protocol. Daily megavoltage scans were obtained before each fraction. When necessary, the Planned Adaptive software (TomoTherapy Inc., Madison, WI) was used to evaluate the need for re-planning, which was never indicated. Dose data were extracted using the VelocityAI software (Atlanta, GA), and data normalization and dose-volume histogram (DVH) interpolation were performed using the Igor Pro software (Portland, OR). Results: A significant (p < 0.05) carotid dose sparing compared to our standard protocol was achieved, with an average maximum point dose of 38.3 Gy (standard deviation [SD] 4.05 Gy) and an average mean dose of 18.59 Gy (SD 0.83 Gy). In all patients, 95% of the carotid volume received less than 28.4 Gy (SD 0.98 Gy). The average maximum point dose to the spinal cord was 25.8 Gy (SD 3.24 Gy). The PTV was fully covered with more than 95% of the prescribed dose in all patients, with an average maximum point dose of 74.1 Gy and an absolute maximum dose in a single patient of 75.2 Gy. To date, the clinical outcomes have been excellent. Three patients (42%) developed stage 1 mucositis that was managed conservatively, and all patients presented mild to moderate dysphonia. All adverse effects resolved spontaneously in the month following the end of treatment. The early local control rate is 100% at 4-5 months of post-treatment follow-up. Conclusions: HT allows a clinically significant decrease of the carotid irradiation dose compared with standard irradiation protocols, with an acceptable spinal cord dose tradeoff. Moreover, this technique allows the PTV to be homogeneously covered with a curative irradiation dose. Daily control imaging brings added safety margins, especially when working with high dose gradients. Further investigations and follow-up are underway to better evaluate the late clinical outcomes, in particular the local control rate, late laryngeal and vascular toxicity, and the expected potential impact on cerebrovascular events.
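The dose metrics reported above (maximum point dose, mean dose, and the dose below which 95% of a structure's volume lies) are standard dose-volume histogram statistics. The following is a minimal sketch of how such metrics can be computed from a per-voxel dose array for a contoured structure; the voxel doses are hypothetical, and this is not the VelocityAI or Igor Pro workflow used in the study.

```python
import numpy as np

def dvh_metrics(structure_doses_gy, volume_percent=95.0):
    """Simple DVH statistics for one contoured structure.

    structure_doses_gy: 1-D array of per-voxel doses (Gy) inside the structure.
    volume_percent:     report the dose below which this percentage of the
                        structure volume lies (e.g. 95 -> "95% of the volume
                        received less than this dose").
    """
    doses = np.asarray(structure_doses_gy, dtype=float)
    max_point_dose = doses.max()
    mean_dose = doses.mean()
    dose_at_volume = np.percentile(doses, volume_percent)
    return max_point_dose, mean_dose, dose_at_volume

# Hypothetical carotid voxel doses (Gy), for illustration only.
rng = np.random.default_rng(0)
carotid = rng.normal(loc=18.6, scale=5.0, size=5000).clip(min=0.0)
dmax, dmean, d95vol = dvh_metrics(carotid)
print(f"max point dose {dmax:.1f} Gy, mean {dmean:.1f} Gy, "
      f"95% of volume below {d95vol:.1f} Gy")
```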

Relevance:

20.00%

Publisher:

Abstract:

We herein present a preliminary practical algorithm for evaluating complementary and alternative medicine (CAM) for children, which relies on basic bioethical principles and considers the influence of CAM on global child healthcare. CAM is currently involved in almost all sectors of pediatric care and frequently represents a challenge for the pediatrician. The aim of this article is to provide a decision-making tool to assist the physician, especially as it remains difficult to keep up to date with the latest developments in the field. The reasonable application of our algorithm, together with common sense, should enable the pediatrician to decide whether pediatric CAM (P-CAM) represents potential harm to the patient, and allow ethically sound counseling. In conclusion, we propose a pragmatic algorithm designed to evaluate P-CAM, briefly explain the underlying rationale, and give a concrete clinical example.

Relevance:

20.00%

Publisher:

Abstract:

Background In patients presenting with acute cardiac symptoms, an abnormal ECG and raised troponin, myocarditis may be suspected after normal angiography. Aims To analyse cardiac magnetic resonance (CMR) findings in patients with a provisional diagnosis of acute coronary syndrome (ACS) in whom acute myocarditis was subsequently considered more likely. Methods and results 79 patients referred for CMR following an admission with presumed ACS and raised serum troponin, in whom no culprit lesion was detected, were studied. 13% had unrecognised myocardial infarction and 6% takotsubo cardiomyopathy. The remainder (81%) were diagnosed with myocarditis. Mean age was 45±15 years and 70% were male. Left ventricular ejection fraction (EF) was 58±10%; myocardial oedema was detected in 58%. A myocarditic pattern of late gadolinium enhancement (LGE) was detected in 92%. Abnormalities were detected more frequently in scans performed within 2 weeks of symptom onset: oedema in 81% vs 11% (p<0.0005), and LGE in 100% vs 76% (p<0.005). In 20 patients with both an acute (<2 weeks) and a convalescent (>3 weeks) scan, oedema decreased from 84% to 39% (p<0.01) and LGE from 5.6 to 3.0 segments (p=0.005). Three patients presented with sustained ventricular tachycardia, another died suddenly 4 days after admission, and one was resuscitated 7 weeks after presentation. All 5 patients had preserved EF. Conclusions Our study emphasises the importance of access to CMR for heart attack centres. If myocarditis is suspected, CMR scanning should be performed within 14 days. Myocarditis should not be regarded as benign, even when EF is preserved.

Relevance:

20.00%

Publisher:

Abstract:

We present a numerical method for spectroscopic ellipsometry of thick transparent films. An analytical expression for the dispersion of the refractive index, containing several unknown coefficients, is assumed. The procedure fits these coefficients with the thickness held fixed, and the thickness is then varied within a range around its approximate value. The final result given by our method is as follows: the sample thickness is taken to be the one that yields the best fit, and the refractive index is defined by the coefficients obtained for that thickness.
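A minimal sketch of this fit-at-each-thickness search is given below. It assumes a Cauchy dispersion law n(λ) = A + B/λ², a toy interference-like model function, and synthetic "measured" data; the dispersion law, the model function, the parameter values and the thickness range are all illustrative assumptions, not the specific model used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(lam_nm, A, B, d_nm):
    """Placeholder optical model: interference-like signal of a transparent film."""
    n = A + B / lam_nm**2                 # Cauchy dispersion n(lambda) = A + B/lambda^2
    return np.cos(4.0 * np.pi * n * d_nm / lam_nm)

lam = np.linspace(400.0, 800.0, 400)      # wavelengths (nm)
true_A, true_B, true_d = 1.46, 3.5e3, 2000.0
measured = model(lam, true_A, true_B, true_d) \
    + 0.01 * np.random.default_rng(1).normal(size=lam.size)

best = None
for d in np.arange(1900.0, 2100.0, 1.0):  # scan thickness around its approximate value
    # Fit only the dispersion coefficients (A, B) at this fixed thickness.
    popt, _ = curve_fit(lambda l, A, B: model(l, A, B, d), lam, measured,
                        p0=(1.5, 3e3), maxfev=5000)
    resid = np.sum((model(lam, *popt, d) - measured) ** 2)
    if best is None or resid < best[0]:
        best = (resid, d, popt)

resid, d_best, (A_best, B_best) = best
print(f"best thickness ~ {d_best:.0f} nm, n(550 nm) ~ {A_best + B_best / 550.0**2:.3f}")
```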

Relevance:

20.00%

Publisher:

Abstract:

Previous studies support resorbable biocomposites made of poly(L-lactic acid) (PLA) and beta-tricalcium phosphate (TCP) produced by supercritical gas foaming as suitable scaffolds for tissue engineering. The present study was undertaken to demonstrate the biocompatibility and osteoconductive properties of such a scaffold in a large-animal cancellous bone model. The biocomposite (PLA/TCP) was compared with a currently used beta-TCP bone substitute (ChronOS, Dr. Robert Mathys Foundation), representing a positive control, and with empty defects, representing a negative control. Ten defects were created in sheep cancellous bone, three in the distal femur and two in the proximal tibia of each hind limb, with diameters of 5 mm and depths of 15 mm. New bone in-growth (osteoconductivity) and biocompatibility were evaluated using microcomputed tomography and histology at 2, 4 and 12 months after surgery. The in vivo study was validated by the positive control (good bone formation with ChronOS) and the negative control (no healing with the empty defect). A major finding of this study was the incorporation of the biocomposite in bone after 12 months. Bone in-growth was observed in the biocomposite scaffold, including its central part. Fibrous tissue formation was observed initially, at 2 and 4 months, but not at 12 months; this initial fibrous tissue does not preclude long-term application of the biocomposite, as demonstrated by its osteointegration after 12 months and by the absence of chronic or long-term inflammation at this time point.

Relevance:

20.00%

Publisher:

Abstract:

POCT (point-of-care tests) have great potential in ambulatory infectious diseases thanks to their rapid turnaround and their impact on antibiotic administration, on the diagnosis of certain communicable diseases and on their prevention. Some tests have been in use for several years (detection of Streptococcus pyogenes in pharyngitis, anti-HIV antibodies, S. pneumoniae urinary antigen, Plasmodium falciparum antigen). The main additional indications concern community-acquired lower respiratory tract infections, infectious diarrhoea in children (rotavirus, enterohaemorrhagic E. coli) and, hopefully, sexually transmitted infections. Easy to use, these tests based on antigen-antibody reactions allow a rapid diagnosis in less than one hour. A new generation of POCT relying on nucleic acid detection has just been introduced into practice (detection of group B streptococcus in pregnant women before delivery, and of carriage of methicillin-resistant Staphylococcus aureus) and will be extended to many pathogens.

Relevance:

20.00%

Publisher:

Abstract:

A stochastic nonlinear partial differential equation is constructed for two different models exhibiting self-organized criticality: the Bak-Tang-Wiesenfeld (BTW) sandpile model [Phys. Rev. Lett. 59, 381 (1987); Phys. Rev. A 38, 364 (1988)] and the Zhang model [Phys. Rev. Lett. 63, 470 (1989)]. The dynamic renormalization group (DRG) enables one to compute the critical exponents. However, the nontrivial stable fixed point of the DRG transformation is unreachable for the original parameters of the models. We introduce an alternative regularization of the step function involved in the threshold condition, which breaks the symmetry of the BTW model. Although the symmetry properties of the two models are different, it is shown that they both belong to the same universality class. In this case the DRG procedure leads to a symmetric behavior for both models, restoring the broken symmetry, and makes accessible the nontrivial fixed point. This technique could also be applied to other problems with threshold dynamics.
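For readers unfamiliar with the models named above, the following is a minimal sketch of the BTW sandpile rule (add a grain at a random site; any site whose height reaches the threshold topples, sending one grain to each neighbour), i.e. the threshold dynamics that the regularized step function stands in for. The lattice size, threshold and number of grains are illustrative choices, and this sketch is not the stochastic PDE or DRG calculation of the paper.

```python
import numpy as np

def btw_avalanche_sizes(L=32, threshold=4, n_grains=5000, seed=0):
    """Drive a 2-D BTW sandpile and record the size of each avalanche."""
    rng = np.random.default_rng(seed)
    z = np.zeros((L, L), dtype=int)       # local "heights"
    sizes = []
    for _ in range(n_grains):
        i, j = rng.integers(0, L, size=2)
        z[i, j] += 1                       # slow external drive: add one grain
        size = 0
        while True:
            unstable = np.argwhere(z >= threshold)
            if unstable.size == 0:
                break
            for i2, j2 in unstable:        # topple every unstable site
                z[i2, j2] -= threshold
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i2 + di, j2 + dj
                    if 0 <= ni < L and 0 <= nj < L:
                        z[ni, nj] += 1     # grains falling off the edge are lost
        sizes.append(size)
    return sizes

sizes = btw_avalanche_sizes()
print("largest avalanche:", max(sizes), "mean size:", sum(sizes) / len(sizes))
```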

Relevance:

20.00%

Publisher:

Abstract:

Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that will be identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as the depth, length or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies heavily on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where the smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). The curvature is, however, not straightforward to interpret, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with fine spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, which we name the local Gyrification Index (lGI(1)), we measure the amount of cortex buried within the sulcal folds as compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development. In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then further used for the creation of an outer surface, which serves as a basis for the lGI calculation. A circular region of interest is then delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm as described in our validation study(1). This process is iterated with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), where the folding index at each point is computed as the ratio of the cortical area contained in a sphere to the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface within a circular region of interest.
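A minimal sketch of the area ratio at the heart of such a measure is shown below: given the surface areas of a cortical (pial) patch and of the corresponding patch on the outer hull, the local gyrification value is their ratio. The triangle-area computation and the toy folded-strip example are illustrative assumptions; this is not the FreeSurfer matching algorithm described above.

```python
import numpy as np

def patch_area(vertices, faces):
    """Total area of a triangulated surface patch (sum of triangle areas)."""
    v = np.asarray(vertices, dtype=float)
    tri = v[np.asarray(faces)]                        # (n_faces, 3, 3)
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    return 0.5 * np.linalg.norm(cross, axis=1).sum()

def local_gyrification_index(pial_patch, hull_patch):
    """Ratio of (buried + visible) cortical area to the area of the outer-hull ROI."""
    return patch_area(*pial_patch) / patch_area(*hull_patch)

# Toy example: a folded (zig-zag) strip versus the flat hull strip above it.
xs = np.linspace(0.0, 10.0, 21)
pial_v = [(x, 0.0, 1.0 if i % 2 else 0.0) for i, x in enumerate(xs)] + \
         [(x, 1.0, 1.0 if i % 2 else 0.0) for i, x in enumerate(xs)]
hull_v = [(x, 0.0, 1.0) for x in xs] + [(x, 1.0, 1.0) for x in xs]
faces = [(i, i + 1, i + 21) for i in range(20)] + \
        [(i + 1, i + 22, i + 21) for i in range(20)]

print("lGI-style ratio:", local_gyrification_index((pial_v, faces), (hull_v, faces)))
```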

Relevance:

20.00%

Publisher:

Abstract:

The front form and the point form of dynamics are studied in the framework of predictive relativistic mechanics. The non-interaction theorem is proved when a Poincaré-invariant Hamiltonian formulation with canonical position coordinates is required.

Relevance:

20.00%

Publisher:

Abstract:

The liquid-liquid critical point scenario of water hypothesizes the existence of two metastable liquid phases, low-density liquid (LDL) and high-density liquid (HDL), deep within the supercooled region. The hypothesis originates from computer simulations of the ST2 water model, but the stability of the LDL phase with respect to the crystal is still being debated. We simulate supercooled ST2 water at constant pressure, constant temperature, and constant number of molecules N for N ≤ 729 and times up to 1 μs. We observe clear differences between the two liquids, both structural and dynamical. Using several methods, including finite-size scaling, we confirm the presence of a liquid-liquid phase transition ending in a critical point. We find that the LDL is stable with respect to the crystal in 98% of our runs (we perform 372 runs for LDL or LDL-like states), and in 100% of our runs for the two largest system sizes (N = 512 and 729, for which we perform 136 runs for LDL or LDL-like states). In all these runs, tiny crystallites grow and then melt within 1 μs. Only for N ≤ 343 do we observe six events (over 236 runs for LDL or LDL-like states) of spontaneous crystallization after crystallites reach an estimated critical size of about 70 ± 10 molecules.

Relevance:

20.00%

Publisher:

Abstract:

This work is concerned with the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems.
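As the simplest concrete instance of a clustering model trained by stochastic gradient descent, the sketch below performs online k-means: the centroids are the model parameters, each sample nudges its nearest centroid, and new points can later be assigned without re-clustering (avoiding the out-of-sample problem). This is an illustrative stand-in, not the functional model or neural network developed in the thesis; the toy data and hyperparameters are assumptions.

```python
import numpy as np

def sgd_kmeans(X, k=3, epochs=5, lr=0.1, seed=0):
    """Online k-means: SGD on the quantization loss sum_i min_k ||x_i - c_k||^2."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x = X[i]
            nearest = np.argmin(np.linalg.norm(centroids - x, axis=1))
            centroids[nearest] += lr * (x - centroids[nearest])   # stochastic gradient step
    return centroids

def assign(X, centroids):
    """Out-of-sample assignment: nearest centroid for each (possibly new) point."""
    return np.argmin(np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2), axis=1)

# Toy data: three Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.3, size=(200, 2)) for mu in ((0, 0), (3, 0), (0, 3))])
C = sgd_kmeans(X, k=3)
print("cluster sizes:", np.bincount(assign(X, C)))
```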

Relevance:

20.00%

Publisher:

Abstract:

Children under 10 passively smoking 14 cigarettes! From April 2010 to April 2011, the exposure of 148 children (81 boys and 67 girls) was tested: 10 children under one year of age, 25 aged 1 to 5, 19 aged 5 to 10, 30 aged 10 to 15 and 64 aged 15 to 18. Ten of them are smokers, and the youngest, aged 14, smokes 10 cigarettes a day. Their parents, or sometimes the young people themselves, voluntarily ordered a free MoNIC badge via the websites of the CIPRET Valais, Vaud and Genève. The results concerning these children's exposure are striking and deserve attention. For the children as a whole, the mean nicotine concentration in their indoor environment, measured with the MoNIC badges, was 0.5 mg/m3, with maxima of up to 21 mg/m3. For the group of children under 10 (26 boys and 28 girls; all non-smokers), the nicotine concentration is not negligible (mean 0.069 mg/m3, min 0, max 0.583 mg/m3). Converting this result into the equivalent number of passively inhaled cigarettes gives figures ranging from 0 to 14 cigarettes per day*, with a mean of 1.6 cigarettes/day. Even more surprisingly, children under one year of age (4 boys and 6 girls) passively inhale, within the family setting, on average 1 cigarette (min 0, max 2.2). For the two other groups, 10-15 years and 15-18 years, the maximum values approach 22 cigarettes. Note, however, that this result is influenced, unlike that of the younger children, by the fact that these adolescents are sometimes active smokers themselves. * When the exposure duration exceeded one day (8 hours), the number of hours was divided by 8; the result gives the equivalent number of cigarettes smoked passively over eight hours. It is therefore an average, meaning that during this period the children may have been irregularly exposed to values above or below this average. [Authors]

Relevance:

20.00%

Publisher:

Abstract:

This dissertation is concerned with the development of algorithmic methods for the unsupervised learning of natural language morphology, using a symbolically transcribed wordlist. It focuses on the case of languages approaching the introflectional type, such as Arabic or Hebrew. The morphology of such languages is traditionally described in terms of discontinuous units: consonantal roots and vocalic patterns. Inferring this kind of structure is a challenging task for current unsupervised learning systems, which generally operate with continuous units. In this study, the problem of learning root-and-pattern morphology is divided into a phonological and a morphological subproblem. The phonological component of the analysis seeks to partition the symbols of a corpus (phonemes, letters) into two subsets that correspond well with the phonetic definition of consonants and vowels; building on this result, the morphological component attempts to establish the list of roots and patterns in the corpus, and to infer the rules that govern their combinations. We assess the extent to which this can be done on the basis of two hypotheses: (i) the distinction between consonants and vowels can be learned by observing their tendency to alternate in speech; (ii) roots and patterns can be identified as sequences of the previously discovered consonants and vowels, respectively. The proposed algorithm uses a purely distributional method for partitioning symbols. It then applies analogical principles to identify a preliminary set of reliable roots and patterns, and gradually enlarges it. This extension process is guided by an evaluation procedure based on the minimum description length principle, in line with the approach to morphological learning embodied in LINGUISTICA (Goldsmith, 2001). The algorithm is implemented as a computer program named ARABICA; it is evaluated with regard to its ability to account for the system of plural formation in a corpus of Arabic nouns. This thesis shows that complex linguistic structures can be discovered without recourse to a rich set of a priori hypotheses about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and attempts to determine where and why such cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structure is crucial for understanding the advantages and weaknesses of this approach.
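The second hypothesis above, that roots and patterns are simply the consonant and vowel subsequences of a word, can be illustrated with a small sketch. It assumes the consonant/vowel partition has already been obtained (here it is hard-coded), and the transliterated Arabic nouns are illustrative examples; this is not the ARABICA implementation itself.

```python
# Hypothesis (ii): a word's root is its consonant subsequence, and its pattern
# is the word with every consonant slot replaced by a placeholder.
CONSONANTS = set("ktbdrsmlqfjnhw'")   # assumed output of the distributional step

def root_and_pattern(word):
    root = "".join(ch for ch in word if ch in CONSONANTS)
    pattern = "".join("C" if ch in CONSONANTS else ch for ch in word)
    return root, pattern

# Transliterated Arabic nouns (singular / broken plural), for illustration.
for word in ["kitaab", "kutub", "qalam", "aqlaam", "rajul", "rijaal"]:
    print(word, "->", root_and_pattern(word))
```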