930 results for four-point probe method
Abstract:
This paper presents a new nonparametric atlas registration framework, derived from the optical flow model and active contour theory, applied to automatic subthalamic nucleus (STN) targeting in deep brain stimulation (DBS) surgery. In a previous work, we demonstrated that the STN position can be predicted based on the position of surrounding visible structures, namely the lateral and third ventricles. An STN targeting process can thus be obtained by registering these structures of interest between a brain atlas and the patient image. Here we aim to improve on the results of state-of-the-art targeting methods while reducing the computational time. Our simultaneous segmentation and registration model shows mean STN localization errors statistically similar to those of the best-performing registration algorithms tested so far and to the targeting expert's variability. Moreover, the computational time of our registration method is much lower, which is a worthwhile improvement from a clinical point of view.
Abstract:
Background: Our goal was to determine whether short-term intermittent hypoxia exposure, at a level well tolerated by healthy humans and previously shown by our group to increase EPO and erythropoiesis, could mobilize hematopoietic stem cells (HSC) and increase their presence in peripheral circulation. Methods: Four healthy male subjects were subjected to three different protocols: one with only a hypoxic stimulus (OH), another with a hypoxic stimulus plus muscle electrostimulation (HME) and the third with only muscle electrostimulation (OME). Intermittent hypobaric hypoxia exposure consisted of only three sessions of three hours at a barometric pressure of 540 hPa (equivalent to an altitude of 5000 m) for three consecutive days, whereas muscular electrostimulation was performed in two separate periods of 25 min in each session. Blood samples were obtained from an antecubital vein on three consecutive days immediately before the experiment and 24 h, 48 h, 4 days and 7 days after the last day of hypoxic exposure. Results: There was a clear increase in the number of circulating CD34+ cells after combined hypobaric hypoxia and muscular electrostimulation. This response was not observed after the isolated application of the same stimuli. Conclusion: Our results open a new field of application for hypobaric systems as a way to increase efficiency in peripheral HSC collection.
Abstract:
In the administration, planning, design, and maintenance of road systems, transportation professionals often need to choose between alternatives, justify decisions, evaluate tradeoffs, determine how much to spend, set priorities, assess how well the network meets traveler needs, and communicate the basis for their actions to others. A variety of technical guidelines, tools, and methods have been developed to help with these activities. Such work aids include design criteria guidelines, design exception analysis methods, needs studies, revenue allocation schemes, regional planning guides, designation of minimum standards, sufficiency ratings, management systems, point-based systems to determine eligibility for paving, functional classification, and bridge ratings. While such tools play valuable roles, they also manifest a number of deficiencies and are poorly integrated. Design guides tell what solutions MAY be used, but they are not oriented towards helping find which one SHOULD be used. Design exception methods help justify deviation from design guide requirements but omit consideration of important factors. Resource distribution is too often based on dividing up what is available rather than on helping determine how much should be spent. Point systems serve well as procedural tools but are employed primarily to justify decisions that have already been made. In addition, the tools are not very scalable: a system-level method of analysis seldom works at the project level and vice versa. In conjunction with the issues cited above, the operation and financing of the road and highway system is often the subject of criticisms that raise fundamental questions: What is the best way to determine how much money should be spent on a city's or a county's road network? Is the size and quality of the rural road system appropriate? Is too much or too little money spent on road work? What parts of the system should be upgraded and in what sequence? Do truckers receive a hidden subsidy from other motorists? Do transportation professionals evaluate road situations from too narrow a perspective? In considering these issues and questions, the author concluded that it would be of value to identify and develop a new method that would overcome the shortcomings of existing methods, be scalable, be capable of being understood by the general public, and take a broad viewpoint. After trying out a number of concepts, it appeared that a good approach would be to view the road network as a sub-component of a much larger system that also includes vehicles, people, goods-in-transit, and all the ancillary items needed to make the system function. Highway investment decisions could then be made on the basis of how they affect the total cost of operating the total system. A concept, named the "Total Cost of Transportation" method, was then developed and tested. The concept rests on four key principles: 1) roads are but one sub-system of a much larger 'Road Based Transportation System'; 2) the size and activity level of the overall system are determined by market forces; 3) the sum of everything expended, consumed, given up, or permanently reserved in building the system and generating the activity that results from the market forces represents the total cost of transportation; and 4) the economic purpose of making road improvements is to minimize that total cost. To test the practical value of the theory, a special database and spreadsheet model of Iowa's county road network was developed.
This involved creating a physical model to represent the size, characteristics, activity levels, and rates at which the activities take place, developing a companion economic cost model, and then using the two in tandem to explore a variety of issues. Ultimately, the theory and model proved capable of being used at the full-system, partial-system, single-segment, project, and general design guide levels of analysis. The method appeared capable of remedying many of the deficiencies of existing work methods and of answering society's transportation questions from a new perspective.
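As a rough, hedged illustration of the "Total Cost of Transportation" principle summarized above (the segment, cost categories, and dollar figures below are hypothetical and do not come from the Iowa database or spreadsheet model), the following Python sketch compares two improvement alternatives for a single road segment by the total annual cost of operating the road-based system rather than by agency cost alone:

    # Hypothetical illustration: pick the alternative that minimizes the total
    # annual cost of the road-based transportation system (agency cost plus the
    # costs borne by vehicles, people, and goods-in-transit), not agency cost alone.
    alternatives = {
        "do_nothing": {
            "agency_cost": 15_000,             # annual maintenance only
            "vehicle_operating_cost": 420_000,
            "travel_time_cost": 310_000,
            "crash_cost": 95_000,
        },
        "pave_and_widen": {
            "agency_cost": 120_000,            # annualized construction + maintenance
            "vehicle_operating_cost": 350_000,
            "travel_time_cost": 260_000,
            "crash_cost": 60_000,
        },
    }

    def total_cost(components):
        """Total annual cost of operating this piece of the system."""
        return sum(components.values())

    for name, parts in alternatives.items():
        print(f"{name}: total annual cost = ${total_cost(parts):,.0f}")
    best = min(alternatives, key=lambda name: total_cost(alternatives[name]))
    print(f"Preferred alternative under the total-cost criterion: {best}")

Under these made-up numbers the paving alternative wins because the savings to system users exceed the added agency cost, which is the kind of trade-off the total-cost method is intended to expose.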
Abstract:
The Iowa State Highway Commission Laboratory is called upon to determine the cement content of hardened concrete when field problems relating to batch weights are encountered. The standard test for determining cement content is ASTM C-85. An investigation of this method by the New Jersey State Highway Department, involving duplicate samples and four cooperating laboratories, produced very erratic results; however, the results obtained by this method have not been directly compared to the known cement contents of concrete made with the various cements and aggregates used in Iowa.
Abstract:
BACKGROUND: Pediatric intensive care patients represent a population at high risk for drug-related problems. There are few studies that compare the activity of clinical pharmacists between countries. OBJECTIVE: To describe the drug-related problems identified, and the interventions made, by four pharmacists in pediatric cardiac and intensive care units. SETTING: Four pediatric centers in France, Quebec, Switzerland and Belgium. METHOD: This was a six-month multicenter, descriptive and prospective study conducted from August 1, 2009 to January 31, 2010. Drug-related problems and clinical interventions were compiled from the four pediatric centers. Data on patients, drugs, intervention, documentation, approval and estimated impact were compiled. MAIN OUTCOME MEASURE: Number and type of drug-related problems encountered in a large pediatric inpatient population. RESULTS: A total of 996 interventions were recorded: 238 (24 %) in France, 278 (28 %) in Quebec, 351 (35 %) in Switzerland and 129 (13 %) in Belgium. These interventions targeted 270 patients (median 21 months old, 53 % male): 88 (33 %) in France, 56 (21 %) in Quebec, 57 (21 %) in Switzerland and 69 (26 %) in Belgium. The main drug-related problems were inappropriate administration technique (29 %), untreated indication (25 %) and supra-therapeutic dose (11 %). The pharmacists' interventions mostly involved optimizing the mode of administration (22 %), dose adjustment (20 %) and therapeutic monitoring (16 %). The two major drug classes that led to interventions were anti-infectives for systemic use (23 %) and digestive system and metabolism drugs (22 %). Interventions mainly involved residents and all clinical staff (21 %). Among the 878 (88 %) proposed interventions requiring physician approval, 860 (98 %) were accepted. CONCLUSION: This descriptive study illustrates drug-related problems and the ability of clinical pharmacists to identify and resolve them in pediatric intensive care units in four French-speaking countries.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to being implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis.
In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for geostatistical analysis of anisotropic spatial correlations, which helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessments and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
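Because the abstract above highlights the general regression neural network (GRNN) for automatic mapping, a minimal sketch of the underlying idea may help: a GRNN prediction is essentially Nadaraya-Watson kernel regression, a distance-weighted average of all training values controlled by a single smoothing parameter sigma. This is a generic illustration, not the Machine Learning Office implementation, and the station coordinates, values, and sigma below are invented:

    import numpy as np

    def grnn_predict(train_xy, train_z, query_xy, sigma=1.0):
        """GRNN / Nadaraya-Watson kernel regression: each prediction is a
        Gaussian-weighted average of all training values."""
        train_xy = np.asarray(train_xy, dtype=float)
        train_z = np.asarray(train_z, dtype=float)
        query_xy = np.asarray(query_xy, dtype=float)
        # squared distances between every query point and every training point
        d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w @ train_z) / w.sum(axis=1)

    # hypothetical monitoring stations (x, y) and measured values
    stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
    values = [2.0, 3.0, 4.0, 5.0]
    grid = [(0.5, 0.5), (0.9, 0.1)]
    print(grnn_predict(stations, values, grid, sigma=0.5))

In practice sigma would be tuned, for example by cross-validation; having a single free parameter is part of what makes this approach attractive for automatic mapping.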
Abstract:
The diagnosis of chronic inflammatory demyelinating polyneuropathy (CIDP) is based on a set of clinical and neurophysiological parameters. However, in clinical practice, CIDP remains difficult to diagnose in atypical cases. In the present study, 32 experts from 22 centers (the French CIDP study group) were asked individually to score four typical and seven atypical CIDP observations (TOs and AOs, respectively) reported by other physicians, according to the Delphi method. The diagnoses of CIDP were confirmed by the group in 96.9 % of the TOs and 60.1 % of the AOs (p < 0.0001). There was a positive correlation between the consensus on the CIDP diagnosis and the demyelinating features (r = 0.82, p < 0.004). The European CIDP classification was used in 28.3 % of the TOs and 18.2 % of the AOs (p < 0.002). The French CIDP study group diagnostic strategy was used in 90 % of the TOs and 61 % of the AOs (p < 0.0001). In 3 % of the TOs and 21.6 % of the AOs, the experts had difficulty determining a final diagnosis due to a lack of information. This study shows that a set of criteria and a diagnostic strategy are not sufficient to reach a consensus on the diagnosis of atypical CIDP in clinical practice.
Abstract:
Determination of brain glucose transport kinetics in vivo at steady state typically does not allow the apparent maximum transport rate (T(max)) to be distinguished from the cerebral consumption rate. Using a four-state conformational model of glucose transport, we show that simultaneous dynamic measurement of brain and plasma glucose concentrations provides enough information for independent and reliable determination of the two rates. In addition, although dynamic glucose homeostasis can be described with a reversible Michaelis-Menten model, which is implied by the large iso-inhibition constant (K(ii)) relative to physiological brain glucose content, we found that the apparent affinity constant (K(t)) was better determined with the four-state conformational model of glucose transport than with any of the other models tested. Furthermore, we confirmed the utility of the present method for determining glucose transport and consumption by analysing the modulation of both by anaesthesia conditions that modify cerebral activity. In particular, deep thiopental anaesthesia caused a significant reduction of both T(max) and the cerebral metabolic rate of glucose consumption. In conclusion, dynamic measurement of brain glucose in vivo as a function of plasma glucose allows robust determination of both glucose uptake and consumption kinetics.
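To make the quantities in the abstract above concrete, the sketch below simulates brain glucose under the simpler reversible Michaelis-Menten description mentioned there (the limit in which K(ii) is very large), where net transport is Tmax*(Gp - Gb)/(Kt + Gp + Gb) and consumption is a constant CMRglc. This is an illustrative simplification, not the authors' four-state conformational model, and every parameter value below is invented:

    import numpy as np

    def simulate_brain_glucose(Gp_of_t, Tmax, Kt, CMRglc, Gb0=1.0, dt=0.01, T=60.0):
        """Euler integration of a reversible Michaelis-Menten transport model:
            dGb/dt = Tmax*(Gp - Gb)/(Kt + Gp + Gb) - CMRglc
        Times in minutes, concentrations in umol/g, rates in umol/g/min."""
        times = np.arange(0.0, T, dt)
        Gb = np.empty_like(times)
        Gb[0] = Gb0
        for i in range(1, len(times)):
            Gp = Gp_of_t(times[i - 1])
            influx = Tmax * (Gp - Gb[i - 1]) / (Kt + Gp + Gb[i - 1])
            Gb[i] = Gb[i - 1] + dt * (influx - CMRglc)
        return times, Gb

    # hypothetical step increase in plasma glucose during an infusion
    plasma = lambda t: 5.0 if t < 10.0 else 15.0
    t, Gb = simulate_brain_glucose(plasma, Tmax=0.9, Kt=3.0, CMRglc=0.3)
    print(f"brain glucose after {t[-1]:.0f} min: {Gb[-1]:.2f} umol/g")

Fitting such a forward model simultaneously to measured brain and plasma time courses is what allows Tmax and CMRglc to be separated, which a single steady-state measurement cannot do.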
Abstract:
Based on previous work (Hemelrijk 1998; Puga-González, Hildenbrant & Hemelrijk 2009), we have developed an agent-based model and software, called A-KinGDom, which allows us to simulate the emergence of social structure in a group of non-human primates. The model includes dominance and affiliative interactions and incorporates two main innovations (preliminary dominance interactions and a kinship factor), which allow us to define four different attack and affiliative strategies. In accordance with these strategies, we compared the data obtained under four simulation conditions with the results obtained in a previous study (Dolado & Beltran 2012) involving empirical observations of a captive group of mangabeys (Cercocebus torquatus).
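For readers unfamiliar with this family of models, the hedged sketch below shows the kind of dominance-interaction update used in DomWorld-style models (Hemelrijk 1998), on which the cited work builds: the probability that an agent wins depends on the two agents' relative dominance values, and the outcome then shifts both values by an amount proportional to how unexpected it was. It is a generic illustration only; A-KinGDom's preliminary dominance interactions, kinship factor, and affiliative strategies are not reproduced, and the parameter values are invented:

    import random

    def dominance_interaction(dom_i, dom_j, step=0.1, rng=random.random):
        """One DomWorld-style dominance interaction between agents i and j.
        Agent i wins with probability dom_i / (dom_i + dom_j); the outcome
        then shifts both dominance values (winner/loser effect)."""
        p_i_wins = dom_i / (dom_i + dom_j)
        outcome = 1.0 if rng() < p_i_wins else 0.0   # 1 = i wins, 0 = j wins
        change = step * (outcome - p_i_wins)
        return dom_i + change, dom_j - change

    # two hypothetical agents that start with equal dominance and interact repeatedly
    d1, d2 = 1.0, 1.0
    for _ in range(100):
        d1, d2 = dominance_interaction(d1, d2)
    print(f"dominance after 100 interactions: {d1:.2f} vs {d2:.2f}")

Repeated interactions of this kind are what allow a dominance hierarchy, and hence a social structure, to emerge from initially identical agents.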
Abstract:
The factor structure of a back-translated Spanish version (Lega, Caballo and Ellis, 2002) of the Attitudes and Beliefs Inventory (ABI) (Burgess, 1990) is analyzed in a sample of 250 university students. The Spanish version of the ABI is a 48-item self-report inventory using a 5-point Likert scale that assesses rational and irrational attitudes and beliefs. Twenty-four items cover two dimensions of irrationality: a) areas of content (3 subscales), and b) styles of thinking (4 subscales). An Exploratory Factor Analysis (Parallel Analysis with the Unweighted Least Squares method and Promin rotation) was performed with the FACTOR 9.20 software (Lorenzo-Seva and Ferrando, 2013). The results reproduced the main four styles of irrational thinking in relation to the three specific contents of irrational beliefs. However, two factors showed a complex configuration, with important cross-loadings of different items in content and style. More analyses are needed to review the specific content and style of such items.
Abstract:
Global positioning systems (GPS) offer a cost-effective and efficient method to input and update transportation data. The spatial location of objects provided by GPS is easily integrated into geographic information systems (GIS). The storage, manipulation, and analysis of spatial data are also relatively simple in a GIS. However, many data storage and reporting methods at transportation agencies rely on linear referencing methods (LRMs); consequently, GPS data must be able to link with linear referencing. Unfortunately, the two systems are fundamentally incompatible in the way data are collected, integrated, and manipulated. In order for the spatial data collected using GPS to be integrated into a linear referencing system or shared among LRMs, a number of issues need to be addressed. This report documents and evaluates several of those issues and offers recommendations. In order to evaluate the issues associated with integrating GPS data with an LRM, a pilot study was created. To perform the pilot study, point features, a linear datum, and a spatial representation of an LRM were created for six test roadway segments located within the boundaries of the pilot study conducted by the Iowa Department of Transportation linear referencing system project team. Various issues in integrating point features with an LRM, or between LRMs, are discussed and recommendations provided. The accuracy of GPS is discussed, including issues such as point features mapping to the wrong segment. Another topic is the loss of spatial information that occurs when a three-dimensional or two-dimensional spatial point feature is converted to a one-dimensional representation on an LRM. Recommendations such as storing point features as spatial objects where necessary, or preserving information such as coordinates and elevation, are suggested. The lack of spatial accuracy characteristic of most cartography, on which LRMs are often based, is another topic discussed; the associated issues include linear and horizontal offset error. The final topic discussed is some of the issues in transferring point feature data between LRMs.
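As a concrete illustration of the two-dimensional-to-one-dimensional conversion the report discusses, the hedged sketch below uses the shapely library (not part of the report; the route geometry and GPS coordinates are invented) to snap a GPS point to a route centerline, read off its linear measure, and keep the offset and original coordinates so that the spatial information is not discarded:

    from shapely.geometry import LineString, Point

    # hypothetical route centerline in a projected coordinate system (meters)
    route = LineString([(0, 0), (1500, 0), (1500, 800)])
    gps_point = Point(612.3, 24.8)            # GPS observation near the route

    measure = route.project(gps_point)        # distance along the route (the LRM measure)
    snapped = route.interpolate(measure)      # the measure mapped back to a 2-D point
    offset = gps_point.distance(snapped)      # horizontal offset lost in the 1-D conversion

    # keep the original coordinates alongside the linear measure
    record = {
        "route_measure_m": round(measure, 1),
        "offset_m": round(offset, 1),
        "original_xy": (gps_point.x, gps_point.y),
    }
    print(record)

Choosing which route to project onto is where the wrong-segment problem the report raises can occur; storing the offset makes suspicious matches (large offsets) easy to flag, and storing the original coordinates and elevation preserves what the one-dimensional measure alone cannot.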
Abstract:
Companies are forced into various forms of cooperation in order to cope with intensifying competition. These cooperative relationships go by different names depending on the industry and on where in the supply chain they occur, but in principle they are all based on the same idea as Vendor Managed Inventory (VMI): information about inventory and demand is shared among the parties in the supply chain so that production, distribution and inventory management can be optimized. Vendor Managed Inventory is simple as an idea, but it demands a great deal to succeed. The basic assumption is that the supplier must be able to manage the customer's inventory better than the customer itself. This, however, is not possible without sufficient cooperation, the right kind of information, or suitable product characteristics. The purpose of this work is to present the critical success factors from the manufacturer's point of view when visibility into actual demand is poor and the products in question are, by their characteristics, poorly suited to the operating model. The suitability of the VMI operating model for the business of a mobile phone manufacturer, as well as its impact on customer cooperation, profitability and operational efficiency, is also examined.
Abstract:
Controversy exists about the best method to achieve bone fusion in four-corner arthrodesis. Thirty-five patients who underwent this procedure by our technique were included in the study. Surgical indications were stage II-III SLAC wrist, stage II SNAC wrist and severe traumatic midcarpal joint injury. Mean follow-up was 4.6 years. Mean active flexion and extension were 34 degrees and 30 degrees respectively; grip strength recovery was 79%. Radiological consolidation was achieved in all cases. The mean DASH score was 23 and the postoperative pain improvement by visual analogue scale was statistically significant. Return to work was possible at 4 months for the average patient. Complications were a capitate fracture in one patient and the need for hardware removal in four cases. Four-corner bone wrist arthrodesis by dorsal rectangular plating achieves an acceptable preservation of range of motion with good pain relief, an excellent consolidation rate and minimal complications.