871 results for Extended Support Vector Machine
Abstract:
Data on the importance of phylogroups and virulence factors (VF) in bloodstream infections (BSI) caused by extended-spectrum β-lactamase-producing Escherichia coli (ESBLEC) are scarce. A prospective multicenter Spanish cohort including 191 cases of BSI due to ESBLEC was studied. Phylogroups and 25 VF genes were investigated by PCR. ESBLEC were classified into clusters according to their virulence profiles. The association of phylogroups, VF, and clusters with epidemiological features was studied using multivariate analysis. Overall, 57.6%, 26.7%, and 15.7% of isolates belonged to the A/B1, D, and B2 phylogroups, respectively. By multivariate analysis (adjusted OR [95% CI]), virulence cluster C2 was independently associated with a urinary tract source (5.05 [0.96-25.48]); cluster C4 with sources other than the urinary or biliary tract (2.89 [1.05-7.93]); and cluster C5 with BSI in non-predisposed patients (2.80 [0.99-7.93]). Isolates producing CTX-M-9 group ESBLs and from phylogroup D predominated among cluster C2 and C5 isolates, while CTX-M-1 group ESBLs and phylogroup B2 predominated among C4 isolates. These results suggest that host factors and previous antimicrobial use were more important than phylogroup or specific VF in the occurrence of BSI due to ESBLEC. However, some associations between virulence clusters and specific epidemiological features were found.
Abstract:
Escherichia coli is commonly involved in infections with a heavy bacterial burden. Piperacillin-tazobactam and carbapenems are among the recommended empirical treatments for health care-associated complicated intra-abdominal infections. In contrast to amoxicillin-clavulanate, both have reduced in vitro activity in the presence of high concentrations of extended-spectrum β-lactamase (ESBL)-producing and non-ESBL-producing E. coli. Our goal was to compare the efficacy of these antimicrobials against different concentrations of two clinical E. coli strains, one an ESBL producer and the other a non-ESBL producer, in a murine sepsis model. An experimental sepsis model (~5.5 log10 CFU/g [low inoculum concentration (LI)] or ~7.5 log10 CFU/g [high inoculum concentration (HI)]) using E. coli strains ATCC 25922 (non-ESBL producer) and Ec1062 (CTX-M-14 producer), both susceptible to the three antimicrobials, was used. Amoxicillin-clavulanate (50/12.5 mg/kg given intramuscularly [i.m.]), piperacillin-tazobactam (25/3.125 mg/kg given intraperitoneally [i.p.]), and imipenem (30 mg/kg i.m.) were used. Piperacillin-tazobactam and imipenem were less effective in reducing spleen ATCC 25922 concentrations in the HI than in the LI groups (-2.53 and -2.14 log10 CFU/g, respectively [P < 0.05]), while amoxicillin-clavulanate maintained its efficacy (-1.01 log10 CFU/g [no statistically significant difference]). Against the Ec1062 strain, all three antimicrobials showed lower efficacy in the HI than in the LI groups: -0.73, -1.89, and -1.62 log10 CFU/g (P < 0.05) for piperacillin-tazobactam, imipenem, and amoxicillin-clavulanate, respectively, although imipenem and amoxicillin-clavulanate were more efficacious than piperacillin-tazobactam. An adapted imipenem regimen (based on the time for which the serum drug concentration remained above the MIC obtained with the HI of the ATCC 25922 strain) improved its efficacy to -1.67 log10 CFU/g (P < 0.05). These results suggest that amoxicillin-clavulanate could be an alternative to imipenem for treating infections caused by ESBL- and non-ESBL-producing E. coli in patients with therapeutic failure on piperacillin-tazobactam.
Abstract:
INTRODUCTION Finding therapeutic alternatives to carbapenems in infections caused by extended-spectrum β-lactamase-producing Escherichia coli (ESBL-EC) is imperative. Although fosfomycin was discovered more than 40 years ago, it was not investigated in accordance with current standards and so is not used in clinical practice except in desperate situations. It is one of the so-called neglected antibiotics of high potential interest for the future. METHODS AND ANALYSIS The main objective of this project is to demonstrate the clinical non-inferiority of intravenous fosfomycin to meropenem for treating bacteraemic urinary tract infections (UTI) caused by ESBL-EC. This is a 'real practice' multicentre, open-label, phase III randomised controlled trial designed to compare the clinical and microbiological efficacy and the safety of intravenous fosfomycin (4 g/6 h) and meropenem (1 g/8 h) as targeted therapy for this infection; a change to oral therapy is permitted after 5 days in both arms, in accordance with predetermined options. The study design follows the latest recommendations for designing trials investigating new options for multidrug-resistant bacteria. Secondary objectives include the study of fosfomycin concentrations in plasma and the impact of both drugs on intestinal colonisation by multidrug-resistant Gram-negative bacilli. ETHICS AND DISSEMINATION Ethical approval was obtained from the Andalusian Coordinating Institutional Review Board (IRB) for Biomedical Research (Referral Ethics Committee), which obtained approval from the local ethics committees at all participating sites in Spain (22 sites). Data will be presented at international conferences and published in peer-reviewed journals. DISCUSSION This project is proposed as an initial step in the investigation of a low-cost orphan antimicrobial with high potential as a therapeutic alternative for common infections such as UTI in selected patients. The results may have a major impact on the use of antibiotics and on the development of new projects with this drug, whether as monotherapy or combination therapy. TRIAL REGISTRATION NUMBER NCT02142751. EudraCT no: 2013-002922-21. Protocol V.1.1 dated 14 March 2014.
Abstract:
Extended-spectrum β-lactamases (ESBL) of the CTX-M, SHV, and TEM families were recognized in 76 (67%), 31 (27%), and 6 (5%) isolates, respectively, among 162 ESBL-producing Klebsiella pneumoniae (ESBL-Kp) strains obtained in a multicenter study in Spain. Predisposing factors for ESBL-Kp acquisition included invasive procedures, mechanical ventilation, and previous antimicrobial use.
Abstract:
A total of 1,021 extended-spectrum-β-lactamase-producing Escherichia coli (ESBLEC) isolates obtained in 2006 during a Spanish national survey conducted in 44 hospitals were analyzed for the presence of the O25b:H4-B2-ST131 (sequence type 131) clonal group. Overall, 195 (19%) O25b-ST131 isolates were detected, with prevalence rates ranging from 0% to 52% per hospital. Molecular characterization of 130 representative O25b-ST131 isolates showed that 96 (74%) were positive for CTX-M-15, 15 (12%) for CTX-M-14, 9 (7%) for SHV-12, 6 (5%) for CTX-M-9, 5 (4%) for CTX-M-32, and 1 (0.7%) each for CTX-M-3 and the new ESBL enzyme CTX-M-103. The 130 O25b-ST131 isolates exhibited relatively high virulence scores (mean, 14.4 virulence genes). Although the virulence profiles of the O25b-ST131 isolates were fairly homogeneous, they could be classified into four main virotypes based on the presence or absence of four distinctive virulence genes: virotypes A (22%) (afa FM955459 positive, iroN negative, ibeA negative, sat positive or negative), B (31%) (afa FM955459 negative, iroN positive, ibeA negative, sat positive or negative), C (32%) (afa FM955459 negative, iroN negative, ibeA negative, sat positive), and D (13%) (afa FM955459 negative, iroN positive or negative, ibeA positive, sat positive or negative). The four virotypes were also identified in other countries, with virotype C being overrepresented internationally. Correspondingly, an analysis of XbaI macrorestriction profiles revealed four major clusters, which were largely virotype specific. Certain epidemiological and clinical features corresponded with the virotype. Statistically significant virotype-specific associations included, for virotype B, older age and a lower frequency of infection (versus colonization), for virotype C, a higher frequency of infection, and for virotype D, younger age and community-acquired infections. In isolates of the O25b:H4-B2-ST131 clonal group, these findings uniquely define four main virotypes, which are internationally distributed, correspond with pulsed-field gel electrophoresis (PFGE) profiles, and exhibit distinctive clinical-epidemiological associations.
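As an illustration, the presence/absence scheme above can be written as a small decision rule. The sketch below is hypothetical (it is not the authors' typing protocol); it simply encodes the four virotype definitions given in the abstract, with gene names following the text.

    # Hypothetical sketch encoding the four O25b-ST131 virotype definitions
    # from the abstract; not the authors' typing protocol.
    def classify_virotype(afa: bool, iroN: bool, ibeA: bool, sat: bool) -> str:
        if afa and not iroN and not ibeA:
            return "A"  # afa positive, iroN negative, ibeA negative, sat +/-
        if iroN and not afa and not ibeA:
            return "B"  # iroN positive, afa negative, ibeA negative, sat +/-
        if sat and not afa and not iroN and not ibeA:
            return "C"  # sat positive only
        if ibeA and not afa:
            return "D"  # ibeA positive, afa negative, iroN +/-, sat +/-
        return "unassigned"  # profiles outside the four main virotypes

    print(classify_virotype(afa=False, iroN=False, ibeA=False, sat=True))  # -> C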
Abstract:
We investigated the impact of the piperacillin-tazobactam MIC in the outcome of 39 bloodstream infections due to extended-spectrum-β-lactamase-producing Escherichia coli. All 11 patients with urinary tract infections survived, irrespective of the MIC. For other sources, 30-day mortality was lower for isolates with a MIC of ≤ 2 mg/liter than for isolates with a higher MIC (0% versus 41.1%; P = 0.02).
Abstract:
Objective: To assess adolescents' understanding of the social support received in situations of domestic violence. Method: A qualitative study with data collected through focus groups with 17 adolescent victims of domestic violence in institutional care in Campinas-SP, and through semi-structured interviews with seven of these adolescents. The information was analyzed by content analysis, thematic modality. Results: The thematic categories showed that social support for the subjects came from the extended family, the community, the Guardianship Council, the interpersonal relationships established at the user embracement institution, and from religiosity/spirituality. Conclusion: The sources of support mentioned deserve to be strengthened and expanded. Given the current complexity of morbidity and mortality profiles, especially among children and adolescents, the (re)signification and (re)construction of health actions is imperative.
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex in the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has larger index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999). "Restless bandits, partial conservation laws, and indexability." Forthcoming in Advances in Applied Probability Vol. 33, No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL) (which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996)): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
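As a point of reference for the extension described above, classical Smith's rule for the linear-cost case fits in a few lines; the job data below are invented for illustration.

    # Classical Smith's rule (linear holding costs): sequence jobs in
    # decreasing order of holding-cost rate per unit of expected processing
    # time. The paper's extended-class indices generalize this priority
    # to separable nondecreasing convex cost rates.
    def smith_order(jobs):
        # jobs: list of (name, holding_cost_rate, expected_processing_time)
        return sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)

    # Example: B has the highest cost-to-time ratio (9/3 = 3), so it runs first.
    print(smith_order([("A", 4.0, 2.0), ("B", 9.0, 3.0), ("C", 1.0, 1.0)]))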
Abstract:
BACKGROUND: Aminoglycosides are mandatory in the treatment of severe infections in burns. However, their pharmacokinetics are difficult to predict in critically ill patients. Our objective was to describe the pharmacokinetic parameters of high doses of tobramycin administered at extended intervals in severely burned patients. METHODS: We prospectively enrolled 23 burned patients receiving tobramycin in combination therapy for Pseudomonas species infections in a burn ICU over 2 years in a therapeutic drug monitoring program. Trough and post-peak tobramycin levels were measured to adjust drug dosage. Pharmacokinetic parameters were derived from two-point first-order kinetics. RESULTS: The tobramycin peak concentration was 7.4 (3.1-19.6) µg/ml and the Cmax/MIC ratio 14.8 (2.8-39.2). Half-life was 6.9 (range 1.8-24.6) h, with a distribution volume of 0.4 (0.2-1.0) l/kg. Clearance was 35 (14-121) ml/min and was weakly but significantly correlated with creatinine clearance. CONCLUSION: Tobramycin had a normal clearance but an increased volume of distribution and a prolonged half-life in burned patients. However, the pharmacokinetic parameters of tobramycin are highly variable in burned patients. These data support extended-interval administration and strongly suggest that aminoglycosides should only be used within a structured pharmacokinetic monitoring program.
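As a sketch of the two-point first-order kinetics mentioned here, the standard calculations can be written as follows. All inputs are invented for illustration (they are not study data), and the back-extrapolation to time zero assumes simple one-compartment bolus kinetics.

    import math

    # Two-point first-order kinetics: derive the elimination rate constant,
    # half-life, distribution volume and clearance from a post-peak level c1
    # at time t1 and a trough level c2 at time t2 (hours after the dose).
    # Illustrative sketch only.
    def pk_two_point(c1, t1, c2, t2, dose_mg, weight_kg):
        k = math.log(c1 / c2) / (t2 - t1)   # elimination rate constant (1/h)
        t_half = math.log(2) / k            # half-life (h)
        c0 = c1 * math.exp(k * t1)          # level extrapolated to t = 0 (mg/l)
        vd_l = dose_mg / c0                 # volume of distribution (l)
        cl_l_h = k * vd_l                   # clearance (l/h)
        return t_half, vd_l / weight_kg, cl_l_h * 1000 / 60  # h, l/kg, ml/min

    print(pk_two_point(c1=7.4, t1=1.0, c2=1.0, t2=12.0, dose_mg=500, weight_kg=70))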
Abstract:
Background: Hox and ParaHox gene clusters are thought to have resulted from the duplication of a ProtoHox gene cluster early in metazoan evolution. However, the origin and evolution of the other genes belonging to the extended Hox group of homeobox-containing genes, that is, Mox and Evx, remain obscure. We constructed phylogenetic trees with mouse, amphioxus and Drosophila extended Hox and other related Antennapedia-type homeobox gene sequences and analyzed the linkage data available for such genes. Results: We claim that neither Mox nor Evx is a Hox or ParaHox gene. We propose a scenario that reconciles phylogeny with linkage data, in which an Evx/Mox ancestor gene linked to a ProtoHox cluster was involved in a segmental tandem duplication event that generated an array of all Hox-like genes, referred to as the 'coupled' cluster. A chromosomal breakage within this cluster explains the current composition of the extended Hox cluster (with Evx, Hox and Mox genes) and the ParaHox cluster. Conclusions: Most studies dealing with the origin and evolution of Hox and ParaHox clusters have not included the Hox-related genes Mox and Evx. Our phylogenetic analyses and the available linkage data in mammalian genomes support an evolutionary scenario in which an ancestor of Evx and Mox was linked to the ProtoHox cluster, and in which a tandem duplication of a large genomic region early in metazoan evolution generated the Hox and ParaHox clusters, plus the cluster neighbors Evx and Mox. The large 'coupled' Hox-like cluster EvxHox/MoxParaHox was subsequently broken, thus grouping the Mox and Evx genes with the Hox clusters and isolating the ParaHox cluster.
Abstract:
In recent years, ultra-thin whitetopping (UTW) has evolved as a viable rehabilitation technique for deteriorated asphalt cement concrete (ACC) pavement. Numerous UTW projects have been constructed and tested, enabling researchers to identify key elements contributing to their successful performance. These elements include foundation support, interface bonding condition, portland cement concrete (PCC) overlay thickness, synthetic fiber reinforcement usage, joint spacing, and joint sealing. The interface bonding condition is the most important of these elements. It enables the pavement to act as a composite structure, thus reducing tensile stresses and allowing an ultra-thin PCC overlay to perform as intended. The Iowa Department of Transportation (Iowa DOT) UTW project (HR-559) initiated UTW in Iowa. The project is located on Iowa Highway 21 between Iowa Highway 212 and U.S. Highway 6 in Iowa County, near Belle Plaine, Iowa. The objective of this research was to investigate the interface bonding condition between an ultra-thin PCC overlay and an ACC base over time, considering the previously mentioned variables. This research lasted for five years, at which time it was extended an additional five years. The new phase of the project was initiated by removing cracked panels existing in the 2-inch thick PCC sections and replacing them with three inches of PCC. The project extension (TR 432) will provide an increased understanding of slab bonding conditions over a longer period, as well as knowledge regarding the behavior of the newly rehabilitated areas. In order to accomplish the goals of the project extension, Falling Weight Deflectometer (FWD) testing will continue to be conducted. Laboratory testing, field strain gage implementation, and coring will no longer be conducted. This report documents the planning and construction of the rehabilitation of HR 559 and the beginning of TR 432 during August of 1999.
Abstract:
The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications because they cost significantly less than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200'000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike with CISC processors, the RISC processor architecture business is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which give customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply systems based on vertically integrated stacks of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability on a very narrow customer base thanks to strong technology-enabled customer lock-in and customers' high risk exposure, as production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will build for the industrial automation market to face, in due course, an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development; second through research on process re-engineering in the case of global software support for complex systems; and third by investigating the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation. We conclude with our assessment of the possible routes industrial automation could take, in view of the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each focused on maintaining its own proprietary solutions. The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, created the new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control point and business model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies
The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition to modeling, prediction and automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis.
In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
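A GRNN of the kind proposed here for automatic mapping is essentially Nadaraya-Watson kernel regression, and a minimal version fits in a few lines. The sketch below uses synthetic data and a hand-picked bandwidth sigma as simplifying assumptions; in practice sigma would be tuned, for example by cross-validation.

    import numpy as np

    # Minimal GRNN (Nadaraya-Watson kernel regression) for spatial prediction.
    # Sketch under simplifying assumptions: isotropic Gaussian kernel and a
    # single bandwidth sigma chosen by hand rather than by cross-validation.
    def grnn_predict(train_xy, train_z, query_xy, sigma=1.0):
        d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * sigma**2))       # kernel weights, shape (m, n)
        return (w @ train_z) / w.sum(axis=1)     # locally weighted mean

    rng = np.random.default_rng(0)
    xy = rng.uniform(0.0, 10.0, size=(50, 2))    # synthetic sampling locations
    z = np.sin(xy[:, 0]) + 0.1 * rng.standard_normal(50)
    print(grnn_predict(xy, z, np.array([[2.0, 5.0], [7.5, 1.0]]), sigma=1.5))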
Abstract:
This report presents a comparative study of joining methods used in sheet metal production. It covers the selection of joining methods, comparing the advantages and disadvantages of each method and choosing the best method for a given joint. Based on various joining processes from the literature, a table was generated containing a set of criteria that helps evaluate sheet metal joining processes and select the most suitable process for a particular product. Three products were selected, and a comprehensive study of their joining methods was carried out using various parameters. The table is thus the main analytical instrument of this study and can be extended as further results accumulate. It supports a better and easier understanding and comparison of the various methods, providing the foundation of this study and analysis. The suitability of a joining method for different types of sheet metal products can be tested with the help of this table. The sections of the table reflect manufacturing requirements, with particular attention to how the weighting of the parameters, expressed in percentages, varies from case to case. The analysis can be extended or altered by changing the parameters according to the constraints at hand. The use of the table is demonstrated on cases from sheet metal production.
Abstract:
Parasite population structure is often thought to be largely shaped by that of its host. In the case of a parasite with a complex life cycle, two host species, each with their own patterns of demography and migration, spread the parasite. However, the population structure of the parasite is predicted to resemble only that of the most vagile host species. In this study, we tested this prediction in the context of a vector-transmitted parasite. We sampled the haemosporidian parasite Polychromophilus melanipherus across its European range, together with its bat fly vector Nycteribia schmidlii and its host, the bent-winged bat Miniopterus schreibersii. Based on microsatellite analyses, the wingless vector, and not the bat host, was identified as the least structured population and should therefore be considered the most vagile host. Genetic distance matrices were compared for all three species based on a mitochondrial DNA fragment. Both host and vector populations followed an isolation-by-distance pattern across the Mediterranean, but not the parasite. Mantel tests found no correlation between the parasite and either the host or vector populations. We therefore found no support for our hypothesis; the parasite population structure matched neither vector nor host. Instead, we propose a model where the parasite's gene flow is represented by the added effects of host and vector dispersal patterns.
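For readers unfamiliar with the method, a Mantel test of the kind used above correlates the upper triangles of two distance matrices and assesses significance by permuting taxon labels. Below is a bare-bones permutation version, a pure-numpy sketch with invented inputs rather than the authors' analysis pipeline.

    import numpy as np

    # Bare-bones permutation Mantel test between two symmetric distance
    # matrices (e.g. parasite vs. host genetic distances). Illustrative only.
    def mantel(a, b, n_perm=999, seed=0):
        iu = np.triu_indices_from(a, k=1)        # upper-triangle entries
        r_obs = np.corrcoef(a[iu], b[iu])[0, 1]
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_perm):
            p = rng.permutation(a.shape[0])      # relabel one matrix's taxa
            hits += np.corrcoef(a[p][:, p][iu], b[iu])[0, 1] >= r_obs
        return r_obs, (hits + 1) / (n_perm + 1)  # one-sided p-value

    # Usage with a random symmetric matrix: identical inputs give r = 1, small p.
    d = np.random.default_rng(1).random((6, 6))
    d = (d + d.T) / 2
    np.fill_diagonal(d, 0)
    print(mantel(d, d))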