980 results for Electronic spreadsheets -- Software
Abstract:
This manual describes how to use the Iowa Bridge Backwater software and documents the methods and equations behind its calculations: the main body covers operation of the software, and the appendices cover the technical details. The Bridge Backwater software performs five main tasks: Design Discharge Estimation, Stream Rating Curves, Floodway Encroachment, Bridge Backwater, and Bridge Scour. The intent of this program is to provide a simplified method for analyzing bridge backwater at rural structures located in areas with low flood-damage potential. The software is written in Microsoft Visual Basic 6.0 and runs under Windows 95 or newer versions (e.g. Windows 98, NT, 2000, XP and later).
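The actual methods and equations are documented in the manual's appendices; purely as a generic illustration of the "Stream Rating Curves" task (not the Iowa program's own routine), a stage-discharge rating for a rectangular channel can be sketched with Manning's equation, with the width, slope, and roughness values below chosen arbitrarily:

```python
def manning_discharge(depth_m, width_m, slope, n):
    """Discharge (m^3/s) for a rectangular channel via Manning's equation."""
    area = width_m * depth_m                    # flow area
    wetted_perimeter = width_m + 2.0 * depth_m  # bed plus two banks
    radius = area / wetted_perimeter            # hydraulic radius
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * slope ** 0.5

# A simple rating curve: discharge at increasing stages (hypothetical channel)
rating = [(d, manning_discharge(d, width_m=10.0, slope=0.001, n=0.035))
          for d in (0.5, 1.0, 1.5, 2.0)]
```

Each (stage, discharge) pair is one point on the rating curve; a real analysis would use surveyed cross-sections rather than an idealized rectangle.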
Abstract:
The co-cultivation of fungi has recently been described as a promising strategy to induce the production of novel metabolites through possible gene activation. A large screening of fungal co-cultures in solid media has identified an unusual long-distance growth inhibition between Trichophyton rubrum and Bionectria ochroleuca. To study metabolite induction in this particular fungal interaction, differential LC-MS-based metabolomics was performed on pure strain cultures and on their co-cultures. The comparison of the resulting fingerprints highlighted five de novo induced compounds, which were purified using software-oriented semipreparative HPLC-MS. One metabolite was successfully identified as 4″-hydroxysulfoxy-2,2″-dimethylthielavin P (a substituted trimer of 3,5-dimethylorsellinic acid). The nonsulfated form was found in the pure strain culture of B. ochroleuca, as were three other related compounds.
Abstract:
OBJECTIVE: Our aim was to evaluate a fluorescence-based enhanced-reality system to assess intestinal viability in a laparoscopic mesenteric ischemia model. MATERIALS AND METHODS: A small bowel loop was exposed, and 3 to 4 mesenteric vessels were clipped in 6 pigs. Indocyanine green (ICG) was administered intravenously 15 minutes later. The bowel was illuminated with an incoherent light source laparoscope (D-light-P, Karl Storz). The ICG fluorescence signal was analyzed with ad hoc imaging software (VR-RENDER), which provides a digital perfusion cartography that was superimposed on the intraoperative laparoscopic image [augmented reality (AR) synthesis]. Five regions of interest (ROIs) were marked under AR guidance (1, 2a-2b, 3a-3b, corresponding to the ischemic, marginal, and vascularized zones, respectively). One hour later, capillary blood samples were obtained by puncturing the bowel serosa at the identified ROIs, and lactate was measured using the EDGE analyzer. A surgical biopsy of each intestinal ROI was sent for mitochondrial respiratory rate assessment and for metabolite quantification. RESULTS: Mean capillary lactate levels were 3.98 (SD = 1.91) versus 1.05 (SD = 0.46) versus 0.74 (SD = 0.34) mmol/L at ROI 1 versus 2a-2b (P = 0.0001) versus 3a-3b (P = 0.0001), respectively. Mean maximal mitochondrial respiratory rate was 104.4 (±21.58) pmol O2/second/mg at ROI 1 versus 191.1 ± 14.48 (2b, P = 0.03) versus 180.4 ± 16.71 (3a, P = 0.02) versus 199.2 ± 25.21 (3b, P = 0.02). Alanine, choline, ethanolamine, glucose, lactate, myoinositol, phosphocholine, sylloinositol, and valine showed statistically significantly different concentrations between ischemic and nonischemic segments. CONCLUSIONS: Fluorescence-based AR may effectively detect the boundary between the ischemic and the vascularized zones in this experimental model.
Abstract:
The M-Coffee server is a web server that makes it possible to compute multiple sequence alignments (MSAs) by running several MSA methods and combining their output into a single model. This allows users to run all their methods of choice simultaneously, without having to arbitrarily choose one of them. The MSA is delivered along with a local estimation of its consistency with the individual MSAs it was derived from. The computation of the consensus multiple alignment is carried out using a special mode of the T-Coffee package [Notredame, Higgins and Heringa (T-Coffee: a novel method for fast and accurate multiple sequence alignment. J. Mol. Biol. 2000; 302: 205-217); Wallace, O'Sullivan, Higgins and Notredame (M-Coffee: combining multiple sequence alignment methods with T-Coffee. Nucleic Acids Res. 2006; 34: 1692-1699)]. Given a set of sequences (DNA or proteins) in FASTA format, M-Coffee delivers a multiple alignment in the most common formats. M-Coffee is a free, open-source package distributed under the GPL, and it is available either as a standalone package or as a web service from www.tcoffee.org.
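As a toy sketch of the consistency idea behind M-Coffee (not the T-Coffee package's actual algorithm or API; the function names and alignment encoding are illustrative assumptions), each residue pair aligned in a consensus pairwise alignment can be scored by the fraction of input methods that also align it:

```python
from collections import Counter

def aligned_pairs(seq_a, seq_b):
    """Index pairs (i, j) of residues aligned in a gapped pairwise alignment."""
    i = j = 0
    pairs = []
    for a, b in zip(seq_a, seq_b):
        if a != '-' and b != '-':
            pairs.append((i, j))     # both columns hold a residue: aligned pair
        if a != '-':
            i += 1
        if b != '-':
            j += 1
    return pairs

def consistency(consensus, methods):
    """For each aligned pair in the consensus, the fraction of methods agreeing."""
    support = Counter()
    for aln in methods:
        support.update(aligned_pairs(*aln))
    return [support[p] / len(methods) for p in aligned_pairs(*consensus)]
```

A low score flags a consensus column on which the underlying methods disagree, which mirrors the local consistency estimate the server reports.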
Abstract:
Information technology is now used in every area of business, from management systems (ERPs) to document management and information analysis with Business Intelligence systems, and can even become an entirely new platform providing companies with new sales channels, as in the case of the Internet. This TFC (final degree project) was motivated by our client's initial need to expand through a new sales channel in order to reach new markets and diversify its customer base. Given the current state of information technology and the Internet, the two form a perfect pairing for this TFC, which covers all the aspects needed to arrive at a final product: a real-estate web portal adapted to the requirements of today's Internet users.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced features ("geo-features"). They are well suited to serve as predictive engines in decision support systems for environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical space, and they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest to the geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to the software implementation. The main algorithms and models considered are the multi-layer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers the machine learning tasks of classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, exploratory spatial data analysis (ESDA) is treated both with the traditional geostatistical approach, experimental variography, and with machine learning. Experimental variography, which studies relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps detect spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic: automatic mapping of geospatial data. The general regression neural network is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools, and worked examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to make the interface user-friendly and easy to use.
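The GRNN proposed for automatic mapping is, at its core, a Nadaraya-Watson kernel regression. As a minimal sketch of that idea (not the Machine Learning Office implementation itself), a spatial prediction is a Gaussian-weighted average of the training values, with the kernel width `sigma` as the single free parameter:

```python
import math

def grnn_predict(train_xy, train_z, query, sigma=1.0):
    """GRNN / Nadaraya-Watson prediction at a 2D query point:
    a Gaussian-kernel weighted average of the training values."""
    weights = []
    for x, y in train_xy:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2   # squared distance
        weights.append(math.exp(-d2 / (2.0 * sigma ** 2)))
    total = sum(weights)
    return sum(w * z for w, z in zip(weights, train_z)) / total
```

Because the prediction is a normalized weighted average, the GRNN needs no iterative training; only `sigma` must be tuned, typically by cross-validation, which is what makes it attractive for automatic mapping.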
Abstract:
OBJECTIVE: To assess the accuracy of a semiautomated 3D volume reconstruction method for organ volume measurement by postmortem MRI. METHODS: This prospective study was approved by the institutional review board, and the infants' parents gave their consent. Postmortem MRI was performed in 16 infants (1 month to 1 year of age) at 1.5 T within 48 h of their sudden death. Virtual organ volumes were estimated using the Myrian software. Real volumes were recorded at autopsy by water displacement. The agreement between virtual and real volumes was quantified using the Bland-Altman method. RESULTS: There was good agreement between virtual and real volumes for brain (mean difference: -0.03% (-13.6 to +7.1)), liver (+8.3% (-9.6 to +26.2)) and lungs (+5.5% (-26.6 to +37.6)). For kidneys, spleen and thymus, the MRI/autopsy volume ratio was close to 1 (kidney: 0.87±0.1; spleen: 0.99±0.17; thymus: 0.94±0.25), but with poorer agreement. For heart, the MRI/real volume ratio was 1.29±0.76, possibly due to the presence of residual blood within the heart. The virtual volumes of the adrenal glands were significantly underestimated (p=0.04), possibly due to their very small size during the first year of life. The percentages of interobserver and intraobserver variation were at or below 10% for all organs except the thymus (15.9% and 12.6%, respectively) and the adrenal glands (69% and 25.9%). CONCLUSIONS: Virtual volumetry may provide significant information concerning the macroscopic features of the main organs and help pathologists in sampling organs that are more likely to yield histological findings.
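The Bland-Altman agreement quantification used in such studies can be sketched as follows (a generic illustration, not the authors' analysis code): the bias is the mean of the virtual-minus-real differences, and the 95% limits of agreement are bias ± 1.96 standard deviations of those differences:

```python
from statistics import mean, stdev

def bland_altman(virtual, real):
    """Bland-Altman agreement: (bias, (lower, upper) 95% limits of agreement)."""
    diffs = [v - r for v, r in zip(virtual, real)]  # paired differences
    bias = mean(diffs)
    sd = stdev(diffs)                               # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Narrow limits around a bias near zero correspond to the "good agreement" reported for brain, liver and lungs; wide limits flag organs such as the heart.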
Abstract:
We will investigate how collaboration networks and free software allow a school to adapt to its environment, and how they can help the school strengthen vocational training and guarantee the durability of its actions, with the goal that both the knowledge and the collaboration network itself endure for the sake of educational improvement.
Abstract:
This work shows, using free technologies and building on open operating systems, how a company dedicated to implementing and developing free-software technologies can sustain a high level of work. It presents the setup of a development laboratory that lets us understand the operation and deployment of both GNU/Linux and the software based on it within the company's infrastructure.
Abstract:
Design, at both the hardware and software level, of a moving-head light with RGBW LED technology controlled via the DMX512 protocol. This project is limited to the design and implementation of all the software and hardware elements needed to build a moving-head prototype that can be controlled through the DMX protocol. It is therefore focused entirely on the electronics and programming side, without covering the construction materials and elements or the design and aesthetics of the product.
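As a hypothetical software-side sketch (the project's actual firmware and channel assignments are not described here), DMX512 output is simply a universe of up to 512 one-byte channel levels; assuming each RGBW fixture occupies four consecutive channels starting at its 1-based address:

```python
def build_dmx_universe(fixtures):
    """Pack RGBW fixture levels into a 512-channel DMX universe.
    `fixtures` maps a 1-based start address to an (r, g, b, w) tuple."""
    universe = bytearray(512)  # one byte per channel, all channels off
    for start, (r, g, b, w) in fixtures.items():
        universe[start - 1:start + 3] = bytes((r, g, b, w))
    return universe

# Two hypothetical fixtures patched at addresses 1 and 5
frame = build_dmx_universe({1: (255, 0, 128, 0), 5: (0, 64, 0, 255)})
```

A real controller would transmit such a frame continuously over the RS-485 serial link defined by the DMX512 standard.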
Abstract:
This report describes the first phase in a project to develop an electronic reference library (ERL) to help Iowa transportation officials efficiently access information in critical and heavily used documents. These documents include Standard Specifications for Bridge and Highway Construction (hereinafter called Standard Specifications), design manuals, standard drawings, the Construction Manual, and Material Instruction Memoranda (hereinafter called Material IMs). Additional items that could be included to enhance the ERL include phone books, letting dates, Internet links, computer programs distributed by the Iowa Department of Transportation (DOT), and local specifications, such as the Urban Standard Specifications of Public Improvements. All cross-references should be hyperlinked, and a search engine should be provided. Revisions noted in the General Supplemental Specifications (hereinafter called the Supplemental Specifications) should be incorporated into the text of the Standard Specifications. The Standard Specifications should refer to related sections of other documents, and there should be reciprocal hyperlinks in those other documents. These features would speed research on critical issues and save staff time. A master plan and a pilot version were both developed in this first phase of the ERL.
Abstract:
There is increasing evidence that the microcirculation plays an important role in the pathogenesis of cardiovascular diseases. Changes in retinal vascular caliber reflect early microvascular disease and predict incident cardiovascular events. We performed a genome-wide association study to identify genetic variants associated with retinal vascular caliber. We analyzed data from four population-based discovery cohorts with 15,358 unrelated Caucasian individuals, who are members of the Cohort for Heart and Aging Research in Genomic Epidemiology (CHARGE) consortium, and replicated findings in four independent Caucasian cohorts (n = 6,652). All participants had retinal photography, with retinal arteriolar and venular caliber measured using computer software. In the discovery cohorts, 179 single nucleotide polymorphisms (SNPs) spread across five loci were significantly associated (p<5.0×10(-8)) with retinal venular caliber, but none showed association with arteriolar caliber. Collectively, these five loci explain 1.0%-3.2% of the variation in retinal venular caliber. Four of these five loci were confirmed in independent replication samples. In the combined analyses, the top SNPs at each locus were: rs2287921 (19q13; p = 1.61×10(-25), within the RASIP1 locus), rs225717 (6q24; p = 1.25×10(-16), adjacent to the VTA1 and NMBR loci), rs10774625 (12q24; p = 2.15×10(-13), in the region of the ATXN2, SH2B3 and PTPN11 loci), and rs17421627 (5q14; p = 7.32×10(-16), adjacent to the MEF2C locus). In two independent samples, locus 12q24 was also associated with coronary heart disease and hypertension. Our population-based genome-wide association study demonstrates four novel loci associated with retinal venular caliber, an endophenotype of the microcirculation associated with clinical cardiovascular disease. These data provide further insights into the contribution and biological mechanisms of microcirculatory changes that underlie cardiovascular disease.
Abstract:
The objective of this work was to build mock-ups of complete yerba mate plants in several stages of development, using the InterpolMate software, and to compute photosynthesis on the interpolated structure. The mock-ups of yerba mate were first built in the VPlants software for three growth stages. Male and female plants grown in two contrasting environments (monoculture and forest understory) were considered. To model the dynamic 3D architecture of yerba mate plants during the biennial growth interval between two subsequent prunings, data sets of branch development collected on 38 dates were used. The estimated values obtained from the mock-ups, including leaf photosynthesis and sexual dimorphism, are very close to those observed in the field. However, this similarity was limited to reconstructions that included growth units from the original data sets. The modeling of growth dynamics enables the estimation of photosynthesis for the entire yerba mate plant, which is not easily measurable in the field. The InterpolMate software is efficient for building yerba mate mock-ups.
Abstract:
Rhea (http://www.ebi.ac.uk/rhea) is a comprehensive resource of expert-curated biochemical reactions. Rhea provides a non-redundant set of chemical transformations for use in a broad spectrum of applications, including metabolic network reconstruction and pathway inference. Rhea includes enzyme-catalyzed reactions (covering the IUBMB Enzyme Nomenclature list), transport reactions and spontaneously occurring reactions. Rhea reactions are described using chemical species from the Chemical Entities of Biological Interest ontology (ChEBI) and are stoichiometrically balanced for mass and charge. They are extensively manually curated, with links to source literature and other public resources on metabolism, including enzyme and pathway databases. This cross-referencing facilitates the mapping and reconciliation of common reactions and compounds between distinct resources, which is a common first step in the reconstruction of genome-scale metabolic networks and models.
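The stoichiometric balancing that Rhea enforces can be illustrated with a small sketch (an illustrative check, not Rhea's curation tooling): a reaction balances when the per-element atom counts and the total charge agree on both sides:

```python
from collections import Counter

def is_balanced(substrates, products):
    """Check a reaction for per-element mass balance and charge balance.
    Each species is (stoichiometric coefficient, {element: count}, charge)."""
    def totals(side):
        atoms, charge = Counter(), 0
        for coeff, formula, q in side:
            for element, n in formula.items():
                atoms[element] += coeff * n
            charge += coeff * q
        return atoms, charge
    return totals(substrates) == totals(products)

# ATP(4-) + H2O -> ADP(3-) + HPO4(2-) + H(+): simplified pH 7 species
atp = {'C': 10, 'H': 12, 'N': 5, 'O': 13, 'P': 3}
adp = {'C': 10, 'H': 12, 'N': 5, 'O': 10, 'P': 2}
ok = is_balanced([(1, atp, -4), (1, {'H': 2, 'O': 1}, 0)],
                 [(1, adp, -3), (1, {'H': 1, 'O': 4, 'P': 1}, -2), (1, {'H': 1}, 1)])
```

With the simplified species shown, ATP hydrolysis balances for both mass and charge (`ok` is True); an unbalanced equation would fail either the atom-count or the charge comparison.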