980 results for Instrumentation and Applied Physics (Formerly ISU)


Relevance:

100.00%

Publisher:

Abstract:

Aim To evaluate the effects of using distinct alternative sets of climatic predictor variables on the performance, spatial predictions and future projections of species distribution models (SDMs) for rare plants in an arid environment. Location Atacama and Peruvian Deserts, South America (18°30'S-31°30'S, 0-3,000 m). Methods We modelled the present and future potential distributions of 13 species of Heliotropium sect. Cochranea, a plant group with a centre of diversity in the Atacama Desert. We developed and applied a sequential procedure, starting from monthly climate variables, to derive six alternative sets of climatic predictor variables. We used them to fit models with eight modelling techniques within an ensemble forecasting framework, and derived climate change projections for each of them. We evaluated the effects of using these alternative sets of predictor variables on the performance, spatial predictions and projections of the SDMs using generalised linear mixed models (GLMMs). Results The use of distinct sets of climatic predictor variables did not have a significant effect on overall metrics of model performance, but had significant effects on present and future spatial predictions. Main conclusions Using different sets of climatic predictors can yield the same model fits but different spatial predictions of current and future species distributions. This represents a new form of uncertainty in model-based estimates of extinction risk that may need to be better acknowledged and quantified in future SDM studies.
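
A minimal sketch of the core finding may help. Everything below is hypothetical (synthetic data; a single logistic regression stands in for the paper's eight-technique ensemble): two information-equivalent predictor sets give nearly identical fits, yet projections under a shifted climate diverge.

```python
# Hypothetical illustration: same occurrences, two alternative predictor sets.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
set_a = rng.normal(size=(n, 4))          # e.g. annual-mean derived variables
set_b = set_a @ rng.normal(size=(4, 4))  # an alternative, equivalent derivation
presence = (set_a[:, 0] + rng.normal(0, 1, n) > 0).astype(int)

models = {}
for name, X in [("set A", set_a), ("set B", set_b)]:
    models[name] = LogisticRegression().fit(X, presence)
    auc = roc_auc_score(presence, models[name].predict_proba(X)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")    # nearly identical model performance

# A crude "future climate" shift applied in each predictor space
# produces diverging spatial projections despite the equal fits:
p_a = models["set A"].predict_proba(set_a + 1.0)[:, 1]
p_b = models["set B"].predict_proba(set_b + 1.0)[:, 1]
print("mean absolute difference in projections:", np.abs(p_a - p_b).mean())
```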

Relevance:

100.00%

Publisher:

Abstract:

Defining the limits of an urban agglomeration is essential for both fundamental and applied studies in quantitative and theoretical geography. A simple and consistent way of defining such urban clusters is important for performing statistical analyses and comparisons. Traditionally, agglomerations are defined using a rather qualitative approach based on various statistical measures; the definition generally varies from one country to another, and the data taken into account differ. In this paper, we explore the use of the City Clustering Algorithm (CCA) for defining agglomerations in Switzerland. This algorithm provides a systematic and easy way to define an urban area based only on population data, and it allows the spatial resolution used to define the urban clusters to be specified. The results obtained at different resolutions are compared and analysed, and the effect of filtering the data is investigated. Different scales and parameters highlight different phenomena. A study of Zipf's law using the visual rank-size rule shows that it holds only for some specific urban clusters, inside a narrow range of the CCA's spatial resolution. The scale at which a single main cluster emerges can also be identified in the Zipf's law analysis. Studying the urban clusters at different scales with the lacunarity measure, a complementary measure to the fractal dimension, highlights the change of scale at a given range.
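
The CCA itself is simple to sketch: grow a cluster from a populated cell, absorbing every populated cell within a given distance of a cell already in the cluster; that distance plays the role of the spatial resolution. A minimal illustration on a hypothetical population grid (not the authors' implementation):

```python
# Sketch of the CCA idea: connected growth of urban clusters on a grid.
from collections import deque
import numpy as np

def cca_clusters(pop, cutoff=0.0, l=1):
    """Label clusters of cells with population > cutoff; neighbourhood
    radius l (in cells) acts as the spatial resolution."""
    occupied = pop > cutoff
    labels = np.full(pop.shape, -1, dtype=int)
    nxt = 0
    for start in zip(*np.nonzero(occupied)):
        if labels[start] != -1:
            continue
        labels[start] = nxt
        queue = deque([start])
        while queue:                      # breadth-first cluster growth
            i, j = queue.popleft()
            for di in range(-l, l + 1):
                for dj in range(-l, l + 1):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < pop.shape[0] and 0 <= nj < pop.shape[1]
                            and occupied[ni, nj] and labels[ni, nj] == -1):
                        labels[ni, nj] = nxt
                        queue.append((ni, nj))
        nxt += 1
    return labels                         # -1 marks unpopulated cells

pop = np.random.default_rng(1).poisson(0.7, size=(50, 50)).astype(float)
labels = cca_clusters(pop, cutoff=0, l=2)
sizes = sorted(np.bincount(labels[labels >= 0]), reverse=True)
print("largest clusters:", sizes[:5])    # rank-size list for a Zipf plot
```

Varying `l` and `cutoff` reproduces the paper's theme: different resolutions and filters yield different cluster hierarchies, and the rank-size (Zipf) behaviour can be examined per resolution.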

Relevance:

100.00%

Publisher:

Abstract:

This report aims to analyse how European accounting standards (European System of Accounts, ESA-95) are interpreted and applied to the public healthcare sector, from the standpoint of comparative law. Specifically, the study focuses on the application of ESA-95 to healthcare centres in the United Kingdom, France and Germany, with the aim of reaching useful conclusions for the Public Companies and Consortia (EPIC, for their initials in Catalan) in the Catalan Public Healthcare System.

Relevance:

100.00%

Publisher:

Abstract:

In many species, the introduction of double-stranded RNA induces potent and specific gene silencing, referred to as RNA interference. This phenomenon, which is based on the targeted degradation of mRNAs and occurs in almost all eukaryotes, from trypanosomes to mice, including plants and fungi, has sparked general interest from both applied and fundamental standpoints. RNA interference, which is currently used to investigate gene function in a variety of systems, is linked to natural resistance to viruses and to transposon silencing, as if it were a primitive immune system involved in genome surveillance. Here, we review the mechanism of RNA interference in post-transcriptional gene silencing, its function in nature, its value for functional genomic analysis, and the modifications and improvements that may make it more efficient and heritable. We also discuss future directions for this versatile technique in both fundamental and applied science.

Relevance:

100.00%

Publisher:

Abstract:

Testing the efficiency of Portland Cement Concrete (PCC) curing compounds is currently done following Test Method Iowa 901-D, May 2002. Concrete test specimens are prepared from mortar materials and are wet cured for 5 hours before the curing compound is applied. All brands of curing compound submitted to the Iowa Department of Transportation are laboratory tested for comparative performance under the same test conditions. These conditions differ from field PCC paving conditions. Phase I tests followed Test Method Iowa 901-D but modified the application amounts of the curing compound. Test results showed that applying two coats of one-half thickness each increased efficiency compared with one full-thickness coat. Phase II tests also used the modified application amounts, used a concrete mix (instead of a mortar mix) and applied the curing compound a few minutes after molding. Losses during spraying of the curing compound were measured and found to be significant. Test results showed that application amounts, testing techniques, concrete specimen mix design and spray losses all influence curing compound efficiency. The significance of the spray losses indicates that the current test method (Iowa 901-D) should be revised.

Relevance:

100.00%

Publisher:

Abstract:

We examined the moderating role of national identification in understanding when a focus on intergroup similarity versus difference on ingroup stereotypical traits (manipulated with scale anchors) leads to support for discriminatory immigration policies. In line with intergroup distinctiveness research, national identification moderated the effect of the similarity-difference manipulation. Low national identifiers supported discriminatory immigration policies more when intergroup difference rather than similarity was made salient, whereas the opposite pattern was found for high national identifiers: they trended toward being more discriminatory when similarity was made salient. The impact of assimilation expectations and national identity content on the findings is discussed.

Relevance:

100.00%

Publisher:

Abstract:

A dual model with a nonlinear proton Regge trajectory in the missing-mass (M_X^2) channel is constructed. A background based on a direct-channel exotic trajectory, developed and applied earlier to describe the inclusive electron-proton cross section in the nucleon resonance region, is used. The parameters of the model are determined from extrapolations to earlier experiments. Predictions are given for the low-mass (2 < M_X^2 < 8 GeV^2) diffraction dissociation cross sections at LHC energies.
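
The abstract does not spell out the trajectory. For orientation only, nonlinear Regge trajectories in this class of dual models are often parametrized with a square-root threshold term, schematically:

```latex
% Schematic nonlinear trajectory with a threshold branch point
% (illustrative form only; the paper's parametrization may differ):
\alpha(M_X^2) = \alpha_0 + \alpha'\, M_X^2
  + \alpha_1\!\left(\sqrt{M_{\mathrm{th}}^2} - \sqrt{M_{\mathrm{th}}^2 - M_X^2}\,\right)
```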

Relevance:

100.00%

Publisher:

Abstract:

This thesis is devoted to the analysis, modeling and visualisation of spatially referenced environmental data using machine learning algorithms. Machine learning can be broadly regarded as a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates, and they are ideal for implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organising maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both with the traditional geostatistical approach, based on experimental variography, and according to the principles of machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and detects the presence of spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the k-nearest neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data, for which general regression neural networks are proposed as an efficient solution. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, on which it significantly outperformed all the other methods, particularly in emergency situations. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the past 15 years and used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals; the classification of soil types and hydrogeological units; uncertainty mapping for decision support; and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to provide a user-friendly and easy-to-use interface.

Machine Learning for geospatial data: algorithms, software tools and case studies

Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features, and they are well suited to serve as predictive engines in decision support systems for environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with that of geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to their software implementation. The main algorithms and models considered are the multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both a traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations, which helps to detect spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualisation properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters, structured as theory, applications, software tools and how-to examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; soil type and hydrogeological unit classification; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well; the software is user-friendly and easy to use.
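
The GRNN at the heart of the automatic-mapping chapter is essentially Nadaraya-Watson kernel regression: a Gaussian-weighted average of observed values, with a single smoothing parameter usually tuned by cross-validation. A minimal sketch on synthetic spatial data (illustrative, with an untuned bandwidth; not the Machine Learning Office implementation):

```python
# Minimal GRNN (Nadaraya-Watson kernel regression) for spatial interpolation.
import numpy as np

def grnn_predict(train_xy, train_z, query_xy, sigma):
    # squared distances between every query point and every training point
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel weights
    return (w @ train_z) / w.sum(axis=1)   # weighted average of values

rng = np.random.default_rng(2)
train_xy = rng.uniform(0, 10, (200, 2))    # synthetic monitoring locations
train_z = np.sin(train_xy[:, 0]) + 0.1 * rng.normal(size=200)
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50),
                            np.linspace(0, 10, 50)), -1).reshape(-1, 2)
z_map = grnn_predict(train_xy, train_z, grid, sigma=0.8)  # automatic mapping
print(z_map.shape)
```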

Relevance:

100.00%

Publisher:

Abstract:

We present a new phenomenological approach to nucleation, based on the combination of the extended modified liquid drop model and dynamical nucleation theory. The model proposes a new cluster definition that properly includes the effect of fluctuations, and it is consistent both thermodynamically and kinetically. It successfully predicts the free energy of formation of the critical nucleus using only macroscopic thermodynamic properties. It also accounts for the spinodal and shows excellent agreement with the results of recent simulations.
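
For background only, these are the standard classical-nucleation-theory liquid-drop expressions that such modified models refine (the abstract's model itself differs): the free energy of an n-molecule cluster and the resulting barrier at supersaturation S.

```latex
% Classical liquid-drop form (textbook background, not the model above):
\Delta G(n) = -\,n\,k_B T \ln S \;+\; \sigma A(n),
\qquad
\Delta G^{*} = \frac{16\pi\,\sigma^{3} v_l^{2}}{3\left(k_B T \ln S\right)^{2}},
% with sigma the surface tension, A(n) the cluster surface area and
% v_l the molecular volume of the liquid.
```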

Relevance:

100.00%

Publisher:

Abstract:

The existence of a new class of inclined periodic orbits of the collision restricted three-body problem is shown. The symmetric periodic solutions found are perturbations of elliptic Kepler orbits; they exist only for special values of the inclination and are related to the motion of a satellite around an oblate planet.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Recent data suggest that beta-blockers can be beneficial in subgroups of patients with chronic heart failure (CHF). For metoprolol and carvedilol, an increase in ejection fraction has been shown, and favorable effects on the myocardial remodeling process have been reported in some studies. We examined the effects of bisoprolol fumarate on exercise capacity and left ventricular volume with magnetic resonance imaging (MRI) and applied a novel high-resolution MRI tagging technique to determine myocardial rotation and relaxation velocity. METHODS: Twenty-eight patients (mean age, 57 +/- 11 years; mean ejection fraction, 26 +/- 6%) were randomized to bisoprolol fumarate (n = 13) or placebo therapy (n = 15). The dosage of the drugs was titrated to match that of the Cardiac Insufficiency Bisoprolol Study protocol. Hemodynamic and gas exchange responses to exercise, MRI measurements of left ventricular end-systolic and end-diastolic volumes and ejection fraction, and left ventricular rotation and relaxation velocities were measured before the administration of the drug and 6 and 12 months later. RESULTS: After 1 year, heart rate was reduced in the bisoprolol fumarate group both at rest (81 +/- 12 before therapy versus 61 +/- 11 after therapy; P <.01) and at peak exercise (144 +/- 20 before therapy versus 127 +/- 17 after therapy; P <.01), indicating a reduction in sympathetic drive. No differences were observed in heart rate responses in the placebo group. No differences were observed within or between groups in peak oxygen uptake, although work rate achieved was higher (117.9 +/- 36 watts versus 146.1 +/- 33 watts; P <.05) and exercise time tended to be higher (9.1 +/- 1.7 minutes versus 11.4 +/- 2.8 minutes; P =.06) in the bisoprolol fumarate group. A trend toward a reduction in left ventricular end-diastolic volume (-54 mL) and left ventricular end-systolic volume (-62 mL) occurred in the bisoprolol fumarate group after 1 year. Ejection fraction was higher in the bisoprolol fumarate group (25.0 +/- 7% before versus 36.2 +/- 9% after therapy; P <.05), whereas it remained unchanged in the placebo group. Most changes in volume and ejection fraction occurred during the latter 6 months of treatment. With myocardial tagging, nonsignificant reductions in left ventricular rotation velocity were observed in both groups, whereas relaxation velocity was reduced only after bisoprolol fumarate therapy (by 39%; P <.05). CONCLUSION: One year of bisoprolol fumarate therapy resulted in an improvement in exercise capacity, trends toward reductions in end-diastolic and end-systolic volumes, an increased ejection fraction, and a significantly reduced relaxation velocity. Although these results generally confirm the beneficial effects of beta-blockade in patients with chronic heart failure, they show differential effects on systolic and diastolic function.

Relevance:

100.00%

Publisher:

Abstract:

Within a developing organism, cells require information on where they are in order to differentiate into the correct cell type. Pattern formation is the process by which cells acquire and process positional cues and thus determine their fate. This can be achieved by the production and release of a diffusible signaling molecule, called a morphogen, which forms a concentration gradient: exposure to different morphogen levels leads to the activation of specific signaling pathways. Thus, in response to the morphogen gradient, cells start to express different sets of genes, forming domains characterized by a unique combination of differentially expressed genes. As a result, a pattern of cell fates and specification emerges.

Though morphogens have been known for decades, it is not yet clear how these gradients form and are interpreted in order to yield highly robust patterns of gene expression. During my PhD thesis, I investigated the properties of Bicoid (Bcd) and Decapentaplegic (Dpp), two morphogens involved in the patterning of the anterior-posterior axis of the Drosophila embryo and wing primordium, respectively. In particular, I have been interested in understanding how pattern proportions are maintained across embryos of different sizes or within a growing tissue. This property is commonly referred to as scaling and is essential for yielding functional organs and organisms. To tackle these questions, I analysed fluorescence images showing the patterns of gene expression domains in the early embryo and the wing imaginal disc. After characterizing the extent of these domains in a quantitative and systematic manner, I introduced and applied a new scaling measure to assess how well proportions are maintained. I found that scaling emerged as a universal property both in early embryos (at least far away from the Bcd source) and in wing imaginal discs (across different developmental stages). Since we were also interested in understanding the mechanisms underlying scaling and how it is transmitted from the morphogen to the target genes downstream in the signaling cascade, I also quantified scaling in mutant flies in which this property could be disrupted. While scaling is largely conserved in embryos with altered bcd dosage, my modeling suggests that Bcd trapping by the nuclei, as well as pre-steady-state decoding of the morphogen gradient, is essential to ensure precise and scaled patterning of the Bcd signaling cascade. In the wing imaginal disc, it appears that as the disc grows, the Dpp response expands and scales with tissue size. Interestingly, scaling is not perfect at all positions in the field: the scaling of the target gene domains is best where they have a function; Spalt, for example, scales best at the position in the anterior compartment where it helps to form one of the anterior veins of the wing. Analysis of mutants for pentagone, a transcriptional target of Dpp that encodes a secreted feedback regulator of the pathway, indicates that Pentagone plays a key role in scaling the Dpp activity gradient.
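
To make the notion of scaling concrete, here is a standard textbook formulation (not necessarily the scaling measure introduced in the thesis): for an exponential morphogen profile, a target-gene boundary set by a concentration threshold sits at a fixed relative position whenever the decay length grows with tissue size.

```latex
% Illustrative only: exponential gradient and threshold readout.
C(x) = C_0\, e^{-x/\lambda}, \qquad
x_\theta = \lambda \,\ln\frac{C_0}{C_\theta}.
% The boundary position x_theta scales with the tissue length L
% (x_theta / L constant across individuals) when lambda \propto L.
```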

Relevance:

100.00%

Publisher:

Abstract:

The use of belts in high-precision applications has become feasible because of the rapid development of motor and drive technology as well as the implementation of timing belts in servo systems. Belt drive systems provide high speed and acceleration, accurate and repeatable motion with high efficiency, long stroke lengths and low cost. Modeling of a linear belt-drive system and the design of its position control are examined in this work. Friction phenomena and the position-dependent elasticity of the belt are analyzed. Computer-simulated results show that the developed model is adequate. A PID controller for accurate tracking and position control is designed and applied to the real test setup. Both the simulation and the experimental results demonstrate that the designed controller meets the specified performance requirements.
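
A minimal discrete PID position loop of the kind described may be useful for orientation. The gains and plant below are illustrative only (a unit mass with viscous friction; the paper's model additionally includes belt elasticity and nonlinear friction):

```python
# Toy discrete PID position controller on a double-integrator stand-in
# for the belt axis (illustrative gains, not the paper's tuned values).
kp, ki, kd, dt = 40.0, 5.0, 8.0, 0.001
pos = vel = integral = prev_err = 0.0
target = 0.5                                   # desired position [m]

for step in range(5000):                       # 5 s of simulated time
    err = target - pos
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * deriv  # control force
    prev_err = err
    acc = u - 2.0 * vel                        # unit mass, viscous friction
    vel += acc * dt
    pos += vel * dt

print(f"final position: {pos:.4f} m")          # converges near the target
```

The proportional term acts as a spring toward the target, the derivative term adds damping, and the integral term removes any residual steady-state offset, e.g. from static friction in a real axis.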

Relevance:

100.00%

Publisher:

Abstract:

A simple non-targeted differential HPLC-APCI/MS approach has been developed to survey metabolome modifications that occur in the leaves of Arabidopsis thaliana following wound-induced stress. The wound-induced accumulation of metabolites, particularly oxylipins, was evaluated by HPLC-MS analysis of crude leaf extracts. A generic, rapid and reproducible pressurised liquid extraction procedure was developed for the analysis of limited leaf samples without the need for specific sample preparation. The presence of various oxylipins was determined by head-to-head comparison of the HPLC-MS data, filtered with a component detection algorithm and compared automatically with software that searches for small differences between similar HPLC-MS profiles. Repeatability was verified on several specimens belonging to different series. Wound-inducible jasmonates were efficiently highlighted by this non-targeted approach without the complex sample preparation required by the GC-MS-based 'oxylipin signature' procedure. Furthermore, this HPLC-MS screening technique allowed the isolation of induced compounds for further characterisation by capillary-scale NMR (CapNMR) after HPLC scale-up. In this paper, the screening method is described and applied to illustrate its potential for monitoring polar and non-polar stress-induced constituents, as well as its use in combination with CapNMR for the structural assignment of wound-induced compounds of interest.
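
The differential screening step can be sketched as peak-list alignment followed by fold-change filtering. Everything below is hypothetical (toy peak lists and tolerances; not the component-detection software used in the paper):

```python
# Sketch: flag features that are induced in the wounded-leaf profile
# relative to the control profile.
import numpy as np

def differential_features(control, wounded, rt_tol=0.1, mz_tol=0.01, fold=5.0):
    """Each profile: rows of (retention_time [min], m/z, intensity)."""
    hits = []
    for rt, mz, inten in wounded:
        # find a matching peak in the control run, if any
        match = [c for c in control
                 if abs(c[0] - rt) < rt_tol and abs(c[1] - mz) < mz_tol]
        base = match[0][2] if match else 1.0   # unmatched -> treat as induced
        if inten / base >= fold:
            hits.append((rt, mz, inten / base))
    return hits

control = np.array([(5.2, 209.1, 1e4), (7.8, 301.2, 5.0e4)])
wounded = np.array([(5.2, 209.1, 3e5),     # e.g. a jasmonate, induced ~30x
                    (7.8, 301.2, 5.2e4)])  # unchanged constituent
for rt, mz, ratio in differential_features(control, wounded):
    print(f"induced feature: RT {rt} min, m/z {mz}, fold-change {ratio:.0f}")
```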