746 results for parallel kinematic machine


Relevance:

20.00%

Publisher:

Abstract:

Emerging evidence indicates that angiogenesis and immunosuppression frequently occur simultaneously in response to diverse stimuli. Here, we describe a fundamental biological programme that involves the activation of both angiogenesis and immunosuppressive responses, often through the same cell types or soluble factors. We suggest that the initiation of these responses is part of a physiological and homeostatic tissue repair programme, which can be co-opted in pathological states, notably by tumours. This view can help to devise new cancer therapies and may have implications for aseptic tissue injury, pathogen-mediated tissue destruction, chronic inflammation and even reproduction.

Relevance:

20.00%

Publisher:

Abstract:

Traffic noise monitoring using FHWA's Demonstration Projects Division Mobile Noise Laboratory at free-field, single-wall, and parallel-barrier sites on I-380 in Evansdale, Iowa is described. Access to I-380 before it was opened to traffic afforded a controlled pass-by monitoring phase involving different vehicle types. A second phase applied identical measurement methodology to monitor real-world I-380 traffic noise. Phase I data indicated significant noise increases under the parallel-barrier condition for light-duty vehicles operating in the far lane. Phase II results showed that the actual I-380 traffic mix largely offset this effect, although minor increases in traffic noise under the parallel system were still noted. These differences in noise-barrier system effectiveness are judged to be insignificant at this particular study location.
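As a hedged illustration of the comparison described above, the sketch below energy-averages pass-by sound levels and computes barrier insertion loss. The function, the sample values, and the resulting dB figures are invented for illustration and are not measurements from the I-380 study.

```python
import numpy as np

def leq(spl_db):
    """Energy-average a set of sound pressure levels (dB) into an
    equivalent continuous level: Leq = 10*log10(mean(10^(L/10)))."""
    spl_db = np.asarray(spl_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (spl_db / 10.0)))

# Illustrative pass-by levels (dB) for one vehicle class at each site;
# these numbers are made up, not data from the study.
free_field  = [72.1, 71.5, 73.0, 72.4]
single_wall = [64.8, 65.2, 64.5, 65.0]
parallel    = [66.9, 67.3, 66.5, 67.1]

# Barrier insertion loss, and the degradation caused by reflections
# between parallel walls relative to the single wall.
il_single   = leq(free_field) - leq(single_wall)
il_parallel = leq(free_field) - leq(parallel)
print(f"single-wall insertion loss:   {il_single:.1f} dB")
print(f"parallel-wall insertion loss: {il_parallel:.1f} dB")
print(f"degradation from reflections: {il_single - il_parallel:.1f} dB")
```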

Relevance:

20.00%

Publisher:

Abstract:

The differentiation of CD4(+) or CD8(+) T cells following priming of naive cells is central to establishing the immune response against pathogens or tumors. However, our understanding of this complex process and of the significance of the multiple subsets of differentiation remains controversial. Gene expression profiling has opened new directions of investigation in immunobiology; nonetheless, the need for substantial amounts of biological material often limits its range of application. In this study, we developed procedures to perform microarray analysis on amplified cDNA from low numbers of cells, including primary T lymphocytes, and applied this technology to the study of CD4 and CD8 lineage differentiation. Gene expression profiling was performed on samples of 1000 cells from 10 different subpopulations defining the major stages of post-thymic CD4(+) or CD8(+) T cell differentiation. Surprisingly, our data revealed that while CD4(+) and CD8(+) T cell gene expression programs diverge at early stages of differentiation, they become increasingly similar as cells reach a late differentiation stage. This suggests that functional heterogeneity between Ag-experienced CD4(+) and CD8(+) T cells is more likely to be found early in post-thymic differentiation, and that late stages of differentiation may represent a common end point in the development of T lymphocytes.
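The convergence claim lends itself to a simple numeric check. The sketch below, on synthetic data (the matrices, stage names, and noise model are all hypothetical, not the study's amplified-cDNA measurements), correlates the two lineages' expression profiles stage by stage; convergence would appear as correlation rising toward the late stage.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, stages = 500, ["naive", "early", "intermediate", "late"]

# Hypothetical log-expression matrices (genes x stages) for the two
# lineages; the divergence between them is made to shrink with stage.
cd4 = rng.normal(size=(n_genes, len(stages)))
noise_scale = np.linspace(1.0, 0.1, len(stages))
cd8 = cd4 + rng.normal(scale=noise_scale, size=(n_genes, len(stages)))

# Pearson correlation between lineages at each stage: convergence of
# the programs shows up as r rising toward the late stage.
for j, stage in enumerate(stages):
    r = np.corrcoef(cd4[:, j], cd8[:, j])[0, 1]
    print(f"{stage:>12}: r = {r:.2f}")
```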

Relevance:

20.00%

Publisher:

Abstract:

Automatic environmental monitoring networks, supported by wireless communication technologies, nowadays provide large and ever-increasing volumes of data. The use of this information in natural hazard research is an important issue. Particularly useful for risk assessment and decision making are spatial maps of hazard-related parameters produced from point observations and available auxiliary information. The purpose of this article is to present and explore appropriate tools for processing large amounts of available data and producing predictions at fine spatial scales: the algorithms of machine learning, which are aimed at non-parametric, robust modelling of non-linear dependencies from empirical data. The computational efficiency of these data-driven methods allows prediction maps to be produced in real time, which makes them preferable to physical models for operational use in risk assessment and mitigation. This situation arises in particular in the spatial prediction of climatic variables (topo-climatic mapping). In the complex topography of mountainous regions, meteorological processes are strongly influenced by the relief. The article shows how these relations, possibly regionalized and non-linear, can be modelled from data using information from digital elevation models. The illustration of the developed methodology concerns the mapping of temperatures (including Föhn and temperature-inversion situations) given measurements taken from the Swiss meteorological monitoring network. The methods used in the study include data-driven feature selection, support vector algorithms and artificial neural networks.
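A minimal sketch of the kind of topo-climatic regression described, using one of the named method families (support vector regression). Everything concrete in it — the feature set, the lapse rate, the parameter values — is an assumption made for illustration, not the paper's actual pipeline or the Swiss network's data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Hypothetical station data: coordinates plus DEM-derived features.
n = 200
X = np.column_stack([
    rng.uniform(0, 100, n),     # easting (km)
    rng.uniform(0, 100, n),     # northing (km)
    rng.uniform(200, 3500, n),  # elevation (m) from the DEM
    rng.uniform(0, 40, n),      # slope (degrees) from the DEM
])
# Synthetic temperature: a ~0.65 K / 100 m lapse rate plus noise.
y = 15.0 - 0.0065 * X[:, 2] + rng.normal(scale=0.8, size=n)

# SVR on standardized geo-features; kernel regression of this kind can
# capture regionalized, non-linear relief effects if present in the data.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.2))
model.fit(X, y)

grid_point = np.array([[50.0, 50.0, 1500.0, 10.0]])
print(f"predicted temperature: {model.predict(grid_point)[0]:.1f} C")
```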

Relevance:

20.00%

Publisher:

Abstract:

Although various foot models have been proposed for kinematics assessment using skin markers, no objective justification exists for the foot segmentations. This study proposed objective kinematic criteria to define which foot joints are relevant (dominant) in skin-marker assessments. Among the studied joints, the shank-hindfoot, hindfoot-midfoot and medial-lateral forefoot joints were found to have greater mobility than the flexibility of their neighbouring bonesets. The amplitude and pattern consistency of these joint angles confirmed their dominance, although the consistency of the medial-lateral forefoot joint amplitude was lower. These three joints also showed acceptable sensitivity to experimental errors, which supported their dominance. The study concluded that, to be reliable for assessments using skin markers, the foot and ankle complex can be divided into shank, hindfoot, medial forefoot, lateral forefoot and toes. Kinematics from foot models with more segments should be used more cautiously.
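A hedged sketch of the amplitude side of such a dominance criterion: compare a joint's range of motion against the flexibility of its neighbouring boneset over a gait cycle. The angle traces and the threshold below are invented, not the study's data or its exact criterion.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 101)  # one normalized gait cycle

# Hypothetical sagittal-plane angle traces (degrees): a candidate joint
# and the intra-segment flexibility of its neighbouring boneset.
joint_angle  = 20 * np.sin(2 * np.pi * t) + rng.normal(scale=1.0, size=t.size)
boneset_flex = 3 * np.sin(2 * np.pi * t) + rng.normal(scale=1.0, size=t.size)

def amplitude(a):
    """Range of motion over the cycle."""
    return a.max() - a.min()

# The criterion sketched here: joint mobility should clearly exceed
# the flexibility of the neighbouring boneset.
ratio = amplitude(joint_angle) / amplitude(boneset_flex)
print(f"mobility/flexibility ratio: {ratio:.1f}  (dominant if >> 1)")
```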

Relevance:

20.00%

Publisher:

Abstract:

The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications owing to their significantly lower cost than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200'000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communication networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry?
Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike with CISC processors, the RISC processor-architecture business is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which give customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, together with the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. As a slow-clockspeed industry, however, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply systems based on vertically integrated architectures consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability on a very narrow customer base, owing to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation, subject to competition among the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development; second, through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes industrial automation could take, given the looming rise of the Internet of Things and weightless networks. Industrial automation is an industry dominated by a handful of global players, each focused on maintaining its own proprietary solutions. The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, created the new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.

Relevance:

20.00%

Publisher:

Abstract:

The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are described, and real case studies based on environmental and pollution data are carried out. The book includes a CD-ROM with the Machine Learning Office software and sample data sets, allowing both students and researchers to put the concepts into practice quickly.

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: The purpose of this study was to adapt and improve a minimally invasive two-step postmortem angiographic technique for use on human cadavers. Detailed mapping of the entire vascular system is almost impossible with conventional autopsy tools. The technique described should be valuable in the diagnosis of vascular abnormalities. MATERIALS AND METHODS: Postmortem perfusion with an oily liquid is established with a circulation machine. An oily contrast agent is introduced as a bolus injection, and radiographic imaging is performed. In this pilot study, the upper or lower extremities of four human cadavers were perfused. In two cases, the vascular system of a lower extremity was visualized with anterograde perfusion of the arteries. In the other two cases, in which the suspected cause of death was drug intoxication, the veins of an upper extremity were visualized with retrograde perfusion of the venous system. RESULTS: In each case, the vascular system was visualized up to the level of the small supplying and draining vessels. In three of the four cases, vascular abnormalities were found. In one instance, a venous injection mark engendered by the self-administration of drugs was rendered visible by exudation of the contrast agent. In the other two cases, occlusion of the arteries and veins was apparent. CONCLUSION: The method described is readily applicable to human cadavers. After establishment of postmortem perfusion with paraffin oil and injection of the oily contrast agent, the vascular system can be investigated in detail and vascular abnormalities rendered visible.

Relevance:

20.00%

Publisher:

Abstract:

Introduction. Quantification of daily upper-limb activity is a key determinant in the evaluation of shoulder surgery. For a number of shoulder diseases, problems in performing daily activities have been expressed in terms of upper-limb usage and non-usage. Many instruments measure upper-limb movement but do not differentiate between use of the left and the right shoulder. Several methods have measured it using only accelerometers, pressure sensors or video-based analysis; however, there is no standard or widely used objective measure of upper-limb movement. We report here an objective method to measure upper-limb movement, and we examined the use of 3D accelerometers and 3D gyroscopes for that purpose. Methods. We studied 8 subjects with a unilateral pathological shoulder (8 with rotator cuff disease; 53 ± 8 years old) and compared them to 18 control subjects (10 right-handed, 8 left-handed; 32 ± 8 years old, younger than the patient group to be almost certain they had no unrecognized shoulder pathology). The Simple Shoulder Test (SST) and Disabilities of the Arm and Shoulder Score (DASH) questionnaires were completed by each subject. Two modules, each with 3 miniature capacitive gyroscopes and 3 miniature accelerometers, were fixed by a patch on the dorsal side of the distal humerus, and one module with 3 gyroscopes and 3 accelerometers was fixed on the thorax. The subject wore the system for one day (8 hours), at home or wherever he or she went. We used a technique based on the 3D acceleration and 3D angular velocity from the modules attached to the humerus. Results. As expected, we observed that for standing and sitting postures the right side is used more than the left for a healthy right-handed person (and likewise the left side for a healthy left-handed person). Subjects used their dominant upper limb 18% more than the non-dominant upper limb. The measurements on patients in daily life showed that a patient used the non-affected, non-dominant side more during daily activity when the dominant side was the affected shoulder; when the dominant side was not the affected shoulder, the difference could be shown only during walking periods. Discussion-Conclusion. The technique developed and used allowed quantification of the difference between dominant and non-dominant, and between affected and unaffected, upper-limb activity. These results are encouraging for future evaluation of patients with shoulder injuries before and after surgery. The feasibility and patient acceptability of the method, using body-fixed sensors for ambulatory evaluation of upper-limb kinematics, was shown.
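A hedged sketch of how such left/right usage asymmetry might be quantified from the humeral modules: threshold the angular-velocity magnitude to count "active" samples per limb, then compare the fractions. The sampling rate, threshold and signals below are assumptions, not the study's actual processing chain.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, hours = 50, 8                 # assumed 50 Hz over an 8-hour day
n = fs * 3600 * hours

# Hypothetical 3D angular-velocity magnitudes (deg/s) from the two
# humerus-mounted modules; real signals would come from the gyroscopes.
dominant     = np.abs(rng.normal(scale=30, size=n))
non_dominant = np.abs(rng.normal(scale=25, size=n))

def usage_fraction(gyro_mag, thresh=20.0):
    """Fraction of samples where the limb counts as 'in motion'."""
    return np.mean(gyro_mag > thresh)

ud, un = usage_fraction(dominant), usage_fraction(non_dominant)
print(f"dominant usage:     {ud:.1%}")
print(f"non-dominant usage: {un:.1%}")
print(f"relative asymmetry: {(ud - un) / un:.1%}")
```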

Relevance:

20.00%

Publisher:

Abstract:

Machine learning for geospatial data: algorithms, software tools and case studies
The thesis is devoted to the analysis, modelling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, non-linear, robust and efficient modelling tools. They can find solutions to classification, regression and probability density modelling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced features ("geo-features"). They are well suited to implementation as predictive engines in decision support systems, for purposes ranging from pattern recognition and modelling to prediction and automatic mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical space, but they are indispensable in high-dimensional geo-feature spaces.
The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation: the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation.
Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualisation properties.
An important part of the thesis deals with a current hot topic: automatic mapping of geospatial data. The general regression neural network is proposed as an efficient model for this task. The performance of the GRNN is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions.
The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both for teaching (including international workshops in China, France, Italy, Ireland and Switzerland) and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to make the interface user-friendly and easy to use.
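Since the GRNN is central to the automatic-mapping result, here is a minimal sketch of the model itself: a general regression neural network is Nadaraya-Watson kernel regression, i.e. a Gaussian-kernel weighted average of training targets with a single smoothing parameter. The toy data below merely stand in for spatial measurements such as the SIC 2004 set; sigma would normally be tuned by cross-validation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """General regression neural network (Nadaraya-Watson kernel
    regression): Gaussian-kernel weighted average of training targets."""
    # Squared Euclidean distances, queries x training points.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Toy 2D spatial data standing in for real measurements.
rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(100, 2))
y = np.sin(3 * X[:, 0]) + np.cos(3 * X[:, 1]) + rng.normal(scale=0.1, size=100)

grid = np.array([[0.5, 0.5], [0.1, 0.9]])
print(grnn_predict(X, y, grid, sigma=0.1))
```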

Relevance:

20.00%

Publisher:

Abstract:

The research evaluated the effect of machine traffic on soil compaction and the least limiting water range in relation to soybean cultivar yields over two years in a Haplustox soil. The six treatments corresponded to passes of a tractor (11 Mg weight) over the same track: T0, no compaction; T1*, one pass; T1, one; T2, two; T4, four; and T6, six. In treatment T1*, compaction was applied when the soil was dry in 2003/2004, and with a 4 Mg tractor in 2004/2005. Soybean yield was evaluated in relation to soil compaction during the two agricultural years in a completely randomized design (compaction levels); in the second year, a factorial scheme was used (compaction levels, with and without irrigation), with four replicates represented by 9 m² plots. In the first year, soybean [Glycine max (L.) Merr.] cultivar IAC Foscarim 31 was cultivated without irrigation; in the second year, cultivars IAC Foscarim 31 and MG/BR 46 (Conquista) were cultivated with and without irrigation. Machine traffic causes compaction and reduces soybean yield at soil penetration resistance between 1.64 and 2.35 MPa and bulk density between 1.50 and 1.53 Mg m-3. The soil bulk density at which soybean cultivar yields begin to decrease is lower than the critical value at which the least limiting water range vanishes (LLWR = 0).
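For readers unfamiliar with the least limiting water range, a hedged sketch of its usual textbook definition: the LLWR is the window of volumetric water content bounded at the wet end by field capacity or 10% air-filled porosity, and at the dry end by the wilting point or a 2 MPa penetration-resistance limit; compaction narrows the window until it can vanish (LLWR = 0). The numbers below are illustrative, not values from this study.

```python
def llwr(theta_fc, theta_wp, theta_afp10, theta_pr2):
    """Least limiting water range: the water-content window where field
    capacity, wilting point, aeration (10% air-filled porosity) and
    penetration resistance (2 MPa) are all non-limiting. If the limits
    cross, LLWR = 0, which is how heavy compaction makes the range vanish."""
    upper = min(theta_fc, theta_afp10)  # wet-end limit
    lower = max(theta_wp, theta_pr2)    # dry-end limit
    return max(0.0, upper - lower)

# Illustrative water contents (m3/m3); not values from the study.
print(llwr(theta_fc=0.32, theta_wp=0.18, theta_afp10=0.30, theta_pr2=0.24))  # 0.06
print(llwr(theta_fc=0.32, theta_wp=0.18, theta_afp10=0.24, theta_pr2=0.26))  # 0.0 (compacted)
```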