97 results for Shearing machines
Abstract:
Development and environmental issues of small cities in developing countries have largely been overlooked although these settlements are of global demographic importance and often face a "triple challenge"; that is, they have limited financial and human resources to address growing environmental problems that are related to both development (e.g., pollution) and under-development (e.g., inadequate water supply). Neoliberal policy has arguably aggravated this challenge as public investments in infrastructure generally declined while the focus shifted to the metropolitan "economic growth machines". This paper develops a conceptual framework and agenda for the study of small cities in the global south, their environmental dynamics, governance and politics in the current neoliberal context. While small cities are governed in a neoliberal policy context, they are not central to neoliberalism, and their (environmental) governance therefore seems to differ from that of global cities. Furthermore, "actually existing" neoliberal governance of small cities is shaped by the interplay of regional and local politics and environmental situations. The approach of urban political ecology and the concept of rural-urban linkages are used to consider these socio-ecological processes. The conceptual framework and research agenda are illustrated in the case of India, where the agency of small cities in regard to environmental governance seems to remain limited despite formal political decentralization.
Abstract:
We combined structural analysis, thermobarometry and oxygen isotope geochemistry to constrain the evolution of kyanite- and/or andalusite-bearing quartz veins from the amphibolite-facies metapelites of the Simano nappe, in the Central Alps of Switzerland. The Simano nappe records a complex polyphase tectonic evolution associated with nappe stacking during the Tertiary Alpine collision (D1). The second regional deformation phase (D2) is responsible for the main penetrative schistosity and mineral lineation, and formed during top-to-the-north thrusting. During the next stage of deformation (D3) the aluminosilicate-bearing veins formed by crystallization in tension gashes, in tectonic shadows of boudins, as well as along shear bands associated with top-to-the-north shearing. D2 and D3 are coeval with the Early Miocene metamorphic peak, characterised by kyanite + staurolite + garnet + biotite assemblages in metapelites. The peak pressure (P) and temperature (T) conditions recorded are constrained by multiple-equilibrium thermobarometry at 630 ± 20 °C and 8.5 ± 1 kbar (~27 km depth), in agreement with oxygen isotope thermometry indicating isotopic equilibration of quartz-kyanite pairs at 670 ± 50 °C. Quartz-kyanite pairs from the aluminosilicate-bearing quartz veins yield equilibration temperatures of 645 ± 20 °C, confirming that the veins formed under conditions near the metamorphic peak. Quartz and kyanite from the veins and the surrounding metapelites have comparable isotopic compositions. Local intergranular diffusion at the borders of the veins controls mass transfer and the growth of the product assemblage, inducing local mobilization of SiO2 and Al2O3. Andalusite is absent from the host rocks but common in the quartz veins, where it often pseudomorphs kyanite. For andalusite to be stable at T-max, the pressure in the veins must have been substantially lower than lithostatic. An alternative explanation consistent with structural observations would be inheritance by andalusite of the kyanite isotopic signature during polymorphic transformation after the metamorphic peak.
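For context, mineral-pair oxygen isotope thermometry of the kind cited above generally rests on a fractionation-temperature relation of the following form. This is a generic sketch only: the calibration constant A is pair-specific and is not given in this abstract, so the expression is not the calibration used in the study.

```latex
% Generic quartz-kyanite oxygen isotope thermometer (A is an experimentally
% calibrated constant for the mineral pair; T is in kelvin):
\[
  \Delta^{18}\mathrm{O}_{\mathrm{qz\text{-}ky}} \;\approx\; 1000\,\ln\alpha_{\mathrm{qz\text{-}ky}}
  \;=\; \frac{A \times 10^{6}}{T^{2}},
  \qquad
  T \;=\; \sqrt{\frac{A \times 10^{6}}{\Delta^{18}\mathrm{O}_{\mathrm{qz\text{-}ky}}}}
\]
% Once A is fixed, the measured quartz-kyanite fractionation yields the
% equilibration temperatures quoted above (e.g. 670 ± 50 °C).
```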
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). As a benchmark model, the simple k-nearest neighbour algorithm is considered. PNN is a neural network reformulation of well-known nonparametric principles of probability density modeling using a kernel density estimator together with Bayesian optimal or maximum a posteriori decision rules. PNN is well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNN is that it can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently, SVMs have been successfully applied to different environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper both simulated and real data case studies (low- and high-dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the algorithms applied.
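As an illustration of the PNN idea described above (a kernel density estimate per class combined with a Bayes decision rule), the following minimal sketch shows one possible implementation; the Gaussian kernel, the bandwidth value and the synthetic two-class spatial data are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5, priors=None):
    """Minimal probabilistic neural network (Parzen-window) classifier:
    estimate a Gaussian kernel density per class and apply a Bayes
    (maximum a posteriori) decision rule."""
    classes = np.unique(y_train)
    if priors is None:  # default to empirical class priors
        priors = {c: np.mean(y_train == c) for c in classes}
    posteriors = np.zeros((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        # squared distances between each test point and the class patterns
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)
        density = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
        posteriors[:, j] = priors[c] * density
    # normalise rows to 1 so the output doubles as an uncertainty measure
    posteriors /= posteriors.sum(axis=1, keepdims=True)
    return classes[np.argmax(posteriors, axis=1)], posteriors

# toy spatial example: two classes of points in the plane
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
labels, probs = pnn_predict(X, y, X[:5], sigma=0.5)
print(labels, probs.round(2))
```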
Abstract:
Machine learning has been widely applied to analyze data in various domains, but it is still new to personalized medicine, especially dose individualization. In this paper, we focus on the prediction of drug concentrations using Support Vector Machines (SVM) and on the analysis of the influence of each feature on the prediction results. Our study shows that SVM-based approaches achieve prediction results similar to those of a pharmacokinetic model. The two proposed example-based SVM methods demonstrate that individual features help to increase the accuracy of drug concentration predictions with a reduced library of training data.
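A hedged sketch of the general approach described (SVM-based prediction of drug concentration plus an assessment of each feature's influence): the feature names, the synthetic data, the use of scikit-learn's SVR and the permutation-importance analysis are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.inspection import permutation_importance

# hypothetical patient features (dose, weight, age, time since dose)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
# synthetic "concentrations" driven mostly by the first two features
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)

# influence of each feature on the predictions, estimated by permutation
result = permutation_importance(model, X, y, n_repeats=20, random_state=1)
for name, imp in zip(["dose", "weight", "age", "time_since_dose"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```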
Abstract:
This article presents an experimental study of the classification ability of several classifiers for multi-class classification of cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask forensic laboratories to determine the chemotype of a seized cannabis plant and then to conclude whether the plantation is legal or not. This classification is mainly performed when the plant is mature, as required by the EU official protocol, and the classification of cannabis seedlings is therefore a time-consuming and costly procedure. A previous study by the authors investigated this problem [1] and showed that it is possible to differentiate between drug-type (illegal) and fibre-type (legal) cannabis at an early stage of growth using gas chromatography interfaced with mass spectrometry (GC-MS), based on the relative proportions of eight major leaf compounds. The aims of the present work are, on the one hand, to continue the former work and to optimize the methodology for the discrimination of drug- and fibre-type cannabis developed in the previous study and, on the other hand, to investigate the possibility of predicting illegal cannabis varieties. Seven classifiers for differentiating between cannabis seedlings are evaluated in this paper, namely Linear Discriminant Analysis (LDA), Partial Least Squares Discriminant Analysis (PLS-DA), Nearest Neighbour Classification (NNC), Learning Vector Quantization (LVQ), Radial Basis Function Support Vector Machines (RBF SVMs), Random Forest (RF) and Artificial Neural Networks (ANN). The performance of each method was assessed using the same analytical dataset, which consists of 861 samples split into drug- and fibre-type cannabis, with drug-type cannabis being made up of 12 varieties (i.e. 12 classes). The results show that linear classifiers are not able to manage the distribution of classes, in which some overlap areas exist, for either classification problem. Unlike linear classifiers, NNC and RBF SVMs best differentiate cannabis samples for both the 2-class and 12-class classifications, with average classification results up to 99% and 98%, respectively. Furthermore, RBF SVMs correctly classified the independent validation set, which consists of cannabis plants coming from police seizures, as drug-type cannabis. For forensic casework, this study shows that the discrimination between cannabis samples at an early stage of growth is possible with fairly high classification performance for discriminating between cannabis chemotypes or between drug-type cannabis varieties.
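To make the evaluation protocol concrete, the sketch below shows how an RBF SVM (the best-performing classifier above) could be scored on a dataset of eight chemical proportions per sample for both the 2-class (drug vs fibre) and 12-class (variety) problems; the synthetic data, the stratified cross-validation scheme and the hyperparameter values are illustrative assumptions, not the study's actual settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(2)
n_samples, n_features = 861, 8              # 8 major leaf compounds per sample
X = rng.normal(size=(n_samples, n_features))
y_type = rng.integers(0, 2, n_samples)      # drug vs fibre (2-class problem)
y_variety = rng.integers(0, 12, n_samples)  # drug-type varieties (12-class problem)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=2)

for name, y in [("2-class", y_type), ("12-class", y_variety)]:
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"{name} mean accuracy: {scores.mean():.3f}")
```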
Abstract:
This thesis project was aimed at studying the molecular mechanisms underlying learning and memory formation, in particular as they relate to the metabolic coupling between astrocytes and neurons. For that purpose, changes in the metabolic activity of different mouse brain regions after 1 or 9 days of training in an eight-arm radial maze were assessed by (14C) 2-deoxyglucose (2DG) autoradiography. Significant differences were identified in the areas engaged during the behavioral task at day 1 (when animals are confronted for the first time with the learning task) and at day 9 (when animals are highly performing). These areas include the hippocampus, the fornix, the parietal cortex, the laterodorsal thalamic nucleus and the mammillary bodies at day 1, and the anterior cingulate, the retrosplenial cortex and the dorsal striatum at day 9. Two of these cerebral regions (those presenting the greatest changes at day 1 and day 9: the hippocampus and the retrosplenial cortex, respectively) were isolated by laser capture microdissection, and selected genes related to neuron-glia metabolic coupling, glucose metabolism and synaptic plasticity were analyzed by RT-PCR. The 2DG and gene expression analyses were performed at three different times: 1) immediately after the end of the behavioral paradigm, 2) 45 minutes and 3) 6 hours after training. The main goal of this study was the identification of the metabolic adaptations following the learning task. The gene expression results demonstrate that the learning task profoundly modulates the pattern of gene expression over time, meaning that these two cerebral regions with a high 2DG signal (hippocampus and retrosplenial cortex) adapted their metabolic molecular machinery accordingly. Almost all studied genes show a higher expression in the hippocampus at day 1 compared with day 9, while an increased expression was found in the retrosplenial cortex at day 9. These molecular adaptations are already observable 45 minutes after the end of the task. However, 6 hours after training a high gene expression was found at day 9 (compared with day 1) in both regions, suggesting that a single day of training is not sufficient to detect transcriptional modifications several hours after the task. Thus, the gene expression data match the 2DG results, indicating a transfer of information in time (from day 1 to day 9) and in space (from the hippocampus to the retrosplenial cortex), at both the cellular and the molecular level. Moreover, learning seems to modify the neuron-glia metabolic coupling, since several genes involved in this coupling are induced. These results also suggest a role of glia in neuronal plasticity.
Abstract:
Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are therefore of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man oriented), FMEA (system oriented), or HAZOP (process oriented), is not satisfactory. The use of a dynamic modeling approach that allows multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized with an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an OH&S perspective. The industrial process is modeled as a set of interconnected subnets (state spaces), which describe its constitutive machines. Process-related factors are introduced explicitly through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and consequently the interconnection of the measure constraints. This is reflected in the construction of constraint enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework. The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process but, above all, it opens perspectives in the fields of risk comparison and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
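To give a flavour of the modelling style described above (machines as interconnected state spaces whose transitions fire when constraints on the flowing entities are satisfied, with man-machine interactions as triggering events), here is a deliberately simplified, generic Petri-net-style sketch in Python. It is not the CO-OPN formalism or the MORM model itself, and the place, transition and flow names are invented for illustration.

```python
class PetriNet:
    """Minimal place/transition net: a transition fires when every input
    place holds enough tokens and an optional guard (a constraint on the
    entity flowing through the machine) is satisfied."""
    def __init__(self, marking):
        self.marking = dict(marking)        # tokens per place
        self.transitions = {}

    def add_transition(self, name, inputs, outputs, guard=None):
        self.transitions[name] = (inputs, outputs, guard)

    def fire(self, name, **flow):
        inputs, outputs, guard = self.transitions[name]
        enabled = all(self.marking.get(p, 0) >= n for p, n in inputs.items())
        if enabled and (guard is None or guard(flow)):
            for p, n in inputs.items():
                self.marking[p] -= n
            for p, n in outputs.items():
                self.marking[p] = self.marking.get(p, 0) + n
            return True
        return False

# one machine whose start is triggered by an operator interaction; the guard
# encodes a constraint on the measure attached to the flowing entity
net = PetriNet({"idle": 1, "loaded": 0, "running": 0})
net.add_transition("load", {"idle": 1}, {"loaded": 1})
net.add_transition("start", {"loaded": 1}, {"running": 1},
                   guard=lambda flow: flow.get("sheet_thickness_mm", 0) <= 6)

net.fire("load")
print(net.fire("start", sheet_thickness_mm=4), net.marking)
```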
Abstract:
MOTIVATION: Analysis of millions of pyro-sequences is currently playing a crucial role in the advance of environmental microbiology. Taxonomy-independent, i.e. unsupervised, clustering of these sequences is essential for the definition of Operational Taxonomic Units. For this application, reproducibility and robustness should be the most sought-after qualities, but they have thus far largely been overlooked. RESULTS: More than 1 million hyper-variable internal transcribed spacer 1 (ITS1) sequences of fungal origin have been analyzed. The ITS1 sequences were first properly extracted from 454 reads using generalized profiles. Then otupipe, cd-hit-454, ESPRIT-Tree and DBC454, a new algorithm presented here, were used to analyze the sequences. A numerical assay was developed to measure the reproducibility and robustness of these algorithms. DBC454 was the most robust, closely followed by ESPRIT-Tree. DBC454 features density-based hierarchical clustering, which complements the other methods by providing insights into the structure of the data. AVAILABILITY: An executable is freely available for non-commercial users at ftp://ftp.vital-it.ch/tools/dbc454. It is designed to run under MPI on a cluster of 64-bit Linux machines running Red Hat 4.x, or on a multi-core OSX system. CONTACT: dbc454@vital-it.ch or nicolas.guex@isb-sib.ch.
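The reproducibility idea above can be illustrated with a small numerical sketch: cluster two perturbed copies of the same data and compare the resulting partitions with the adjusted Rand index. The use of DBSCAN, the surrogate feature vectors and the adjusted Rand index here are stand-ins chosen for illustration; they are not the DBC454 algorithm or the paper's actual assay.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(3)
# surrogate "sequence" feature vectors: three dense groups in 5 dimensions
base = np.vstack([rng.normal(c, 0.3, (100, 5)) for c in (0.0, 2.0, 4.0)])

def cluster(noise_scale):
    """Re-cluster a slightly perturbed copy of the data."""
    perturbed = base + rng.normal(scale=noise_scale, size=base.shape)
    return DBSCAN(eps=0.8, min_samples=5).fit_predict(perturbed)

labels_a = cluster(0.05)
labels_b = cluster(0.05)
# 1.0 means the two runs recover identical partitions (perfect reproducibility)
print("reproducibility (ARI):", adjusted_rand_score(labels_a, labels_b))
```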
Abstract:
The presence of meteoric fluids synchronous with the activity of extensional detachment zones (Famin, 2004; Mulch et al., 2007; Gébelin et al., 2011) implies that extensional systems involve fluid convection at a crustal scale, which results in high geothermal gradients within active detachment zones (Morrison and Anderson, 1998; Gottardi et al., 2011). In addition, the metamorphic reactions related to fluid infiltration in extensional shear zones can influence the rheology of the system (White and Knipe, 1978) and ultimately how strain localizes in the crust. In this thesis, two shear zones that were permeated by meteoric fluids are studied, one quartzite-dominated and the other of granitic composition; the relations between strain, fluid, and evolving rock composition are addressed using structural, microstructural, and chemical/isotopic measurements. The study of the Columbia River detachment that bounds the Kettle core complex (Washington, USA) demonstrates that the mylonitic fabrics in the 100 m thick quartzite-dominated detachment footwall developed within one million years. The main shearing stage occurred at 365 ± 30°C, when oxygen isotopes of quartz and muscovite equilibrated owing to coeval deformation and dynamic recrystallization of these minerals. The detachment shear zone records a decrease in temperature, and dislocation creep during detachment shearing gave way to dissolution-precipitation and fracturing in the later stages of detachment activity. Fluid flow switched from pervasive to channelized, leading to isotopic disequilibrium between different minerals. The Bitterroot shear zone detachment (Montana, USA) developed a 600 m thick mylonite zone, with well-developed transitions from protomylonite to ultramylonite. The localization of deformation relates directly to the intensity of feldspar hydration, a major rock-softening metamorphic reaction. Bulk-rock analyses of the mylonitic series indicate lateral mass transfer in the mylonite (no volume change) and significant volume loss in the ultramylonite. The hydrogen isotope composition of phyllosilicates shows (1) the presence of an initial magmatic/metamorphic source, characterized by the granodiorite in which a magmatic and gneissic (protomylonite) foliation developed, and (2) a meteoric source that buffers the values of phyllosilicates in mylonite, ultramylonite, cataclasite, and deformed and undeformed quartz veins. The mineral oxygen isotope compositions were buffered by the host-rock compositions until chloritization of biotite started; the chlorite oxygen isotope values are negative (-10 per mil). Isotope thermometry indicates temperatures of isotopic equilibrium of 600-500°C in the granodiorite, 500-300°C in the mylonite, and 300-200°C for brittle fabrics (cataclasite and quartz veins). Results from this work suggest a general model for fluid-rock-strain feedbacks in detachment systems that are permeated by meteoric fluids. Phyllosilicates have preserved in their hydrogen isotope values evidence for the interaction between rock and meteoric fluids during mylonite development. Fluid flow generates mass transfer along the tectonic anisotropy, and mylonites do not undergo significant volume change, except locally in ultramylonite zones.
Hydration of detachment shear zones accompanies mechanical grain-size reduction and enhances strain softening and localization. Self-exhuming detachment shear zones evolve rapidly (within a few million years) through the ductile-to-brittle transition, which is partly controlled by the thermal effect of circulating surface fluids. Detachment systems are zones in the crust where strain and fluid flow are coupled; these systems evolve rapidly toward strain localization and therefore efficient exhumation.
Abstract:
A multiwell plate bioassay was developed using genetically modified bacteria (bioreporter cells) to detect inorganic arsenic extracted from rice. The bacterial cells expressed luciferase upon exposure to arsenite, the activity of which was detected by measurement of cellular bioluminescence. The bioreporter cells detected arsenic in all rice varieties tested, with averages of 0.02-0.15 microg of arsenite equivalent per gram of dry weight and a method detection limit of 6 ng of arsenite per gram of dry rice. This amounted to between approximately 20 and 90% of the total As content reported by chemical methods for the same sample and suggested that a major proportion of arsenic in rice is in the inorganic form. Calibrations of the bioassay with pure inorganic and organic arsenic forms showed that the bacterial cells react to arsenite with highest affinity, followed by arsenate (with 25% response relative to an equivalent arsenite concentration) and trimethylarsine oxide (at 10% relative response). A method for biocompatible arsenic extraction was elaborated, which most optimally consisted of (i) grinding rice to powder, (ii) mixing with an aqueous solution containing pancreatic enzymes, (iii) mechanical shearing, (iv) extraction in mild acid conditions and moderate heat, and (v) centrifugation and pH neutralization. Detection of mainly inorganic arsenic by the bacterial cells may have important advantages for toxicity assessment of rice consumption and would form a good complement to total chemical arsenic determination.
Abstract:
Avalanche forecasting is a complex process involving the assimilation of multiple data sources to make predictions over varying spatial and temporal resolutions. Numerically assisted forecasting often uses nearest neighbour (NN) methods, which are known to have limitations when dealing with high-dimensional data. We apply Support Vector Machines (SVMs) to a dataset from Lochaber, Scotland, to assess their applicability in avalanche forecasting. SVMs belong to a family of theoretically grounded techniques from machine learning and are designed to deal with high-dimensional data. Initial experiments showed that SVMs gave results comparable with NN methods for categorical and probabilistic forecasts. Experiments utilising the ability of SVMs to deal with high dimensionality in producing a spatial forecast show promise, but require further work.
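A minimal sketch of the kind of comparison described (nearest neighbours versus an SVM producing categorical and probabilistic avalanche forecasts); the synthetic weather/snowpack features, the Brier score as the probabilistic metric and the specific hyperparameters are illustrative assumptions, not the study's setup.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, brier_score_loss

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 10))                       # daily weather/snowpack features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)  # avalanche day?
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=10),
    "SVM": SVC(kernel="rbf", probability=True, random_state=4),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]           # probabilistic forecast
    print(name,
          "accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3),
          "Brier:", round(brier_score_loss(y_te, proba), 3))
```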
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important first to define the influence of each factor, and in particular the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but was not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. Existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top approach in method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation, whereas support vector machines (SVM) performed well under balanced category conditions.
In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient indoor radon decision making.
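One of the exploratory tools mentioned above, the General Regression Neural Network, is essentially Nadaraya-Watson kernel regression. The sketch below is a generic GRNN in that sense; the Gaussian kernel bandwidth and the synthetic coordinates and values are chosen for illustration rather than taken from the Swiss indoor radon data.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    """General Regression Neural Network: a kernel-weighted average of the
    training values (Nadaraya-Watson estimator with a Gaussian kernel)."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# toy spatial field: the value varies smoothly with the (x, y) coordinates
rng = np.random.default_rng(5)
coords = rng.uniform(0, 1, (300, 2))
values = np.sin(3 * coords[:, 0]) + coords[:, 1] + rng.normal(scale=0.1, size=300)

grid = np.array([[0.2, 0.2], [0.5, 0.5], [0.8, 0.8]])
print(grnn_predict(coords, values, grid, sigma=0.1))
```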
Abstract:
Although cross-sectional diffusion tensor imaging (DTI) studies have revealed significant white matter changes in mild cognitive impairment (MCI), the utility of this technique in predicting further cognitive decline is debated. Thirty-five healthy controls (HC) and 67 MCI subjects with DTI baseline data were neuropsychologically assessed at one year. Among them, 40 were stable (sMCI; 9 single-domain amnestic, 7 single-domain frontal, 24 multiple-domain) and 27 were progressive (pMCI; 7 single-domain amnestic, 4 single-domain frontal, 16 multiple-domain). Fractional anisotropy (FA) and longitudinal, radial, and mean diffusivity were measured using Tract-Based Spatial Statistics. Statistics included group comparisons and individual classification of MCI cases using support vector machines (SVM). FA was significantly higher in HC compared with MCI in a distributed network including the ventral part of the corpus callosum and right temporal and frontal pathways. There were no significant group-level differences between sMCI and pMCI or between MCI subtypes after correction for multiple comparisons. However, SVM analysis allowed an individual classification with accuracies up to 91.4% (HC versus MCI) and 98.4% (sMCI versus pMCI). When considering the MCI subgroups separately, the minimum SVM classification accuracy for stable versus progressive cognitive decline was 97.5%, in the multiple-domain MCI group. SVM analysis of DTI data provided highly accurate individual classification of stable versus progressive MCI regardless of MCI subtype, indicating that this method may become an easily applicable tool for the early individual detection of MCI subjects evolving to dementia.
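For the individual-classification step described above (an SVM applied to diffusion measures with far more features than subjects), a generic sketch using leave-one-out cross-validation is given below; the random feature matrix, the linear kernel and the evaluation scheme are illustrative assumptions rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(6)
n_subjects, n_voxels = 67, 500            # e.g. skeletonised FA values per subject
X = rng.normal(size=(n_subjects, n_voxels))
y = np.array([0] * 40 + [1] * 27)         # stable vs progressive MCI labels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.3f}")
```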
Abstract:
OBJECTIVE: To explore the user-friendliness and ergonomics of seven new-generation intensive care ventilators. DESIGN: Prospective task-performing study. SETTING: Intensive care research laboratory, university hospital. METHODS: Ten physicians experienced in mechanical ventilation, but without prior knowledge of the ventilators, were asked to perform eight specific tasks [turning the ventilator on; recognizing mode and parameters; recognizing and setting alarms; mode change; finding and activating the pre-oxygenation function; pressure support setting; stand-by; finding and activating the non-invasive ventilation (NIV) mode]. The time needed for each task was compared with a reference time (obtained by a trained physiotherapist familiar with the devices). A time >180 s was considered a task failure. RESULTS: For each of the tests on the ventilators, all physicians' times were significantly higher than the reference time (P < 0.001). A mean of 13 ± 8 task failures (16%) was observed per ventilator. The most frequently failed tasks were mode and parameter recognition, starting pressure support and finding the NIV mode. The least often failed tasks were turning on the pre-oxygenation function and alarm recognition and management. Overall, there was substantial heterogeneity between machines, some exhibiting better user-friendliness than others for certain tasks, but no ventilator was clearly better than the others on all points tested. CONCLUSIONS: The present study adds to the available literature outlining the ergonomic shortcomings of mechanical ventilators. These results suggest that closer ties between end-users and manufacturers should be promoted, at an early development phase of these machines, based on the scientific evaluation of the cognitive processes engaged by users in the clinical setting.