966 results for Respiration, Artificial [methods]


Relevance: 30.00%

Abstract:

In the analysis of medical images for computer-aided diagnosis and therapy, segmentation is often required as a preliminary step. Medical image segmentation is a complex and challenging task owing to the intricate nature of the images. The brain has a particularly complicated structure, and its precise segmentation is very important for detecting tumors, edema, and necrotic tissue in order to prescribe appropriate therapy. Magnetic Resonance Imaging (MRI) is an important diagnostic imaging technique for the early detection of abnormal changes in tissues and organs. It offers good contrast resolution for different tissues and is therefore preferred over Computerized Tomography for brain studies; consequently, the majority of research in medical image segmentation concerns MR images. At the core of this research, a set of MR images was segmented using standard image segmentation techniques to isolate a brain tumor from the other regions of the brain. The resulting images from the different segmentation techniques were then compared with each other and analyzed by professional radiologists to determine which segmentation technique is the most accurate. Experimental results show that Otsu's thresholding method is the most suitable image segmentation method for segmenting a brain tumor from a Magnetic Resonance Image.
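
As a minimal illustration of the winning technique, the sketch below applies Otsu's thresholding to a single grayscale MR slice using scikit-image; the input file name and the small-object clean-up step are illustrative assumptions, not the study's actual pipeline or evaluation protocol.

```python
# Minimal sketch: Otsu thresholding of a grayscale MR slice (illustrative only).
# "mr_slice.png" is a hypothetical input; the study's preprocessing and
# radiologist-based evaluation are not reproduced here.
import numpy as np
from skimage import io, filters, morphology

image = io.imread("mr_slice.png", as_gray=True)   # hypothetical MR slice

# Otsu's method picks the threshold that maximises between-class variance.
threshold = filters.threshold_otsu(image)
mask = image > threshold                          # bright regions, e.g. candidate tumour

# Optional clean-up: remove small speckles from the binary mask.
mask = morphology.remove_small_objects(mask, min_size=64)

print(f"Otsu threshold: {threshold:.3f}, segmented pixels: {int(mask.sum())}")
```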

Relevance: 30.00%

Abstract:

Binary classification methods can be generalized in many ways to handle multiple classes. It turns out that not all generalizations preserve the nice property of Bayes consistency. We provide a necessary and sufficient condition for consistency which applies to a large class of multiclass classification methods. The approach is illustrated by applying it to some multiclass methods proposed in the literature.

Relevance: 30.00%

Abstract:

Binary classification is a well-studied special case of the classification problem. Statistical properties of binary classifiers, such as consistency, have been investigated in a variety of settings. Binary classification methods can be generalized in many ways to handle multiple classes. It turns out that one can lose consistency when generalizing a binary classification method to deal with multiple classes. We study a rich family of multiclass methods and provide a necessary and sufficient condition for their consistency. We illustrate our approach by applying it to some multiclass methods proposed in the literature.
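
For readers unfamiliar with the setting, the sketch below shows one common and simple way a binary method is generalized to multiple classes, a one-vs-rest reduction of logistic regression in scikit-learn; it only illustrates the kind of generalization discussed, not the family of methods or the consistency analysis of the paper.

```python
# Illustrative only: a one-vs-rest reduction turns a binary classifier into a
# multiclass one. The paper studies a broader family of multiclass methods and
# their Bayes consistency; this snippet is not that analysis.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Binary base learner wrapped to handle multiple classes.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
print("one-vs-rest accuracy:", clf.score(X_te, y_te))
```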

Relevance: 30.00%

Abstract:

Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, few attempts have been made to explore structural damage using frequency response functions (FRFs). This paper illustrates the damage identification and condition assessment of a beam structure using a new FRF-based damage index and Artificial Neural Networks (ANNs). In practice, using all available FRF data as input to an artificial neural network makes training and convergence impossible; therefore, a data reduction technique, Principal Component Analysis (PCA), is introduced in the algorithm. In the proposed procedure, a large set of FRFs is divided into sub-sets in order to find the damage indices for different frequency points of different damage scenarios. The basic idea of the method is to establish features of the damaged structure using FRFs from different measurement points of different sub-sets of the intact structure. Using these features, damage indices for the different damage cases of the structure are then identified after the available FRF data are reconstructed using PCA. The damage indices corresponding to different damage locations and severities are used as input variables to the developed artificial neural networks. Finally, the effectiveness of the proposed method is illustrated and validated using a finite element model of a beam structure. The results show that the PCA-based damage index is suitable and effective for structural damage detection and condition assessment of building structures.
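
A rough sketch of the PCA-plus-ANN idea is given below using synthetic FRF-like data; the FRFs, damage severities and network layout are placeholders, and the damage index itself is not the one proposed in the paper.

```python
# Hedged sketch: compress frequency response functions (FRFs) with PCA and feed
# the reduced features to a neural network. All data here are synthetic
# placeholders, not the procedure validated in the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cases, n_freq = 200, 1024                  # damage scenarios x frequency points
severity = rng.uniform(0.0, 1.0, n_cases)    # hypothetical damage severity per case
frf = rng.normal(size=(n_cases, n_freq)) + severity[:, None]  # toy FRF magnitudes

# PCA reduces the large FRF vectors to a handful of components,
# which keeps the network training tractable.
X = PCA(n_components=10).fit_transform(frf)

X_tr, X_te, y_tr, y_te = train_test_split(X, severity, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out damage cases:", round(ann.score(X_te, y_te), 3))
```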

Relevance: 30.00%

Abstract:

In this paper, we seek to expand the use of direct methods in real-time applications by proposing a vision-based strategy for pose estimation of aerial vehicles. The vast majority of approaches make use of features to estimate motion. Conversely, the strategy we propose is based on a Multi-Resolution (MR) implementation of an image registration technique (Inverse Compositional Image Alignment, ICIA) using direct methods. An on-board camera in a downwards-looking configuration, and the assumption of planar scenes, are the bases of the algorithm. The motion between frames (rotation and translation) is recovered by decomposing the frame-to-frame homography obtained by the ICIA algorithm applied to a patch covering around 80% of the image. When visual estimation is required (e.g. during a GPS drop-out), this motion is integrated with the previously known estimate of the vehicle's state, obtained from the on-board sensors (GPS/IMU), and the subsequent estimates are based only on the vision-based motion estimation. The proposed strategy is tested with real flight data in representative stages of a flight: cruise, landing, and take-off, the latter two being considered critical. The performance of the pose estimation strategy is analyzed by comparing it with the GPS/IMU estimates. The results show correlation between the visual estimates obtained with the MR-ICIA and the GPS/IMU data, demonstrating that visual estimation can provide a good approximation of the vehicle's state when required (e.g. during GPS drop-outs). In terms of performance, the proposed strategy is able to maintain an estimate of the vehicle's state for more than one minute, at real-time frame rates, based only on visual information.
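
The sketch below illustrates the homography-decomposition step under the planar-scene assumption using OpenCV; for brevity the homography is estimated from ORB feature matches rather than the direct MR-ICIA alignment used in the paper, and the frame files and camera matrix are assumed inputs.

```python
# Hedged sketch: recover frame-to-frame rotation/translation by decomposing a
# homography under a planar-scene assumption. The homography is estimated here
# from ORB feature matches (not the paper's direct multi-resolution ICIA);
# the file names and camera matrix K are illustrative assumptions.
import cv2
import numpy as np

frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
frame2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1000)
k1, d1 = orb.detectAndCompute(frame1, None)
k2, d2 = orb.detectAndCompute(frame2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)

# Illustrative pinhole camera matrix (fx, fy, cx, cy come from calibration).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Decomposition yields up to four (R, t, n) candidates; extra constraints
# (e.g. the downward-looking configuration) are needed to pick the right one.
num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
print(f"{num} candidate motions recovered from the homography")
```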

Relevance: 30.00%

Abstract:

Previous studies have demonstrated that pattern recognition approaches to accelerometer data reduction are feasible and moderately accurate in classifying activity type in children. Whether pattern recognition techniques can be used to provide valid estimates of physical activity (PA) energy expenditure in youth remains unexplored in the research literature. Purpose: The objective of this study is to develop and test artificial neural networks (ANNs) to predict PA type and energy expenditure (PAEE) from processed accelerometer data collected in children and adolescents. Methods: One hundred participants between the ages of 5 and 15 yr completed 12 activity trials that were categorized into five PA types: sedentary, walking, running, light-intensity household activities or games, and moderate-to-vigorous-intensity games or sports. During each trial, participants wore an ActiGraph GT1M on the right hip, and oxygen uptake (VO2) was measured using the Oxycon Mobile (Viasys Healthcare, Yorba Linda, CA) portable metabolic system. ANNs to predict PA type and PAEE (METs) were developed using the following features: the 10th, 25th, 50th, 75th, and 90th percentiles and the lag-one autocorrelation. To determine the highest time resolution achievable, we extracted features from 10-, 15-, 20-, 30-, and 60-s windows. Accuracy was assessed by calculating the percentage of windows correctly classified and the root mean square error (RMSE). Results: As window size increased from 10 to 60 s, accuracy for the PA-type ANN increased from 81.3% to 88.4%. RMSE for the MET-prediction ANN decreased from 1.1 METs to 0.9 METs. At any given window size, RMSE values for the MET-prediction ANN were 30-40% lower than those of conventional regression-based approaches. Conclusions: ANNs can be used to predict both PA type and PAEE in children and adolescents using count data from a single waist-mounted accelerometer.
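
The feature set named above is simple to reproduce; the sketch below computes the five percentiles and the lag-one autocorrelation for a synthetic window of counts (the window length and count values are placeholders, and the trained ANNs themselves are not reproduced).

```python
# Minimal sketch of the stated feature set: distribution percentiles plus the
# lag-one autocorrelation of a window of accelerometer counts (synthetic data).
import numpy as np

def window_features(counts: np.ndarray) -> np.ndarray:
    """Return the 10th/25th/50th/75th/90th percentiles and lag-1 autocorrelation."""
    percentiles = np.percentile(counts, [10, 25, 50, 75, 90])
    centred = counts - counts.mean()
    denom = np.sum(centred ** 2)
    lag1 = np.sum(centred[:-1] * centred[1:]) / denom if denom > 0 else 0.0
    return np.append(percentiles, lag1)

rng = np.random.default_rng(0)
window = rng.poisson(lam=50, size=30)     # e.g. a 30-s window of 1-s counts
print(window_features(window))
```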

Relevance: 30.00%

Abstract:

This thesis develops a novel approach to robot control that learns to account for a robot's dynamic complexities while executing various control tasks, drawing inspiration from biological sensorimotor control and machine learning. A robot that can learn its own control system can account for complex situations and adapt to changes in control conditions to maximise its performance and reliability in the real world. This research developed two novel learning methods, aimed at solving the issues involved in learning to control non-rigid robots that incorporate additional dynamic complexities. The new learning control system was evaluated on a real three-degree-of-freedom elastic-joint robot arm in a series of experiments: first validating the learning method and testing its ability to generalise to new tasks, then evaluating the system during a learning control task requiring continuous online model adaptation.

Relevance: 30.00%

Abstract:

Nowadays, demand for automated gas metal arc welding (GMAW) is growing, and the need for intelligent systems to ensure the accuracy of the procedure is consequently increasing. To date, weld pool geometry has been the factor most commonly used in the quality assessment of intelligent welding systems, but it has recently been found that the Mahalanobis Distance (MD) can not only be used for this purpose but is also more efficient. In the present paper, an Artificial Neural Network (ANN) has been used to predict the MD parameter, and the advantages and disadvantages of other methods are discussed. The Levenberg-Marquardt algorithm was found to be the most effective training algorithm for the GMAW process. It is known that the number of neurons plays an important role in optimal network design; in this work, using a trial-and-error approach, 30 was found to be the optimal number of neurons. The model was investigated with different numbers of layers in a Multilayer Perceptron (MLP) architecture, and it was shown that, for the aim of this work, the optimal result is obtained with an MLP with one hidden layer. The robustness of the system was evaluated by adding noise to the input data and studying the effect of the noise on the prediction capability of the network. The experiments for this study were conducted in an automated GMAW setup integrated with a data acquisition system and prepared in a laboratory for the welding of 12 mm thick steel plate. The accuracy of the network was evaluated using the root mean squared (RMS) error between the measured and estimated values. The low error value (about 0.008) reflects the good accuracy of the model, and the comparison of the results predicted by the ANN with the test data set showed very good agreement, revealing the predictive power of the model. The ANN model offered here for the GMA welding process can therefore be used effectively for prediction purposes.
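
As a rough sketch of the idea, the code below computes a Mahalanobis distance from a reference feature set and fits a single-hidden-layer network with 30 neurons to predict it; all data are synthetic, and scikit-learn's MLP does not offer the Levenberg-Marquardt training used in the paper, so the solver differs.

```python
# Hedged sketch: derive a Mahalanobis distance (MD) quality indicator from
# reference welding-signal features and fit a 30-neuron, one-hidden-layer
# network to predict it. All data are synthetic placeholders.
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 4))            # toy "normal weld" feature set
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
mean_vec = reference.mean(axis=0)

samples = rng.normal(size=(300, 4)) * 1.5        # toy process measurements
md = np.array([mahalanobis(x, mean_vec, cov_inv) for x in samples])

X_tr, X_te, y_tr, y_te = train_test_split(samples, md, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(30,), max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, ann.predict(X_te)) ** 0.5
print("RMSE of MD prediction:", round(rmse, 4))
```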

Relevance: 30.00%

Abstract:

This thesis presents an interdisciplinary analysis of how models and simulations function in the production of scientific knowledge. The work is informed by three scholarly traditions: studies on models and simulations in the philosophy of science, so-called micro-sociological laboratory studies within science and technology studies, and cultural-historical activity theory. Methodologically, I adopt a naturalist epistemology and combine philosophical analysis with a qualitative, empirical case study of infectious-disease modelling. The study maintains a dual perspective throughout the analysis: it specifies the modelling practices and examines the models as objects of research. The research questions addressed are: 1) How are models constructed and what functions do they have in the production of scientific knowledge? 2) What is interdisciplinarity in model construction? 3) How do models become a general research tool and why is this process problematic? The core argument is that mediating models as investigative instruments (cf. Morgan and Morrison 1999) take questions as a starting point, and hence their construction is intentionally guided. This argument applies the interrogative model of inquiry (e.g., Sintonen 2005; Hintikka 1981), which conceives of all knowledge acquisition as a process of seeking answers to questions. The first question addresses simulation models as Artificial Nature, which is manipulated in order to answer the questions that initiated the model building. This account develops further the "epistemology of simulation" (cf. Winsberg 2003) by showing the interrelatedness of researchers and their objects in the process of modelling. The second question clarifies why interdisciplinary research collaboration is demanding and difficult to maintain. The nature of the impediments to disciplinary interaction is examined by introducing the idea of object-oriented interdisciplinarity, which provides an analytical framework to study changes in the degree of interdisciplinarity, the tools and research practices developed to support collaboration, and the mode of collaboration in relation to the historically mutable object of research. As my interest is in models as interdisciplinary objects, the third research problem asks how we might characterise these objects, what is typical of them, and what kinds of changes occur in the process of modelling. Here I examine the tension between specified, question-oriented models and more general models, and suggest that the specified models form a group of their own. I call these Tailor-made models, in contrast to the process of building a simulation platform that aims at generalisability and utility for health policy. This tension also underlines the challenge of applying research results (or methods and tools) to discuss and solve problems in decision-making processes.

Relevance: 30.00%

Abstract:

Light interception is a major factor influencing plant development and biomass production. Several methods have been proposed to determine this variable, but its calculation remains difficult in artificial environments with heterogeneous light. We propose a method that uses 3D virtual plant modelling and directional light characterisation to estimate light interception in highly heterogeneous light environments such as growth chambers and glasshouses. Intercepted light was estimated by coupling an architectural model and a light model for different genotypes of the rosette species Arabidopsis thaliana (L.) Heynh and a sunflower crop. The model was applied to plants of contrasting architectures, cultivated in isolation or in canopy, in natural or artificial environments, and under contrasting light conditions. The model gave satisfactory results when compared with observed data and enabled calculation of light interception in situations where direct measurements or classical methods were inefficient, such as young crops, isolated plants or artificial conditions. Furthermore, the model revealed that A. thaliana increased its light interception efficiency when shaded. To conclude, the method can be used to calculate intercepted light at organ, plant and plot levels, in natural and artificial environments, and should be useful in the investigation of genotype-environment interactions for plant architecture and light interception efficiency. This paper originates from a presentation at the 5th International Workshop on Functional–Structural Plant Models, Napier, New Zealand, November 2007.
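
A minimal sketch of the underlying calculation, under the assumption that intercepted light can be written as a sum over discretised sky directions of directional irradiance times the plant's projected leaf area, is given below; the numbers are placeholders rather than outputs of the coupled architectural and light models.

```python
# Hedged sketch of the general idea: intercepted light as a sum over sky
# sectors of directional irradiance times the plant's projected (non-occluded)
# leaf area toward each sector. Values are placeholders, not model outputs.
import numpy as np

# Directional light characterisation of a heterogeneous environment
# (e.g. a growth chamber): irradiance per sky sector, in mol m-2 d-1.
irradiance = np.array([2.0, 1.2, 0.4, 0.9])

# Projected leaf area (m2) of the 3D virtual plant toward each sector,
# as would be supplied by the architectural model.
projected_area = np.array([0.010, 0.008, 0.012, 0.006])

intercepted = float(np.sum(irradiance * projected_area))   # mol d-1
print(f"intercepted light: {intercepted:.4f} mol per day")
```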

Relevance: 30.00%

Abstract:

Cereal crops can suffer substantial damage if frosts occur at heading. Identification of post-head-emergence frost (PHEF) resistance in cereals poses a number of unique and difficult challenges. Many decades of research have failed to identify genotypes with PHEF resistance that could offer economically significant benefit to growers. Research and breeding gains have been limited by the available screening systems. Using traditional frost screening systems, genotypes that escape frost injury in trials due to spatial temperature differences and/or small differences in phenology can be misidentified as resistant. We believe that by improving techniques to minimize frost escapes, such 'false-positive' results can be confidently identified and eliminated. Artificial freezing chambers or manipulated natural frost treatments offer many potential advantages but are not yet at the stage where they can be reliably used for frost screening in breeding programmes. Here we describe the development of a novel photoperiod gradient method (PGM) that facilitates screening of genotypes of different phenology under natural field frosts at matched developmental stages. By identifying frost escapes and increasing the efficiency of field screening, the PGM ensures that research effort can be focused on finding genotypes with improved PHEF resistance. To maximize the likelihood of identifying PHEF resistance, we propose that the PGM form part of an integrated strategy to (i) source germplasm; (ii) facilitate high-throughput screening; and (iii) permit detailed validation. The PGM may also be useful in other studies where a range of developmental stages and/or synchronized development is desired.

Relevance: 30.00%

Abstract:

The driving force behind this study has been the need to develop and apply methods for investigating the hydrogeochemical processes of significance to water management and artificial groundwater recharge. Isotope partitioning of elements in the course of physicochemical processes produces isotopic variations in their natural reservoirs. The tracer properties of the stable isotope abundances of oxygen, hydrogen and carbon have been applied to investigate hydrogeological processes in Finland. The work described here has initiated the use of stable isotope methods to achieve a better understanding of these processes in the shallow glacigenic formations of Finland. In addition, the regional precipitation and groundwater records will supplement the global precipitation data and, just as importantly, provide primary background data for hydrological studies. The isotopic composition of oxygen and hydrogen in Finnish groundwaters and atmospheric precipitation was determined in water samples collected during 1995-2005. Prior to this study, no detailed records existed on the spatial or annual variability of the isotopic composition of precipitation or groundwaters in Finland. Groundwaters and precipitation in Finland display a distinct spatial distribution of the isotopic ratios of oxygen and hydrogen. The depletion of the heavier isotopes as a function of increasing latitude is closely related to the local mean surface temperature. No significant differences were observed between the mean annual isotope ratios of oxygen and hydrogen in precipitation and those in local groundwaters. These results suggest that the link between the spatial variability in the isotopic composition of precipitation and local temperature is preserved in groundwaters. Artificial groundwater recharge in glacigenic sedimentary formations offers many possibilities to apply the isotopic ratios of oxygen, hydrogen and carbon as natural tracers. In this study, the systematics of dissolved carbon have been investigated in two geochemically different glacigenic groundwater formations: a typical esker aquifer at Tuusula, in southern Finland, and a carbonate-bearing aquifer with a complex internal structure at Virttaankangas, in southwest Finland. Reducing the concentration of dissolved organic carbon (DOC) in water is a primary challenge in artificial groundwater recharge. The carbon isotope method was used as a tool to trace the role of redox processes in the decomposition of DOC. At the Tuusula site, artificial recharge leads to a significant decrease in the organic matter content of the infiltrated water. In total, 81% of the initial DOC present in the infiltrated water was removed in three successive stages of subsurface processes. Three distinct processes contributing to the reduction of the DOC content were traced: the decomposition of dissolved organic carbon in the first stage of subsurface flow appeared to be the most significant part of DOC removal, whereas the further decrease in DOC has been attributed to adsorption and, finally, to dilution with local groundwater. Here, isotope methods were used for the first time to quantify the processes of DOC removal during artificial groundwater recharge. Groundwaters in the Virttaankangas aquifer are characterized by high pH values exceeding 9, which are exceptional for shallow aquifers on glaciated crystalline bedrock. The Virttaankangas sediments were discovered to contain trace amounts of fine-grained, dispersed calcite, which has a strong tendency to increase the pH of local groundwaters. Understanding the origin of the unusual geochemistry of the Virttaankangas groundwaters is an important issue for constraining the operation of the future artificial groundwater plant. The isotope ratios of oxygen and carbon in sedimentary carbonate minerals have been successfully applied to constrain the origin of the dispersed calcite in the Virttaankangas sediments. The isotopic and chemical characteristics of the groundwater in the distinct units of the aquifer were observed to vary depending on the aquifer mineralogy, the groundwater residence time and the openness of the system to soil CO2. The high pH values of > 9 have been related to the dissolution of calcite into groundwater under closed or nearly closed system conditions relative to soil CO2, at a low partial pressure of CO2.
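
For context, isotope tracer studies of this kind commonly rely on a two-end-member mixing mass balance to estimate, for example, the fraction of local groundwater diluting the infiltrated water; the relation below is such a textbook formulation and not necessarily the exact set of equations used in this thesis.

```latex
% Generic two-end-member mixing mass balance (illustrative only): the fraction
% f of local groundwater in a mixed sample follows from the measured isotope
% ratio of the mixture and the two end-member values.
\[
  \delta_{\mathrm{mix}} = f\,\delta_{\mathrm{gw}} + (1-f)\,\delta_{\mathrm{inf}}
  \quad\Longrightarrow\quad
  f = \frac{\delta_{\mathrm{mix}} - \delta_{\mathrm{inf}}}
           {\delta_{\mathrm{gw}} - \delta_{\mathrm{inf}}}
\]
```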

Relevance: 30.00%

Abstract:

The present study deals with the application of cluster analysis, Fuzzy Cluster Analysis (FCA) and Kohonen Artificial Neural Networks (KANN) methods for classification of 159 meteorological stations in India into meteorologically homogeneous groups. Eight parameters, namely latitude, longitude, elevation, average temperature, humidity, wind speed, sunshine hours and solar radiation, are considered as the classification criteria for grouping. The optimal number of groups is determined as 14 based on the Davies-Bouldin index approach. It is observed that the FCA approach performed better than the other two methodologies for the present study.
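
A minimal sketch of the Davies-Bouldin selection step is shown below using k-means on synthetic, standardised station attributes in scikit-learn; the station data are placeholders, and the fuzzy clustering (FCA) and Kohonen network (KANN) runs of the study are not reproduced.

```python
# Hedged sketch: choose the number of station groups with the Davies-Bouldin
# index, here with k-means on synthetic standardised attributes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder for 159 stations x 8 attributes (latitude, longitude, elevation,
# temperature, humidity, wind speed, sunshine hours, solar radiation).
stations = rng.normal(size=(159, 8))
X = StandardScaler().fit_transform(stations)

scores = {}
for k in range(2, 21):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)   # lower is better

best_k = min(scores, key=scores.get)
print("number of groups with the lowest Davies-Bouldin index:", best_k)
```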

Relevance: 30.00%

Abstract:

The swelling pressure of a soil depends upon various soil parameters such as mineralogy, clay content, Atterberg limits, dry density, moisture content, and initial degree of saturation, along with structural and environmental factors. It is very difficult to model and analyze swelling pressure effectively while taking all of these aspects into consideration. Various statistical/empirical methods have been attempted to predict swelling pressure based on the index properties of soil. In this paper, the computational intelligence techniques artificial neural network (ANN) and support vector machine (SVM) have been used to develop models, based on the set of available experimental results, that predict swelling pressure from the inputs: natural moisture content, dry density, liquid limit, plasticity index, and clay fraction. The generalization of the models to data other than the training set, which is required for the successful application of a model, is discussed. A detailed study of the relative performance of the computational intelligence techniques has been carried out based on different statistical performance criteria.
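
A minimal sketch of such a comparison is given below, fitting an ANN and an SVM in scikit-learn to synthetic data with the five stated inputs and comparing held-out RMSE; the data, hyperparameters and error values are placeholders, not the paper's experimental results.

```python
# Hedged sketch: fit an ANN and an SVM to predict swelling pressure from the
# five index properties named in the abstract, then compare held-out RMSE.
# The data below are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Columns: moisture content, dry density, liquid limit, plasticity index, clay fraction.
X = rng.uniform([5, 1.2, 30, 10, 10], [40, 2.0, 90, 50, 70], size=(200, 5))
y = 0.05 * X[:, 2] + 0.08 * X[:, 3] + 0.03 * X[:, 4] + rng.normal(0, 0.5, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                      random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVR(C=10.0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name} RMSE: {rmse:.3f}")
```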