982 results for Semi-implicit methods


Relevance: 30.00%

Abstract:

Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation and signal dropout. Existing methods need to include strong priors like shape priors or analytical intensity models to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging settings, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates the limitation of fully automatic segmentation, that is, it is applicable to any kind of target and imaging settings. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound imaging (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with the ground truth provided by medical expert delineations in all applications (94% Dice values on average), and the proposed algorithm compares favorably with the literature.
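The abstract does not spell out the continuous minimum-cut solver, so the sketch below only illustrates the general idea of seed-driven segmentation on a patch graph, replacing the authors' optimizer with a simple harmonic (Dirichlet) label-propagation solve. The patch size, the Gaussian similarity weight and all function names are illustrative assumptions, not the paper's implementation.

```python
# Sketch: semi-supervised segmentation on a patch graph (harmonic label propagation).
# NOT the paper's continuous min-cut solver; it only shows how user seed labels and
# patch-similarity edge weights can be combined on a pixel graph.
import numpy as np

def extract_patches(img, radius=1):
    """Return an (H*W, patch_size) array of square patches (edge-padded)."""
    padded = np.pad(img, radius, mode="edge")
    H, W = img.shape
    patches = np.empty((H * W, (2 * radius + 1) ** 2))
    for i in range(H):
        for j in range(W):
            patches[i * W + j] = padded[i:i + 2 * radius + 1,
                                        j:j + 2 * radius + 1].ravel()
    return patches

def patch_graph_segment(img, seeds, sigma=0.1, radius=1):
    """seeds: dict {pixel index: label in {0, 1}}; returns a binary mask."""
    H, W = img.shape
    n = H * W
    patches = extract_patches(img, radius)
    weights = np.zeros((n, n))
    for i in range(H):                      # 4-connected grid graph
        for j in range(W):
            p = i * W + j
            for di, dj in ((0, 1), (1, 0)):
                qi, qj = i + di, j + dj
                if qi < H and qj < W:
                    q = qi * W + qj
                    w = np.exp(-np.sum((patches[p] - patches[q]) ** 2)
                               / (2.0 * sigma ** 2))
                    weights[p, q] = weights[q, p] = w
    lap = np.diag(weights.sum(axis=1)) - weights       # graph Laplacian
    labeled = np.array(sorted(seeds))
    unlabeled = np.setdiff1d(np.arange(n), labeled)
    y = np.array([seeds[k] for k in labeled], dtype=float)
    # Harmonic (Dirichlet) solve for the unlabeled nodes: L_uu x_u = -L_ul y_l
    x_u = np.linalg.solve(lap[np.ix_(unlabeled, unlabeled)],
                          -lap[np.ix_(unlabeled, labeled)] @ y)
    x = np.empty(n)
    x[labeled], x[unlabeled] = y, x_u
    return (x.reshape(H, W) > 0.5).astype(int)

# Tiny usage example: a noisy two-region image with one seed per region.
rng = np.random.default_rng(0)
img = np.zeros((8, 8)); img[:, 4:] = 1.0
img += 0.05 * rng.standard_normal(img.shape)
mask = patch_graph_segment(img, seeds={0: 0, 63: 1})
```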

Relevance: 30.00%

Abstract:

OBJECTIVE: The principal aim of this study was to develop a Swiss Food Frequency Questionnaire (FFQ) for the elderly population, for use in a study investigating the influence of nutritional factors on bone health. The secondary aim was to assess its validity and both its short-term and long-term reproducibility. DESIGN: A 4-day weighed record (4-d WR) was applied to 51 randomly selected women with a mean age of 80.3 years. Subsequently, a detailed FFQ was developed and cross-validated against a further 44 4-d WRs, and the short-term (1 month, n = 15) and long-term (12 months, n = 14) reproducibility was examined. SETTING: French-speaking part of Switzerland. SUBJECTS: The subjects were randomly selected women recruited from the Swiss Evaluation of the Methods of Measurement of Osteoporotic Fracture cohort study. RESULTS: Mean energy intakes by 4-d WR and FFQ showed no significant difference [1564.9 kcal (SD 351.1) and 1641.3 kcal (SD 523.2), respectively]. Mean crude nutrient intakes were also similar, with nonsignificant P-values for the differences in intake ranging from 0.13 (potassium) to 0.48 (magnesium). Similar results were found in the reproducibility studies. CONCLUSION: These findings provide evidence that this FFQ adequately estimates nutrient intakes and can be used to rank individuals within distributions of intake in specific populations.

Relevance: 30.00%

Abstract:

Background: In patients with cervical spine injury, a cervical collar may prevent cervical spine movements but renders tracheal intubation with a standard laryngoscope difficult if not impossible. We hypothesized that despite the presence of a semi-rigid cervical collar, and with the patient's head taped to the trolley, we would be able to intubate all patients with the GlideScope® and its dedicated stylet. Methods: 50 adult patients (ASA 1 or 2, BMI ≤35 kg/m²) scheduled for elective surgical procedures requiring tracheal intubation were included. After standardized induction of general anesthesia and neuromuscular blockade, the neck was immobilized with an appropriately sized semi-rigid Philadelphia Patriot® cervical collar and the head was taped to the trolley. Laryngoscopy was attempted using a Macintosh laryngoscope blade 4 and the modified Cormack-Lehane grade was noted. Subsequently, laryngoscopy with the GlideScope® was graded and followed by oro-tracheal intubation. Results: All patients were successfully intubated with the GlideScope® and its dedicated stylet. The median intubation time was 50 sec [43; 61]. The modified Cormack-Lehane grade was 3 or 4 at direct laryngoscopy. It was significantly reduced with the GlideScope® (p <0.0001), reaching 2a in most patients. Maximal mouth opening was significantly reduced with the cervical collar applied, 4.5 cm [4.5; 5.0] vs. 2.0 cm [1.8; 2.0] (p <0.0001). Conclusions: The GlideScope® allows oro-tracheal intubation in patients whose cervical spine is immobilized by a semi-rigid collar and whose head is taped to the trolley. It furthermore significantly decreases the modified Cormack-Lehane grade.

Relevance: 30.00%

Abstract:

This review paper reports the consensus of a technical workshop hosted by the European network, NanoImpactNet (NIN). The workshop aimed to review the collective experience of working at the bench with manufactured nanomaterials (MNMs), and to recommend modifications to existing experimental methods and OECD protocols. Current procedures for cleaning glassware are appropriate for most MNMs, although interference with electrodes may occur. Maintaining exposure is more difficult with MNMs compared to conventional chemicals. A metal salt control is recommended for experiments with metallic MNMs that may release free metal ions. Dispersing agents should be avoided, but if they must be used, then natural or synthetic dispersing agents are possible, and dispersion controls are essential. Time constraints and technology gaps indicate that full characterisation of test media during ecotoxicity tests is currently not practical. Details of electron microscopy, dark-field microscopy, a range of spectroscopic methods (EDX, XRD, XANES, EXAFS), light scattering techniques (DLS, SLS) and chromatography are discussed. The development of user-friendly software to predict particle behaviour in test media according to DLVO theory is in progress, and simple optical methods are available to estimate the settling behaviour of suspensions during experiments. However, for soil matrices such simple approaches may not be applicable. Alternatively, a Critical Body Residue approach may be taken in which body concentrations in organisms are related to effects, and toxicity thresholds derived. For microbial assays, the cell wall is a formidable barrier to MNMs, and end points that rely on the test substance penetrating the cell may be insensitive. Instead, assays based on the cell envelope should be developed for MNMs. In algal growth tests, the abiotic factors that promote particle aggregation in the media (e.g. ionic strength) are also important in providing nutrients, and manipulation of the media to control the dispersion may also inhibit growth. Controls to quantify shading effects, and precise details of lighting regimes, shaking or mixing, should be reported in algal tests. Photosynthesis may be more sensitive than traditional growth end points for algae and plants. Tests with invertebrates should consider non-chemical toxicity from particle adherence to the organisms. The use of semi-static exposure methods with fish can reduce the logistical issues of waste water disposal and facilitate aspects of animal husbandry relevant to MNMs. There are concerns that the existing bioaccumulation tests are conceptually flawed for MNMs and that new test(s) are required. In conclusion, most protocols will require some modifications, and recommendations are made to aid the researcher at the bench. In vitro testing strategies, as exemplified by genotoxicity assays, can be modified for MNMs, but the risk of false negatives in some assays is highlighted.
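The DLVO-based prediction software mentioned above is still in development and is not described further in the abstract. Purely as a point of reference, the sketch below evaluates the classical sphere–sphere DLVO interaction (non-retarded Hamaker van der Waals attraction plus weak-overlap double-layer repulsion) for assumed particle size, surface potential and Hamaker constant; it is a textbook simplification, not the NIN software.

```python
# Sketch: classical DLVO interaction energy for two equal spheres in a 1:1 electrolyte.
# Weak-overlap double-layer term + non-retarded Hamaker van der Waals term (h << a).
# Parameter values below are illustrative assumptions only.
import numpy as np

E0, KB, E_CHARGE, NA = 8.854e-12, 1.381e-23, 1.602e-19, 6.022e23

def debye_kappa(ionic_strength_M, eps_r=78.5, T=298.15):
    """Inverse Debye length (1/m) for a 1:1 electrolyte."""
    c = ionic_strength_M * 1e3 * NA                    # ions per m^3
    return np.sqrt(2 * c * E_CHARGE**2 / (eps_r * E0 * KB * T))

def dlvo_energy(h, radius, psi0, ionic_strength_M, hamaker=1e-20,
                eps_r=78.5, T=298.15):
    """Total DLVO energy (J) vs. surface separation h (m) for two equal spheres."""
    kappa = debye_kappa(ionic_strength_M, eps_r, T)
    v_edl = 2 * np.pi * eps_r * E0 * radius * psi0**2 * np.exp(-kappa * h)
    v_vdw = -hamaker * radius / (12 * h)               # valid for h << radius
    return v_edl + v_vdw

# Example: 50 nm particles, -30 mV surface potential, 10 mM vs. 100 mM salt.
h = np.linspace(0.3e-9, 30e-9, 200)
for ionic_strength in (0.01, 0.1):
    barrier = dlvo_energy(h, 50e-9, -0.030, ionic_strength).max() / (KB * 298.15)
    print(f"I = {ionic_strength:.2f} M: max barrier ~ {barrier:.1f} kT")
```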

Relevance: 30.00%

Abstract:

Segmenting ultrasound images is a challenging problem where standard unsupervised segmentation methods such as the well-known Chan-Vese method fail. We propose in this paper an efficient segmentation method for this class of images. Our proposed algorithm is based on a semi-supervised approach (user labels) and the use of image patches as data features. We also consider the Pearson distance between patches, which has been shown to be robust w.r.t. speckle noise present in ultrasound images. Our results on phantom and clinical data show a very high similarity agreement with the ground truth provided by a medical expert.
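The Pearson distance between patches is presumably one minus the Pearson correlation coefficient of the patch intensities; the small helper below sketches that definition (the handling of flat, zero-variance patches is an assumption of this sketch).

```python
# Sketch: Pearson distance between two image patches, assuming d = 1 - correlation.
import numpy as np

def pearson_distance(patch_a, patch_b, eps=1e-12):
    a = np.asarray(patch_a, dtype=float).ravel()
    b = np.asarray(patch_b, dtype=float).ravel()
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom < eps:                        # flat patch: treat as uncorrelated
        return 1.0
    rho = float(a @ b) / denom             # Pearson correlation in [-1, 1]
    return 1.0 - rho                       # distance in [0, 2]

# Identical patches have distance 0; perfectly anticorrelated patches have distance 2.
p = np.array([[1.0, 2.0], [3.0, 4.0]])
print(pearson_distance(p, p), pearson_distance(p, -p))
```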

Relevance: 30.00%

Abstract:

A semisupervised support vector machine is presented for the classification of remote sensing images. The method exploits the wealth of unlabeled samples for regularizing the training kernel representation locally by means of cluster kernels. The method learns a suitable kernel directly from the image and thus avoids assuming a priori signal relations by using a predefined kernel structure. Good results are obtained in image classification examples when few labeled samples are available. The method scales almost linearly with the number of unlabeled samples and provides out-of-sample predictions.
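The exact cluster-kernel construction is not given in the abstract. One common variant builds a "bagged" kernel from repeated k-means runs over labeled plus unlabeled samples and blends it with an RBF kernel before training an SVM on the labeled samples only; the sketch below follows that recipe with scikit-learn, and the number of runs, mixing weight and other parameters are arbitrary choices for illustration.

```python
# Sketch: a simple cluster ("bagged") kernel for semi-supervised SVM classification.
# K_bag(i, j) = fraction of k-means runs in which samples i and j share a cluster;
# it is blended with an RBF kernel and used as a precomputed kernel in an SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def bagged_cluster_kernel(X, n_clusters=10, n_runs=20, seed=0):
    n = X.shape[0]
    K = np.zeros((n, n))
    for r in range(n_runs):
        labels = KMeans(n_clusters=n_clusters, n_init=1,
                        random_state=seed + r).fit_predict(X)
        K += (labels[:, None] == labels[None, :]).astype(float)
    return K / n_runs

def train_semisupervised_svm(X_labeled, y_labeled, X_unlabeled,
                             gamma=0.1, mix=0.5, C=10.0):
    X_all = np.vstack([X_labeled, X_unlabeled])
    K_all = mix * rbf_kernel(X_all, gamma=gamma) \
          + (1 - mix) * bagged_cluster_kernel(X_all)
    n_lab = X_labeled.shape[0]
    clf = SVC(kernel="precomputed", C=C).fit(K_all[:n_lab, :n_lab], y_labeled)
    # Out-of-sample prediction uses kernel rows between new and labeled samples.
    return clf.predict(K_all[n_lab:, :n_lab])
```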

Relevance: 30.00%

Abstract:

Fluvial deposits are a challenge for modelling flow in sub-surface reservoirs. Connectivity and continuity of permeable bodies have a major impact on fluid flow in porous media. Contemporary object-based and multipoint statistics methods face a problem of robust representation of connected structures. An alternative approach to modelling petrophysical properties is based on a machine learning algorithm, Support Vector Regression (SVR). Semi-supervised SVR is able to establish spatial connectivity by taking into account prior knowledge on natural similarities. SVR as a learning algorithm is robust to noise and captures dependencies from all available data. Semi-supervised SVR applied to a synthetic fluvial reservoir demonstrated robust results, which are well matched to the flow performance.
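The abstract does not detail how the unlabeled locations enter the regression. One simple semi-supervised flavour is self-training, sketched below with scikit-learn: fit SVR on the well data, pseudo-label the grid points closest to existing wells, and refit. The confidence proxy, parameter values and function names are assumptions of this sketch, not the paper's algorithm.

```python
# Sketch: a self-training flavour of semi-supervised SVR for spatial property modelling.
import numpy as np
from sklearn.svm import SVR

def semi_supervised_svr(X_wells, y_wells, X_grid, n_pseudo=50,
                        C=10.0, gamma=0.5, epsilon=0.05):
    model = SVR(C=C, gamma=gamma, epsilon=epsilon).fit(X_wells, y_wells)
    # Confidence proxy: distance of each grid point to its nearest well.
    d = np.min(np.linalg.norm(X_grid[:, None, :] - X_wells[None, :, :], axis=2),
               axis=1)
    idx = np.argsort(d)[:n_pseudo]                 # most "trusted" grid points
    X_aug = np.vstack([X_wells, X_grid[idx]])
    y_aug = np.concatenate([y_wells, model.predict(X_grid[idx])])
    return SVR(C=C, gamma=gamma, epsilon=epsilon).fit(X_aug, y_aug).predict(X_grid)
```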

Relevance: 30.00%

Abstract:

Gaseous N losses from soil are considerable, resulting mostly from ammonia volatilization linked to agricultural activities such as pasture fertilization. The use of simple and accessible methods to measure such losses is fundamental for evaluating the N cycle in agricultural systems. The purpose of this study was to evaluate methods for quantifying NH3 volatilization from soil surface-fertilized with urea, with minimal influence on the volatilization process. The greenhouse experiment was arranged in a completely randomized design with 13 treatments and five replications, with the following treatments: (1) polyurethane foam (density 20 kg m⁻³) with phosphoric acid solution absorber (foam absorber), installed 1, 5, 10 and 20 cm above the soil surface; (2) paper filter with sulfuric acid solution absorber (paper absorber, 1, 5, 10 and 20 cm above the soil surface); (3) sulfuric acid solution absorber (1, 5 and 10 cm above the soil surface); (4) semi-open static collector; (5) 15N balance (control). The foam absorber placed 1 cm above the soil surface estimated the actual daily rate and accumulated loss of NH3-N and proved efficient in capturing NH3 volatilized from urea-treated soil. The estimates based on acid absorbers 1, 5 and 10 cm above the soil surface and paper absorbers 1 and 5 cm above the soil surface were only realistic for accumulated NH3-N losses. Foam absorbers can therefore be recommended for quantifying accumulated and daily rates of NH3 volatilization losses, similarly to an open static chamber, making calibration equations or correction factors unnecessary.

Relevance: 30.00%

Abstract:

Résumé: Following recent technological advances, digital image archives have grown in quality and quantity at an unprecedented rate. Despite the enormous possibilities they offer, these advances raise new questions about the processing of the masses of acquired data. This question is at the core of this Thesis: the problems of processing digital information at very high spatial and/or spectral resolution are addressed here using statistical learning approaches, namely kernel methods. The Thesis studies image classification problems, that is, the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is placed on the efficiency of the algorithms, as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this Thesis is to remain close to the concrete problems of satellite image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, the work is deliberately transdisciplinary, maintaining a strong link between the two fields in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance while adapting to the particularities of the image. This is made possible by a ranking of the variables (the bands) that is optimized jointly with the base model: in this way, only the variables important for solving the problem are used by the classifier. The lack of labeled information, and the uncertainty about its relevance to the problem, are the source of the next two models, based respectively on active learning and semi-supervised methods: the first improves the quality of a training set through direct interaction between the user and the machine, while the second uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model considers the more theoretical question of the structure between the outputs: the integration of this source of information, never before considered in remote sensing, opens new research challenges.

Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.

Abstract: The technical developments in recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities for the use of digital imagery, they also raise several problems of storage and treatment. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The accent is put on algorithmic efficiency and the simplicity of the approaches proposed, to avoid overly complex models that would not be adopted by users. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features has been proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions, based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs has been considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
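The second model relies on an iterative user–machine labeling loop. The sketch below shows one common instance of such a loop, margin-sampling active learning with an SVM; the query strategy, batch size and the oracle array standing in for the human analyst are illustrative assumptions, not the thesis' exact procedure.

```python
# Sketch: binary margin-sampling active learning; y_oracle plays the role of the
# human analyst who labels the queried pixels at each round.
import numpy as np
from sklearn.svm import SVC

def active_learning(X_pool, y_oracle, X_init, y_init, n_rounds=10, batch=5):
    X_train, y_train = X_init.copy(), y_init.copy()
    remaining = np.arange(X_pool.shape[0])
    for _ in range(n_rounds):
        if remaining.size == 0:
            break
        clf = SVC(kernel="rbf", gamma="scale", C=10.0).fit(X_train, y_train)
        # Query the pool samples closest to the current decision boundary.
        margin = np.abs(clf.decision_function(X_pool[remaining]))
        query = remaining[np.argsort(margin)[:batch]]
        X_train = np.vstack([X_train, X_pool[query]])
        y_train = np.concatenate([y_train, y_oracle[query]])  # "user" labels them
        remaining = np.setdiff1d(remaining, query)
    return SVC(kernel="rbf", gamma="scale", C=10.0).fit(X_train, y_train)
```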

Relevance: 30.00%

Abstract:

We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques.
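The regularizer attached to a hidden (or output) layer is a neighbour-embedding penalty: pull the representations of "neighbouring" samples together and push non-neighbours apart by a margin. The sketch below computes one common margin-based variant of that penalty for a batch of activations; the margin value and the squared-distance hinge are assumptions of this sketch rather than the paper's exact loss, and no training loop is shown.

```python
# Sketch: embedding regularizer on a layer's activations for semi-supervised training.
import numpy as np

def embedding_regularizer(H, neighbors, margin=1.0):
    """
    H         : (n, d) array of layer activations for a batch.
    neighbors : (n, n) 0/1 matrix, 1 where two samples are known or assumed
                neighbours (e.g. k-nearest unlabeled pairs in input space).
    """
    sq_dist = np.sum((H[:, None, :] - H[None, :, :]) ** 2, axis=2)
    pull = neighbors * sq_dist                                   # attract neighbours
    push = (1 - neighbors) * np.maximum(0.0, margin - sq_dist)   # repel non-neighbours
    off_diag = ~np.eye(H.shape[0], dtype=bool)
    return (pull + push)[off_diag].mean()

# Total objective (schematically): supervised_loss + lambda * embedding_regularizer
```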

Relevance: 30.00%

Abstract:

Background: This trial was conducted to evaluate the safety and immunogenicity of two virosome-formulated malaria peptidomimetics derived from Plasmodium falciparum AMA-1 and CSP in malaria semi-immune adults and children. Methods: The design was a prospective, randomized, double-blind, controlled, age de-escalating study with two immunizations. 10 adults and 40 children (aged 5-9 years) living in a malaria-endemic area were immunized with PEV3B or the virosomal influenza vaccine Inflexal® V on days 0 and 90. Results: No serious or severe adverse events (AEs) related to the vaccines were observed. The only local solicited AE reported was pain at the injection site, which affected more children in the Inflexal® V group than in the PEV3B group (p = 0.014). In the PEV3B group, IgG ELISA endpoint titers specific for the AMA-1 and CSP peptide antigens were significantly higher at most time points compared to the Inflexal® V control group. Across all time points after the first immunization, the average ratio of endpoint titers to baseline values in PEV3B subjects ranged from 4 to 15 in adults and from 4 to 66 in children. As an exploratory outcome, we found that the incidence rate of clinical malaria episodes in vaccinated children was half that of the control children between study days 30 and 365 (0.0035 episodes per day at risk for PEV3B vs. 0.0069 for Inflexal® V; RR = 0.50 [95% CI: 0.29-0.88], p = 0.02). Conclusion: These findings provide a strong basis for the further development of multivalent virosomal malaria peptide vaccines.

Relevance: 30.00%

Abstract:

Blowing and drifting of snow is a major concern for transportation efficiency and road safety in regions where these conditions commonly develop. One common way to mitigate snow drift on roadways is to install plastic snow fences. Correct design of snow fences is critical for keeping roads safe and open during winter in the US Midwest and other states affected by large winter snow events, and for keeping the costs of snow removal and road repair to a minimum. Of critical importance for road safety is the protection against snow drifting in regions with narrow rights of way, where standard fences cannot be deployed at the recommended distance from the road. Designing snow fences requires sound engineering judgment and a thorough evaluation of the potential for snow blowing and drifting at the construction site. The evaluation includes site-specific design parameters typically obtained with semi-empirical relations characterizing the local transport conditions. Among the critical parameters involved in fence design and assessment of their post-construction efficiency is the quantification of the snow accumulation at fence sites. The present study proposes a joint experimental and numerical approach to monitor snow deposits around snow fences, quantitatively estimate snow deposits in the field, assess the efficiency of snow fences, and improve their design. Snow deposit profiles were mapped using GPS-based real-time kinematic (RTK) surveys conducted at the monitored field site during and after snow storms. The monitored site allowed testing different snow fence designs under close to identical conditions over four winter seasons. The study also discusses the detailed monitoring system and the analysis of weather forecasts and meteorological conditions at the monitored sites. A main goal of the present study was to assess the performance of lightweight plastic snow fences with a lower porosity than the typical 50% porosity used in standard designs of such fences. The field data collected during the first winter were used to identify the best design for snow fences with a porosity of 50%. Flow fields obtained from numerical simulations showed that the fence design that worked best during the first winter induced the formation of an elongated area of small velocity magnitude close to the ground. This information was used to identify other candidates for the optimum design of fences with a lower porosity. Two of the designs with a fence porosity of 30% that were found to perform well based on results of numerical simulations were tested in the field during the second winter, along with the best performing design for fences with a porosity of 50%. Field data showed that the length of the snow deposit away from the fence was reduced by about 30% for the two proposed lower-porosity (30%) fence designs compared to the best design identified for fences with a porosity of 50%. Moreover, one of the lower-porosity designs tested in the field showed no significant snow deposition within the bottom gap region beneath the fence. Thus, a major outcome of this study is to recommend using plastic snow fences with a porosity of 30%. It is expected that this lower-porosity design will continue to work well for even more severe snow events or for successive snow events occurring during the same winter. The approach advocated in the present study allowed making general recommendations for optimizing the design of lower-porosity plastic snow fences.
This approach can be extended to improve the design of other types of snow fences. Some preliminary work for living snow fences is also discussed. Another major contribution of this study is to propose, develop protocols for, and test a novel technique based on close range photogrammetry (CRP) to quantify the snow deposits trapped by snow fences. As image data can be acquired continuously, the time evolution of the volume of snow retained by a snow fence during a storm or during a whole winter season can, in principle, be obtained. Moreover, CRP is a non-intrusive method that eliminates the need for manual measurements during storms, which are difficult and sometimes dangerous to perform. Presently, there is considerable empiricism in the design of snow fences, due to a lack of data on fence storage capacity, on how snow deposits change with fence design and snowstorm characteristics, and on the main parameters used by state DOTs to design snow fences at a given site. The availability of such information from CRP measurements should provide critical data for the evaluation of the performance of a given snow fence design tested by the IDOT. As part of the present study, the novel CRP method is tested at several sites. The present study also discusses some attempts and preliminary work to determine the snow relocation coefficient, which is one of the main variables that IDOT engineers have to estimate when using the standard snow fence design software (Snow Drift Profiler, Tabler, 2006). Our analysis showed that standard empirical formulas did not produce reasonable values when applied at the Iowa test sites monitored as part of the present study and that simple methods to estimate this variable are not reliable. The present study makes recommendations for the development of a new methodology based on Large Scale Particle Image Velocimetry that can directly measure the snow drift fluxes and the amount of snow relocated by the fence.
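Both the RTK profile surveys and the CRP surface models ultimately feed the same elementary calculation: integrating the depth difference between a snow-covered surface and the bare-ground surface over a survey grid. The sketch below shows that step only; the grid spacing, array names and the rectangle-rule integration are illustrative assumptions, not the report's processing chain.

```python
# Sketch: estimating trapped-snow volume from two gridded elevation surfaces.
import numpy as np

def snow_volume(ground_z, snow_z, dx, dy):
    """ground_z, snow_z: 2-D elevation grids (m) on the same dx-by-dy grid (m)."""
    depth = np.clip(snow_z - ground_z, 0.0, None)   # negative residuals treated as 0
    return depth.sum() * dx * dy                    # m^3, simple rectangle rule

def storage_per_metre_of_fence(ground_z, snow_z, dx, dy, fence_length):
    """Snow storage normalised by fence length (m^3 per m of fence)."""
    return snow_volume(ground_z, snow_z, dx, dy) / fence_length
```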

Relevance: 30.00%

Abstract:

In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions modelled in the context of the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating the degree of dissociation of the latex functional groups vs. pH curves at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to keep the electroneutrality of the system is required. Here, two approaches are used with the choice depending on the ion selected to maintain electroneutrality: counterion or coion procedures. We compare and discuss the difference between the procedures. The simulations also provided a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL was revealed by plotting the counterion density profiles around charged and neutral surface functional groups. © 2011 American Institute of Physics.
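The paper's semi-grand canonical scheme couples titration moves to explicit counterion (or coion) exchange in the primitive model. The sketch below shows only the core Metropolis criterion for a single protonation/deprotonation trial at fixed pH, the standard constant-pH acceptance rule, with the electrostatic energy change and the electroneutrality bookkeeping left as stubs; it conveys the structure of such a move and is not the authors' algorithm.

```python
# Sketch: one constant-pH titration trial for an ionizable surface site, using the
# standard criterion  min(1, exp(-beta*dU ± ln(10)*(pH - pKa))).  The energy change
# dU and the counterion insertion/deletion that keeps the box electroneutral are
# stubs; the semi-grand canonical scheme adds terms for the explicit ion exchange.
import numpy as np

LN10 = np.log(10.0)

def titration_move(site_charged, dU_over_kT, pH, pKa, rng):
    """
    site_charged : True if the site is currently deprotonated (charged).
    dU_over_kT   : electrostatic energy change of the trial state, in units of kT
                   (to be computed from the primitive-model configuration).
    Returns the new protonation state of the site.
    """
    if site_charged:   # trial: protonate (A- + H+ -> HA), charge removed
        ln_acc = -dU_over_kT - LN10 * (pH - pKa)
    else:              # trial: deprotonate (HA -> A- + H+), charge created
        ln_acc = -dU_over_kT + LN10 * (pH - pKa)
    if np.log(rng.random()) < ln_acc:
        # Accepted: the caller must also insert/delete a counterion (or swap a coion)
        # so that the simulation box stays electroneutral.
        return not site_charged
    return site_charged

# Example of a single trial with a stubbed energy change:
rng = np.random.default_rng(1)
print(titration_move(site_charged=False, dU_over_kT=0.7, pH=6.0, pKa=4.7, rng=rng))
```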

Relevance: 30.00%

Abstract:

BACKGROUND: Endurance athletes are advised to optimize nutrition prior to races. Little is known about athletes' actual beliefs, knowledge and nutritional behaviour. We monitored the nutritional behaviour of amateur ski-mountaineering athletes during the 4 days prior to a major competition to compare it with official recommendations and with the athletes' beliefs. METHODS: Participants in the two routes of the 'Patrouille des Glaciers' were recruited (A, 26 km, ascent 1881 m, descent 2341 m, max altitude 3160 m; Z, 53 km, ascent 3994 m, descent 4090 m, max altitude 3650 m). Dietary intake diaries of 40 athletes (21 A, 19 Z) were analysed for energy, carbohydrate, fat, protein and liquid; ten were interviewed about their pre-race nutritional beliefs and behaviour. RESULTS: Despite the belief that pre-race carbohydrate, energy and fluid intake should be increased, energy consumption was 2416 ± 696 (mean ± SD) kcal·day⁻¹, 83 ± 17 % of the recommended intake; carbohydrate intake was only 46 ± 13 % of the minimum recommended (10 g·kg⁻¹·day⁻¹), and fluid intake only 2.7 ± 1.0 l·day⁻¹. CONCLUSIONS: Our sample of endurance athletes did not comply with pre-race nutritional recommendations despite elementary knowledge and the belief that they were compliant. In these athletes a clear and reflective nutritional strategy was lacking. This suggests a potential for improving knowledge and compliance with recommendations. Alternatively, some recommendations may be unrealistic.

Relevance: 30.00%

Abstract:

Open-chain hydroxamic acids (Hx) can exist as Z and E diastereomers of two tautomers, the hydroxamic acid and the hydroximic acid. The conformational stability of the formohydroxamic acid isomers evaluated by PM3 agreed better with ab initio results from the literature than did AM1 results. Structural data for the cyclic Hx 2,4-dihydroxy-7-methoxy-2H-1,4-benzoxazin-3(4H)-one (DIMBOA) obtained by both semiempirical methods also compared well with ab initio results. pKa data from the literature for derivatives of the aldolic isomer of DIMBOA were compared to the stability of the anions resulting from the loss of protons from their phenol and hydroxamic acid groups, determined as the difference in heat of formation between the anionic and neutral forms calculated by the AM1 and PM3 methods. Good correlations between theoretical and experimental data were obtained for both semiempirical methods.