21 results for CFD, computer modelling, DEM, sugar processing
at Université de Lausanne, Switzerland
Abstract:
We propose a novel compressed sensing technique to accelerate the magnetic resonance imaging (MRI) acquisition process. The method, coined spread spectrum MRI or simply s²MRI, consists of premodulating the signal of interest by a linear chirp before random k-space under-sampling, and then reconstructing the signal with nonlinear algorithms that promote sparsity. The effectiveness of the procedure is theoretically underpinned by the optimization of the coherence between the sparsity and sensing bases. The proposed technique is thoroughly studied by means of numerical simulations, as well as phantom and in vivo experiments on a 7 T scanner. Our results suggest that s²MRI performs better than state-of-the-art variable density k-space under-sampling approaches.
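To make the mechanism concrete, here is a minimal numerical sketch of the spread spectrum idea: a quadratic-phase (chirp) premodulation, random Fourier under-sampling, and an iterative soft-thresholding (ISTA) reconstruction. The chirp rate, sampling ratio and ISTA parameters are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Test signal, sparse in the canonical basis for simplicity.
n = 256
x = np.zeros(n)
x[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)

# Linear chirp premodulation: the quadratic phase spreads the signal's
# spectrum, lowering coherence between sparsity and sensing bases.
t = np.arange(n)
chirp = np.exp(1j * np.pi * 0.1 * t**2 / n)  # chirp rate is illustrative

# Random k-space under-sampling (~25% of coefficients kept).
mask = rng.random(n) < 0.25
A = lambda u: mask * np.fft.fft(chirp * u, norm="ortho")
At = lambda v: np.conj(chirp) * np.fft.ifft(mask * v, norm="ortho")

y = A(x)  # simulated measurements

# Nonlinear reconstruction promoting sparsity (ISTA, soft thresholding).
z, lam = np.zeros(n, dtype=complex), 0.01
for _ in range(300):
    r = z - At(A(z) - y)                  # gradient step (operator norm <= 1)
    mag = np.maximum(np.abs(r), 1e-12)
    z = np.where(mag > lam, (1 - lam / mag) * r, 0)

print("relative error:", np.linalg.norm(z.real - x) / np.linalg.norm(x))
```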
Abstract:
PURPOSE: To compare volume-targeted and whole-heart coronary magnetic resonance angiography (MRA) after the administration of an intravascular contrast agent. MATERIALS AND METHODS: Six healthy adult subjects underwent a navigator-gated and -corrected (NAV) free-breathing volume-targeted cardiac-triggered inversion recovery (IR) 3D steady-state free precession (SSFP) coronary MRA sequence (t-CMRA) (spatial resolution = 1 × 1 × 3 mm³) and high spatial resolution IR 3D SSFP whole-heart coronary MRA (WH-CMRA) (spatial resolution = 1 × 1 × 2 mm³) after the administration of the intravascular contrast agent B-22956. Subjective and objective image quality parameters, including maximal visible vessel length, vessel sharpness, and visibility of coronary side branches, were evaluated for both t-CMRA and WH-CMRA. RESULTS: No significant differences (P = NS) in image quality were observed between contrast-enhanced t-CMRA and WH-CMRA. However, using an intravascular contrast agent, significantly longer vessel segments were measured on WH-CMRA than on t-CMRA (right coronary artery [RCA] 13.5 ± 0.7 cm vs. 12.5 ± 0.2 cm, P < 0.05; left circumflex coronary artery [LCX] 11.9 ± 2.2 cm vs. 6.9 ± 2.4 cm, P < 0.05). Significantly more side branches (13.3 ± 1.2 vs. 8.7 ± 1.2; P < 0.05) were visible for the left anterior descending coronary artery (LAD) on WH-CMRA than on t-CMRA. Scanning time and navigator efficiency were similar for both techniques (t-CMRA: 6.05 min, 49% vs. WH-CMRA: 5.51 min, 54%; both P = NS). CONCLUSION: Both WH-CMRA and t-CMRA using SSFP are useful techniques for coronary MRA after the injection of an intravascular blood-pool agent. The vessel conspicuity of high spatial resolution WH-CMRA is not inferior to that of t-CMRA, while visible vessel length and the number of visible smaller-diameter vessels and side branches are improved.
Abstract:
Conservation biology is commonly associated with the protection of small, endangered populations. Nevertheless, large or potentially expanding populations may also require management to prevent the negative effects of overpopulation. As there are both qualitative and quantitative differences between protecting small populations and controlling large ones, distinct methods and models are needed. The aim of this work was to develop predictive models of large-population dynamics, together with computer tools to estimate the parameters of these models and to test management scenarios. The Alpine ibex (Capra ibex ibex), which has expanded dramatically since its reintroduction in Switzerland at the beginning of the 20th century, served as the example species. This task was achieved in three steps. First, a local population dynamics model was developed specifically for the ibex: the underlying age- and sex-structured model is based on a Leslie matrix, extended with density-dependence, environmental stochasticity and regulation culling. This model was implemented in a management-support software package, named SIM-Ibex, that provides census data maintenance, automated parameter estimation, and the tuning and simulation of culling strategies. However, population dynamics are driven not only by demographic factors but also by dispersal and the colonisation of new areas. Habitat suitability and dispersal obstacles therefore had to be modelled as well. To this end, a software package named Biomapper was developed. Its central module is based on the Ecological Niche Factor Analysis (ENFA), whose principle is to compute marginality and specialisation factors of the ecological niche from a set of environmental predictors and species presence data. All Biomapper modules are linked to Geographic Information Systems (GIS); they cover data importation, predictor preparation, ENFA and habitat suitability map computation, and validation and further processing of the results; one module also maps dispersal barriers and corridors. The application domain of ENFA was then explored by means of a simulated (virtual) species distribution. Compared with a commonly used habitat suitability method, the Generalised Linear Model (GLM), ENFA proved particularly well suited to cryptic or expanding species. Demographic and landscape information were finally merged into a global model. To cope with the demands of landscape realism and of large-population modelling, a cellular automaton approach was chosen: the study area is modelled as a lattice of hexagonal cells, each characterised by a few fixed properties (a carrying capacity and six impermeability rates quantifying exchanges between adjacent cells) and one variable, population density. The latter varies according to local reproduction and survival and to dispersal, under the influence of density-dependence and stochasticity. A software tool named HexaSpace was developed to fulfil two functions: (1) calibrating the automaton from local dynamics models (e.g. computed by SIM-Ibex) and a habitat suitability map (e.g. computed by Biomapper); and (2) running simulations. It allows the study of the spread of an invading species across a complex landscape composed of areas of varying suitability and dispersal obstacles. This model was applied to the history of the ibex reintroduction in the Bernese Alps (Switzerland). SIM-Ibex is currently used by wildlife managers and government inspectors to prepare and verify culling plans. Biomapper has been applied to several species (both plants and animals) around the world. Likewise, although HexaSpace was originally designed for terrestrial animal species, it could easily be extended to plant propagation or the dispersal of flying animals. Because these software tools were designed to build a complex, realistic model from raw data, and because they offer an intuitive user interface, they lend themselves to many applications in conservation biology. Moreover, these approaches can also address theoretical questions in population and landscape ecology.
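As a rough illustration of the kind of local dynamics model underlying SIM-Ibex, the following sketch projects an age-structured population with a Leslie matrix extended by density-dependence, environmental stochasticity and culling. All class definitions and parameter values are placeholders, not calibrated ibex parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-class model (juveniles, subadults, adults); all values are
# illustrative placeholders, not calibrated ibex parameters.
fecundity = np.array([0.0, 0.2, 0.6])   # offspring per individual and year
K = 500.0                               # carrying capacity

def project(n, harvest=0.0):
    """One yearly step: Leslie projection, then density-dependence,
    environmental stochasticity and proportional regulation culling."""
    L = np.array([fecundity,
                  [0.6, 0.0, 0.0],      # juvenile -> subadult survival
                  [0.0, 0.8, 0.9]])     # subadult -> adult, adult survival
    n = L @ n
    n *= 1.0 / (1.0 + n.sum() / K)      # simple Beverton-Holt-like term
    n *= max(rng.normal(1.0, 0.1), 0.0) # environmental stochasticity
    return n * (1.0 - harvest)          # cull a fixed fraction

pop = np.array([50.0, 30.0, 40.0])      # initial class sizes
for year in range(25):
    pop = project(pop, harvest=0.05)
print(f"total population after 25 years: {pop.sum():.0f}")
```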
Abstract:
1. Statistical modelling is often used to relate sparse biological survey data to remotely derived environmental predictors, thereby providing a basis for predictively mapping biodiversity across an entire region of interest. The most popular strategy for such modelling has been to model distributions of individual species one at a time. Spatial modelling of biodiversity at the community level may, however, confer significant benefits for applications involving very large numbers of species, particularly if many of these species are recorded infrequently. 2. Community-level modelling combines data from multiple species and produces information on spatial pattern in the distribution of biodiversity at a collective community level instead of, or in addition to, the level of individual species. Spatial outputs from community-level modelling include predictive mapping of community types (groups of locations with similar species composition), species groups (groups of species with similar distributions), axes or gradients of compositional variation, levels of compositional dissimilarity between pairs of locations, and various macro-ecological properties (e.g. species richness). 3. Three broad modelling strategies can be used to generate these outputs: (i) 'assemble first, predict later', in which biological survey data are first classified, ordinated or aggregated to produce community-level entities or attributes that are then modelled in relation to environmental predictors; (ii) 'predict first, assemble later', in which individual species are modelled one at a time as a function of environmental variables, to produce a stack of species distribution maps that is then subjected to classification, ordination or aggregation; and (iii) 'assemble and predict together', in which all species are modelled simultaneously, within a single integrated modelling process. These strategies each have particular strengths and weaknesses, depending on the intended purpose of modelling and the type, quality and quantity of data involved. 4. Synthesis and applications. The potential benefits of modelling large multispecies data sets using community-level, as opposed to species-level, approaches include faster processing, increased power to detect shared patterns of environmental response across rarely recorded species, and enhanced capacity to synthesize complex data into a form more readily interpretable by scientists and decision-makers. Community-level modelling therefore deserves to be considered more often, and more widely, as a potential alternative or supplement to modelling individual species.
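The 'predict first, assemble later' strategy can be illustrated with a short sketch: fit one distribution model per species, stack the predictions, then classify sites into community types. The toy data, the logistic-regression models and the k-means classification are illustrative choices, not a prescription from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Toy data: 500 sites x 4 environmental predictors, 20 species (0/1).
X = rng.standard_normal((500, 4))
Y = (rng.random((500, 20)) < 0.3).astype(int)

# 'Predict first': fit one distribution model per species and stack
# the per-site occurrence probabilities...
stack = np.column_stack([
    LogisticRegression(max_iter=1000).fit(X, Y[:, s]).predict_proba(X)[:, 1]
    for s in range(Y.shape[1])
])

# ...'assemble later': classify sites on the stacked predictions to map
# community types (groups of sites with similar predicted composition).
community_type = KMeans(n_clusters=5, n_init=10).fit_predict(stack)
print(community_type[:20])
```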
Abstract:
Computer simulations on a new model of the α1b-adrenergic receptor (α1b-AR), based on the crystal structure of rhodopsin, have been combined with experimental mutagenesis to investigate the role of residues in the cytosolic half of helix 6 in receptor activation. Our results support the hypothesis that a salt bridge between the highly conserved arginine (R143(3.50)) of the E/DRY motif of helix 3 and a conserved glutamate (E289(6.30)) on helix 6 constrains the α1b-AR in the inactive state. In fact, mutations of E289(6.30) that weakened the R143(3.50)-E289(6.30) interaction constitutively activated the receptor. The functional effect of mutating other amino acids on helix 6 (F286(6.27), A292(6.33), L296(6.37), V299(6.40), V300(6.41), and F303(6.44)) correlates with the extent of their interaction with helix 3, and in particular with R143(3.50) of the E/DRY sequence.
Abstract:
Human electrophysiological studies support a model whereby sensitivity to so-called illusory contour stimuli is first seen within the lateral occipital complex. A challenge to this model posits that the lateral occipital complex is a general site for crude region-based segmentation, based on findings of equivalent hemodynamic activations in the lateral occipital complex to illusory contour and so-called salient region stimuli, a stimulus class that lacks the classic bounding contours of illusory contours. Using high-density electrical mapping of visual evoked potentials, we show that early lateral occipital cortex activity is substantially stronger to illusory contour than to salient region stimuli, whereas later lateral occipital complex activity is stronger to salient region than to illusory contour stimuli. Our results suggest that equivalent hemodynamic activity to illusory contour and salient region stimuli probably reflects temporally integrated responses, a result of the poor temporal resolution of hemodynamic imaging. The temporal precision of visual evoked potentials is critical for establishing viable models of completion processes and visual scene analysis. We propose that crude spatial segmentation analyses, which are insensitive to illusory contours, occur first within dorsal visual regions, not the lateral occipital complex, and that initial illusory contour sensitivity is a function of the lateral occipital complex.
Abstract:
Since 1986, several near-vertical seismic reflection profiles have been recorded in Switzerland in order to map the deep geologic structure of the Alps. One objective of this endeavour has been to determine the geometries of the autochthonous basement and of the external crystalline massifs, important elements for understanding the geodynamics of the Alpine orogeny. The PNR-20 seismic line W1, located in the Rawil depression of the western Swiss Alps, provides important information on this subject. It extends northward from the "Penninic front" across the Helvetic nappes to the Prealps. The crystalline massifs do not outcrop along this profile. Thus, the interpretation of "near-basement" reflections has to be constrained by down-dip projections of surface geology, "true amplitude" processing, rock physical property studies and modelling. 3-D seismic modelling has been used to evaluate the seismic response of two alternative down-dip projection models. To constrain the interpretation in the southern part of the profile, "true amplitude" processing has provided information on the strength of the reflections. Density and velocity measurements on core samples collected up-dip from the region of the seismic line have been used to evaluate reflection coefficients of typical lithologic boundaries in the region. The cover-basement contact itself is not a source of strong reflections, but strong reflections arise from within the overlying metasedimentary cover sequence, allowing the geometry of the top of the basement to be determined on the basis of "near-basement" reflections. The front of the external crystalline massifs is shown to extend beneath the Prealps, about 6 km north of the expected position. A 2-D model whose seismic response shows reflection patterns very similar to those observed is proposed.
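For context, the reflection coefficient of a lithologic boundary at normal incidence follows from the acoustic impedances (density × velocity) on either side. The sketch below uses this standard relation with illustrative values, not the measured core-sample properties.

```python
def reflection_coefficient(rho1, v1, rho2, v2):
    """Normal-incidence reflection coefficient from acoustic impedances:
    R = (z2 - z1) / (z2 + z1), with z = density * velocity."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# Illustrative values only (kg/m^3, m/s), not the measured samples:
# metasedimentary cover over crystalline basement, similar impedances.
print(reflection_coefficient(2700, 5800, 2750, 6000))  # ~0.03, i.e. weak
```

A coefficient this small is consistent with the observation that the cover-basement contact itself is not a source of strong reflections.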
Abstract:
Background: Excessive exposure to solar ultraviolet (UV) light is the main cause of most skin cancers in humans. Factors such as the increase of solar irradiation at ground level (anthropic pollution), the rise in standard of living (vacations in sunny areas), and, mostly, the development of outdoor activities have contributed to increased exposure. Thus, unsurprisingly, the incidence of skin cancers has increased over the last decades more than that of any other cancer. Melanoma is the most lethal cutaneous cancer, while cutaneous carcinomas are the most common cancer type worldwide. UV exposure depends on environmental as well as individual factors related to activity. The influence of individual factors on exposure among building workers was investigated in a previous study. Posture and orientation were found to account for at least 38% of the total variance of relative individual exposure. A high variance of short-term exposure was observed between different body locations, indicating the occurrence of intense, subacute exposures. It was also found that effective short-term exposure ranged between 0 and 200% of ambient irradiation, suggesting that ambient irradiation is a poor predictor of effective exposure. Various dosimetric techniques make it possible to assess individual effective exposure, but dosimetric measurements remain tedious and tend to be situation-specific. In fact, individual factors (exposure time, body posture and orientation in the sun) often limit the extrapolation of exposure results to similar activities conducted in other conditions. Objective: The research presented in this paper aims to develop and validate a predictive tool of effective individual exposure to solar UV. Methods: Existing computer graphics techniques (3D rendering) were adapted to reflect solar exposure conditions and calculate short-term anatomical doses. A numerical model, represented as a 3D triangular mesh, is used to represent the exposed body. The amount of solar energy received by each triangle is calculated, taking into account irradiation intensity, incidence angle and possible shadowing from other body parts. The model takes into account the three components of solar irradiation (direct, diffuse and albedo) as well as the orientation and posture of the body. Field measurements were carried out using a forensic mannequin at the Payerne MeteoSwiss station. Short-term dosimetric measurements were performed at 7 anatomical locations for 5 body postures. Field results were compared to the prediction obtained from the numerical model. Results: The best match between prediction and measurements was obtained for upper body parts such as the shoulders (modelled/measured ratio: mean = 1.21, SD = 0.34) and neck (mean = 0.81, SD = 0.32). Small curved body parts such as the forehead (mean = 6.48, SD = 9.61) showed poorer agreement. The prediction is less accurate for complex postures such as kneeling (mean = 4.13, SD = 8.38) compared to standing (mean = 0.85, SD = 0.48). The values obtained from the dosimeters and the ones computed from the model are globally consistent. Conclusion: Although further development and validation are required, these results suggest that effective exposure could be predicted for a given activity (work or leisure) in various ambient irradiation conditions. Using a generic modelling approach is of high interest in terms of implementation costs as well as predictive and retrospective capabilities.
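A simplified sketch of the per-triangle dose calculation described above: each triangle receives a direct component weighted by the cosine of the incidence angle (zero if shadowed), plus diffuse-sky and ground-albedo components. The isotropic-sky view factors and all numbers are assumptions for illustration, not the authors' model.

```python
import numpy as np

def triangle_dose(normal, area, sun_dir, E_direct, E_diffuse, E_albedo,
                  shadowed):
    """UV irradiance intercepted by one mesh triangle (simplified; a real
    renderer would resolve 'shadowed' by ray casting against the body)."""
    n = normal / np.linalg.norm(normal)
    cos_i = max(np.dot(n, sun_dir), 0.0)        # incidence-angle weighting
    direct = 0.0 if shadowed else E_direct * cos_i
    diffuse = E_diffuse * (1 + n[2]) / 2        # isotropic sky view factor
    ground = E_albedo * (1 - n[2]) / 2          # ground-reflected component
    return (direct + diffuse + ground) * area

# One upward-facing 1 cm^2 triangle, sun at 45 degrees elevation
# (illustrative irradiance values in W/m^2).
sun = np.array([0.0, np.cos(np.pi / 4), np.sin(np.pi / 4)])
print(triangle_dose(np.array([0.0, 0.0, 1.0]), 1e-4, sun,
                    250.0, 100.0, 20.0, shadowed=False))
```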
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter-dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at www.flow-r.org), and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and avoids over-channelization, and so produces more realistic extents. The choice of datasets and algorithms is open to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is strictly needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained on the basis of lower quality DEMs with 25 m resolution.
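For orientation, the classic Holmgren direction algorithm distributes flow to each lower neighbour in proportion to the downhill slope raised to an exponent; Flow-R's improved version adds refinements not reproduced here. A minimal sketch of the classic weighting:

```python
import numpy as np

def holmgren_weights(z_center, z_neighbors, distances, x=4.0):
    """Fraction of flow sent to each neighbour: proportional to
    (downhill slope)**x, zero for neighbours lying uphill."""
    slopes = (z_center - np.asarray(z_neighbors)) / np.asarray(distances)
    w = np.maximum(slopes, 0.0) ** x
    return w / w.sum() if w.sum() > 0 else w

# Eight neighbours of a 10 m DEM cell (illustrative elevations in m);
# diagonal neighbours are sqrt(2) * 10 m away.
dist = np.array([10.0, 14.14] * 4)
z_nb = [99.0, 98.0, 100.0, 101.0, 97.0, 99.5, 100.5, 96.0]
print(holmgren_weights(100.0, z_nb, dist))
```

A larger exponent x concentrates the flow along the steepest direction; smaller values spread it, which is the lever such algorithms use to balance channelization against lateral spreading.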
Abstract:
Cobalt-labelled motoneuron dendrites of the frog spinal cord at the level of the second spinal nerve were photographed in the electron microscope from long series of ultrathin sections. Three-dimensional computer reconstructions of 120 dendrite segments were analysed. The samples were taken from two locations: proximal to the cell body and distal, as defined in a transverse plane of the spinal cord. The dendrites showed highly irregular outlines with many 1-2 µm long 'thorns' (on average 8.5 thorns per 100 µm² of dendritic area). Taken together, the reconstructed dendrite segments from the proximal sites had a total length of about 250 µm; those from the distal locations, 180 µm. On all segments together there were 699 synapses. Nine percent of the synapses were on thorns, and many more close to their base on the dendritic shaft. The synapses were classified into four groups. One third of the synapses were asymmetric with spherical vesicles; one half were symmetric with spherical vesicles; and one tenth were symmetric with flattened vesicles. A fourth, small class of asymmetric synapses had dense-core vesicles. The area of the active zones was large for the asymmetric synapses (median value 0.20 µm²) and small for the symmetric ones (median value 0.10 µm²), and the difference was significant. On average, the areas of the active zones of the synapses on thin dendrites were larger than those of synapses on large-calibre dendrites. About every 4 µm² of dendritic area received one contact. There was a significant difference between the areas of the active zones of the synapses at the two locations. Moreover, the number per unit dendritic length was correlated with dendrite calibre. On average, the active zones covered more than 4% of the dendritic area; this value for thin dendrites was about twice as large as that for large-calibre dendrites. We suggest that the larger active zones and the larger synaptic coverage of the thin dendrites compensate for the longer electrotonic distance of these synapses from the soma.
Abstract:
BACKGROUND: Qualitative frameworks, especially those based on the logical discrete formalism, are increasingly used to model regulatory and signalling networks. A major advantage of these frameworks is that they do not require precise quantitative data, and that they are well-suited for studies of large networks. While numerous groups have developed specific computational tools that provide original methods to analyse qualitative models, a standard format to exchange qualitative models has been missing. RESULTS: We present the Systems Biology Markup Language (SBML) Qualitative Models Package ("qual"), an extension of the SBML Level 3 standard designed for computer representation of qualitative models of biological networks. We demonstrate the interoperability of models via SBML qual through the analysis of a specific signalling network by three independent software tools. Furthermore, the collective effort to define the SBML qual format paved the way for the development of LogicalModel, an open-source model library, which will facilitate the adoption of the format as well as the collaborative development of algorithms to analyse qualitative models. CONCLUSIONS: SBML qual allows the exchange of qualitative models among a number of complementary software tools. SBML qual has the potential to promote collaborative work on the development of novel computational approaches, as well as on the specification and the analysis of comprehensive qualitative models of regulatory and signalling networks.
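For readers unfamiliar with the logical discrete formalism that SBML qual encodes, the following toy Boolean regulatory network illustrates the idea (a Python stand-in for illustration; it is not the SBML XML format itself, and the regulators are invented):

```python
# Toy Boolean regulatory network: the kind of discrete, parameter-free
# model that SBML qual serialises for exchange between tools.
rules = {
    "A": lambda s: not s["C"],          # C represses A
    "B": lambda s: s["A"],              # A activates B
    "C": lambda s: s["A"] and s["B"],   # A and B jointly activate C
}

def step(state):
    """Synchronous update: all components recomputed simultaneously."""
    return {gene: rule(state) for gene, rule in rules.items()}

state = {"A": True, "B": False, "C": False}
for _ in range(6):                      # follow the trajectory
    print(state)
    state = step(state)
```

No rate constants or concentrations appear anywhere; the dynamics are fully determined by the logical rules, which is why such models suit large networks with little quantitative data.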
Abstract:
Introduction. Development of the fetal brain surface with concomitant gyrification is one of the major maturational processes of the human brain. First delineated by postmortem studies or by ultrasound, MRI has recently become a powerful tool for studying in vivo the structural correlates of brain maturation. However, the quantitative measurement of fetal brain development is a major challenge because of the movement of the fetus inside the amniotic cavity, the poor spatial resolution, the partial volume effect and the changing appearance of the developing brain. Today, extensive efforts are made to deal with the "post-acquisition" reconstruction of high-resolution 3D fetal volumes based on several acquisitions with lower resolution (Rousseau, F., 2006; Jiang, S., 2007). We here propose a framework devoted to the segmentation of the basal ganglia, the gray-white tissue segmentation, and in turn the 3D cortical reconstruction of the fetal brain.

Method. Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single shot fast spin echo (ssFSE) sequences in fetuses aged from 29 to 32 gestational weeks (slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). For each fetus, 6 axial volumes shifted by 1 mm were acquired (about 1 min per volume). First, each volume is manually segmented to extract the fetal brain from surrounding fetal and maternal tissues. Intensity inhomogeneity correction and linear intensity normalization are then performed. A high spatial resolution image with an isotropic voxel size of 1.09 mm is created for each fetus, as previously published by others (Rousseau, F., 2006). B-splines are used for the scattered data interpolation (Lee, 1997). Then, basal ganglia segmentation is performed on this super-reconstructed volume using an active contour framework with a Level Set implementation (Bach Cuadra, M., 2010). Once the basal ganglia are removed from the image, brain tissue segmentation is performed (Bach Cuadra, M., 2009). The resulting white matter image is then binarized and given as input to the Freesurfer software (http://surfer.nmr.mgh.harvard.edu/) to provide accurate three-dimensional reconstructions of the fetal brain.

Results. High-resolution images of the fetal brain, as obtained from the low-resolution acquired MRI, are presented for 4 subjects ranging in age from 29 to 32 GA. An example is depicted in Figure 1. The accuracy of the automated basal ganglia segmentation is compared with manual segmentation using the Dice similarity index (DSI), with values above 0.7 considered to indicate very good agreement. In our sample we observed DSI values between 0.785 and 0.856. We further show the results of gray-white matter segmentation overlaid on the high-resolution gray-scale images. The results are visually checked for accuracy using the same principles as commonly accepted in adult neuroimaging. Preliminary 3D cortical reconstructions of the fetal brain are shown in Figure 2.

Conclusion. We hereby present a complete pipeline for the automated extraction of an accurate three-dimensional cortical surface of the fetal brain. These results are preliminary but promising, with the ultimate goal of providing a "movie" of normal gyral development. In turn, a precise knowledge of normal fetal brain development will allow the quantification of subtle, early, but clinically relevant deviations. Moreover, a precise understanding of the gyral development process may help to build hypotheses to understand the pathogenesis of several neurodevelopmental conditions in which gyrification has been shown to be altered (e.g. schizophrenia, autism…).

References. Rousseau, F. (2006), 'Registration-Based Approach for Reconstruction of High-Resolution In Utero Fetal MR Brain Images', IEEE Transactions on Medical Imaging, vol. 13, no. 9, pp. 1072-1081. Jiang, S. (2007), 'MRI of Moving Subjects Using Multislice Snapshot Images With Volume Reconstruction (SVR): Application to Fetal, Neonatal, and Adult Brain Studies', IEEE Transactions on Medical Imaging, vol. 26, no. 7, pp. 967-980. Lee, S. (1997), 'Scattered Data Interpolation with Multilevel B-splines', IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 3, pp. 228-244. Bach Cuadra, M. (2010), 'Central and Cortical Gray Matter Segmentation of Magnetic Resonance Images of the Fetal Brain', ISMRM Conference. Bach Cuadra, M. (2009), 'Brain Tissue Segmentation of Fetal MR Images', MICCAI.
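The Dice similarity index used above is straightforward to compute; a minimal sketch on toy masks (illustrative, not the fetal data):

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity index: DSI = 2 |A ∩ B| / (|A| + |B|); values
    above ~0.7 are usually read as very good agreement."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two 5x5 square masks shifted by one voxel (toy example).
auto = np.zeros((10, 10), bool)
auto[2:7, 2:7] = True
manual = np.zeros((10, 10), bool)
manual[3:8, 3:8] = True
print(f"DSI = {dice(auto, manual):.3f}")  # 0.640 for these toy masks
```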
Abstract:
Remote sensing image processing is nowadays a mature research area. The techniques developed in the field allow many real-life applications with great societal value. For instance, urban monitoring, fire detection or flood prediction can have a great impact on economic and environmental issues. To attain such objectives, remote sensing has turned into a multidisciplinary field of science that embraces physics, signal theory, computer science, electronics, and communications. From a machine learning and signal/image processing point of view, all these applications are tackled under specific formalisms, such as classification and clustering, regression and function approximation, image coding, restoration and enhancement, source unmixing, data fusion, and feature selection and extraction. This paper serves as a survey of methods and applications, and reviews the latest methodological advances in remote sensing image processing.
Abstract:
1. The ecological niche is a fundamental biological concept. Modelling species' niches is central to numerous ecological applications, including predicting species invasions, identifying reservoirs for disease, nature reserve design and forecasting the effects of anthropogenic and natural climate change on species' ranges. 2. A computational analogue of Hutchinson's ecological niche concept (the multidimensional hyperspace of species' environmental requirements) is the support of the distribution of environments in which the species persist. Recently developed machine-learning algorithms can estimate the support of such high-dimensional distributions. We show how support vector machines can be used to map ecological niches using only observations of species presence to train distribution models for 106 species of woody plants and trees in a montane environment using up to nine environmental covariates. 3. We compared the accuracy of three methods that differ in their approaches to reducing model complexity. We tested models with independent observations of both species presence and species absence. We found that the simplest procedure, which uses all available variables and no pre-processing to reduce correlation, was best overall. Ecological niche models based on support vector machines are theoretically superior to models that rely on simulating pseudo-absence data and are comparable in empirical tests. 4. Synthesis and applications. Accurate species distribution models are crucial for effective environmental planning, management and conservation, and for unravelling the role of the environment in human health and welfare. Models based on distribution estimation rather than classification overcome theoretical and practical obstacles that pervade species distribution modelling. In particular, ecological niche models based on machine-learning algorithms for estimating the support of a statistical distribution provide a promising new approach to identifying species' potential distributions and to project changes in these distributions as a result of climate change, land use and landscape alteration.
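A minimal sketch of the approach: a one-class support vector machine estimates the support of the distribution of environments at presence sites, so candidate sites can be classified as inside or outside the niche envelope. The covariates, parameter choices and data below are invented for illustration, not those of the study.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)

# Presence-only training data: two toy environmental covariates
# (e.g. mean temperature, annual precipitation) at observed sites.
presence_env = rng.normal([12.0, 800.0], [2.0, 150.0], size=(200, 2))

# Standardise, then estimate the support of the environmental
# distribution with a one-class SVM (nu bounds the training outliers).
scaler = StandardScaler().fit(presence_env)
model = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
model.fit(scaler.transform(presence_env))

# Predicted niche membership: +1 inside the envelope, -1 outside.
candidates = np.array([[12.5, 820.0], [25.0, 100.0]])
print(model.predict(scaler.transform(candidates)))  # e.g. [ 1 -1]
```

Note that no pseudo-absence data are simulated anywhere: the model is trained on presences alone, which is the theoretical advantage the abstract highlights.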