897 results for Mesh generation from image data


Relevance:

100.00%

Publisher:

Abstract:

The aim of this work was to study the separation of complex-forming metals from chloride solution by ion exchange. The literature part reviewed the formation of metal complexes, in particular the complexes formed by silver, calcium, magnesium, lead and zinc with chloride and nitrate. The literature part also covered the separation of metals in fixed-bed columns using continuous ion-exchange methods. In this work, the process alternatives for continuous ion exchange were classified into rotating and stationary columns, and the different process alternatives were examined with respect to column configurations. In the experimental part, the separation of divalent metals from monovalent metals was studied, and experimental data were generated for simulating a corresponding separation process. The experiments used an anion-exchange resin and a selective chelating ion-exchange resin. The adsorption of divalent calcium, magnesium, lead and zinc onto the resins was studied with equilibrium, kinetic and column experiments. The results of the equilibrium and column experiments with the anion-exchange resin showed that the resin adsorbs zinc effectively from chloride solutions, because zinc forms stable anionic chlorocomplexes. The adsorption of the other studied divalent metals onto the resin was considerably lower. Based on the results, the studied anion-exchange resin is a good option for separating zinc from the other studied divalent metals in a chloride environment. The equilibrium and column experiments with the chelating resin showed that the resin adsorbs divalent calcium, magnesium, lead and zinc well from chloride solutions, but does not adsorb monovalent silver. Based on the results, the separation of divalent metals from monovalent metals can be accomplished with the chelating ion-exchange resin used in the experiments.
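Batch equilibrium experiments like those described above are commonly summarized with an adsorption isotherm. As a minimal sketch (not the thesis's actual analysis), the following fits a Langmuir isotherm to hypothetical equilibrium data via the standard linearization; all concentrations and parameters are invented for illustration.

```python
import numpy as np

# Hypothetical equilibrium data: residual metal concentration c_eq (mmol/L)
# and resin loading q (mmol/g); here generated from assumed "true" parameters.
c_eq = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
q_max_true, K_true = 1.2, 1.5
q = q_max_true * K_true * c_eq / (1.0 + K_true * c_eq)

# Linearized Langmuir: c/q = c/q_max + 1/(K*q_max), i.e. linear in c.
slope, intercept = np.polyfit(c_eq, c_eq / q, 1)
q_max = 1.0 / slope                 # saturation capacity (mmol/g)
K = 1.0 / (intercept * q_max)       # affinity constant (L/mmol)
print(f"q_max = {q_max:.2f} mmol/g, K = {K:.2f} L/mmol")
```

On noiseless data the fit recovers the assumed parameters exactly; with real experimental data, nonlinear least squares on the untransformed isotherm is usually preferred.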

Abstract:

The consumption of manganese is increasing, but huge amounts of manganese still end up in waste in hydrometallurgical processes. The recovery of manganese from multi-metal solutions at low concentrations may not be economical. In addition, poor iron control typically prevents the production of high-purity manganese. Separation of iron from manganese can be done with chemical precipitation or solvent extraction methods. Combined carbonate precipitation with air oxidation is a feasible method to separate iron and manganese due to its fast kinetics, good controllability and economical reagents. In addition, the leaching of manganese carbonate is easier and less acid-consuming than that of hydroxide or sulfide precipitates. Highly efficient selective iron removal from MnSO4 solution is achieved by combined oxygen or air oxidation and CaCO3 precipitation at pH > 5.8 and at a redox potential > 200 mV. In order to avoid gypsum formation, soda ash should be used instead of limestone. In that case, however, extra attention needs to be paid to the reagent mole ratios in order to avoid manganese coprecipitation. After iron removal, pure MnSO4 solution was obtained by solvent extraction using the organophosphorus reagents di-(2-ethylhexyl)phosphoric acid (D2EHPA) and bis(2,4,4-trimethylpentyl)phosphinic acid (CYANEX 272). The Mn/Ca and Mn/Mg selectivities can be increased by decreasing the temperature from the commonly used range (40–60 °C) to 5 °C. The extraction order of D2EHPA (Ca before Mn) remains unchanged at low temperature, but lowering the temperature increases viscosity and slows phase separation. Of these reagents, CYANEX 272 is selective for Mn over Ca and would therefore be the better choice if Ca is present in solution. A three-stage Mn extraction followed by two-stage scrubbing and two-stage sulfuric acid stripping is an effective method of producing a very pure MnSO4 intermediate solution for further processing.
From the intermediate MnSO4, some special Mn products for ion exchange applications were synthesized and studied. Three types of octahedrally coordinated manganese oxide (OMS) materials were chosen for synthesis as alternative final products for manganese: layer-structured Na-birnessite, and tunnel-structured Mg-todorokite and K-kryptomelane. As an alternative source of pure MnSO4 intermediate, kryptomelane was synthesized using synthetic hydrometallurgical tailings. The results show that the studied OMS materials selectively adsorb Cu, Ni, Cd and K in the presence of Ca and Mg. It was also found that the exchange rates were reasonably high due to the small particle dimensions. The materials are stable under the studied conditions, and their maximum Cu uptake capacity was 1.3 mmol/g. Competitive uptake of metals and acid was studied using equilibrium, batch kinetic and fixed-bed measurements. The experimental data were correlated with a dynamic model, which also accounts for the dissolution of the framework manganese. Manganese oxide micro-crystals were also bound onto silica to prepare a composite material with a particle size large enough to be used in column separation experiments. The MnOx/SiO2 ratio was found to significantly affect the properties of the composite: the higher the ratio, the lower the specific surface area, pore volume and pore size. On the other hand, a higher amount of silica binder gives the composites better mechanical properties. Birnessite and todorokite can be aggregated successfully with colloidal silica at pH 4 and at a MnO2/SiO2 weight ratio of 0.7. The best gelation and drying temperature was 110 °C, and sufficiently strong composites were obtained by additional heat treatment at 250 °C for 2 h. The results show that silica-supported MnO2 materials can be utilized to separate copper from nickel and cadmium.
The behavior of the composites can be explained reasonably well with the presented model and the parameters estimated from the data for the unsupported oxides. The metal uptake capacities of the prepared materials were quite small; for example, the final copper loading was 0.14 mmol/g MnO2. According to the results, these special MnO2 materials have potential for specific environmental applications in the uptake of harmful metal ions.
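The solvent-extraction selectivity discussed above is conventionally quantified with distribution ratios and a separation factor. The sketch below computes both from hypothetical single-stage contact data; the concentrations are invented for illustration and are not taken from the study.

```python
# Hypothetical single-stage extraction at an organic/aqueous ratio of 1.
# Distribution ratio D = (extracted amount)/(amount left in the aqueous phase);
# the Mn/Ca separation factor is the ratio of the two distribution ratios.
feed = {"Mn": 20.0, "Ca": 0.50}        # aqueous feed concentrations (g/L)
raffinate = {"Mn": 2.0, "Ca": 0.45}    # aqueous phase after contact (g/L)

D = {m: (feed[m] - raffinate[m]) / raffinate[m] for m in feed}
beta = D["Mn"] / D["Ca"]
print(f"D_Mn = {D['Mn']:.2f}, D_Ca = {D['Ca']:.3f}, separation factor = {beta:.0f}")
```

A separation factor well above 1 indicates that Mn reports to the organic phase far more readily than Ca, which is the behavior attributed to CYANEX 272 above.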

Abstract:

This study investigated biogas generation from swine manure supplemented with residual glycerine. Biogas was produced by digestion in an anaerobic batch system under mesophilic conditions (35 °C), with a hydraulic retention time of 48 days. The experiment was performed with 48 samples divided into four groups: one was kept as a control (without glycerine) and the other three were supplemented with residual glycerine at 3%, 6% and 9% of the total sample volume, respectively. The volume of biogas was monitored by an automated laboratory-scale reading system, and the quality of the biogas (CH4) was measured with a specific sensor. The results showed that residual glycerine has high potential for biogas production, with increases of 124.95%, 156.98% and 197.83% in the 3%, 6% and 9% groups, respectively, relative to the control. However, very high organic loads can compromise the digestion process, affecting the quality of the generated biogas in terms of methane content.
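The reported percentages are increases relative to the control group. As a trivial illustration, the calculation is simply the relative difference in cumulative volume; the volumes below are invented so that they reproduce the study's reported percentages, they are not the measured data.

```python
# Hypothetical cumulative biogas volumes (arbitrary units) per treatment group,
# constructed to match the percentage increases reported in the abstract.
control = 100.0
groups = {"3% glycerine": 224.95, "6% glycerine": 256.98, "9% glycerine": 297.83}

for name, volume in groups.items():
    increase = (volume - control) / control * 100.0
    print(f"{name}: +{increase:.2f}% vs control")
```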

Abstract:

This study aimed to assess the degree of similarity between thematic maps generated from different sampling grids of weed plants in a 7.95-hectare commercial agricultural area. Monocotyledons and dicotyledons were counted in the 2012/2013 and 2013/2014 harvests, before soybean planting, in the fallow period after the wheat harvest in both years. A regular 10 x 10 m grid, used as the reference, was produced to sample the invasive plants; counting was done in 1 m² at each sample point, totaling 795 samples in each year. This reference was compared to regular grids of 30 and 50 m generated by excluding data from the standard grid. Twenty-two composite soil samples were taken at a depth of 0-20 cm to correlate soil properties with weed occurrence. Thematic maps were generated using Inverse Distance Weighting (IDW) interpolation; when comparing the maps generated from each grid with the reference map, the kappa coefficient was used to assess the loss of map quality as the number of sample points was reduced. The loss of map quality was lower in 2013 than in 2012 when the sampling density was reduced. The 30 x 30 m grids satisfactorily described the dicotyledon infestation data, and the 50 x 50 m grids adequately described the monocotyledon weed infestation, compared to the standard 10 x 10 m grids.
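A minimal sketch of the two tools named above: IDW interpolation and Cohen's kappa for comparing classified maps. The data are toy values, not the study's, and a GIS package would normally be used for the real workflow.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse Distance Weighting: estimate values at query points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power   # guard against zero distance
    return (w * values).sum(axis=1) / w.sum(axis=1)

def kappa(map_a, map_b):
    """Cohen's kappa between two classified maps (1-D arrays of class labels)."""
    a, b = np.asarray(map_a), np.asarray(map_b)
    po = np.mean(a == b)                                   # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c)             # chance agreement
             for c in np.union1d(a, b))
    return (po - pe) / (1.0 - pe)

# IDW: the midpoint between two samples gets the average of their values.
est = idw(np.array([[0.0, 0.0], [10.0, 0.0]]),
          np.array([0.0, 10.0]),
          np.array([[5.0, 0.0]]))
print(est)                                   # -> [5.]

# Kappa: agreement between a reference classification and a coarser-grid one.
ref    = np.array([0, 0, 1, 1, 2, 2, 1, 0])
coarse = np.array([0, 0, 1, 2, 2, 2, 1, 0])
print(f"kappa = {kappa(ref, coarse):.3f}")   # 1.0 would be perfect agreement
```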

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Abstract:

Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust-region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space.
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
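The thesis's trust-region Newton ridge projection is beyond a short example, but the closely related task of ascending a Gaussian kernel density to a local maximum can be sketched with mean-shift updates. This is a simpler fixed-point scheme, not the thesis's method, and all data below are synthetic.

```python
import numpy as np

def kde_mean_shift(x0, data, h, steps=200):
    """Ascend a Gaussian kernel density estimate toward a local maximum.
    Each step moves x to the kernel-weighted mean of the data, which
    follows the density gradient (the mean-shift fixed-point iteration)."""
    x = np.asarray(x0, float)
    for _ in range(steps):
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * h ** 2))
        x = (w[:, None] * data).sum(axis=0) / w.sum()
    return x

# One synthetic cluster centered at the origin; start the ascent off-center.
rng = np.random.default_rng(1)
data = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))
mode = kde_mean_shift([1.0, 1.0], data, h=0.5)
print(mode)   # converges near the cluster center (0, 0)
```

A Newton-type method, as used in the thesis, would instead exploit the Hessian of the density for faster local convergence and for projecting onto ridges rather than maxima.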

Abstract:

This report describes the development of a technique for automatic wheezing recognition in digitally recorded lung sounds. The method is based on the extraction and processing of spectral information from the respiratory cycle and the use of these data for user feedback and automatic recognition. The respiratory cycle is first pre-processed to normalize its spectral information, and its spectrogram is then computed. After this procedure, the spectrogram image is processed by a two-dimensional convolution filter and a half-threshold, in order to increase the contrast and isolate its highest-amplitude components, respectively. Then, to generate more compact data for automatic recognition, the spectral projection of the processed spectrogram is computed and stored as an array. The highest-magnitude values of the array and their respective spectral values are located and used as inputs to a multi-layer perceptron artificial neural network, which yields an automatic indication of the presence of wheezes. For validation of the methodology, lung sounds recorded from three different repositories were used. The results show that the proposed technique achieves 84.82% accuracy in the detection of wheezing for an isolated respiratory cycle and 92.86% accuracy when detection is carried out using groups of respiratory cycles obtained from the same person. The system also presents the original recorded sound and the post-processed spectrogram image so that users can draw their own conclusions from the data.
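A rough sketch of the pre-processing stages named above (spectrogram, thresholding, spectral projection), applied to a synthetic signal. The 2D contrast filter and the MLP classifier are omitted, the thresholding rule is a simplified stand-in, and the signal parameters are invented.

```python
import numpy as np

def spectrogram(x, n_fft=256, hop=128):
    """Magnitude spectrogram via a short-time FFT with a Hann window."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T   # (freq, time)

def half_threshold(S):
    """Zero everything below the mean magnitude, keeping the strongest components."""
    return np.where(S >= S.mean(), S, 0.0)

# Synthetic 'respiratory cycle': broadband noise plus a sustained 400 Hz tone
# standing in for a wheeze (a wheeze is a narrow-band, sustained sound).
fs = 4000
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
x = 0.3 * rng.standard_normal(t.size) + np.sin(2 * np.pi * 400 * t)

S = half_threshold(spectrogram(x))
projection = S.sum(axis=1)                  # spectral projection over time
peak_hz = projection.argmax() * fs / 256    # FFT bin index -> frequency in Hz
print(f"dominant spectral component near {peak_hz:.0f} Hz")
```

In the full method, the peaks of this projection (locations and magnitudes) would be fed to the neural network as features.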

Abstract:

The classical renin-angiotensin system (RAS) consists of enzymes and peptides that regulate blood pressure and electrolyte and fluid homeostasis. Angiotensin II (Ang II) is one of the most important and extensively studied components of the RAS. The beneficial effects of angiotensin converting enzyme (ACE) inhibitors in the treatment of hypertension and heart failure, among other diseases, are well known. However, it has been reported that patients chronically treated with effective doses of these inhibitors do not show suppression of Ang II formation, suggesting the involvement of pathways alternative to ACE in the generation of Ang II. Moreover, the finding that the concentration of Ang II is preserved in the kidney, heart and lungs of mice with an ACE deletion indicates the important role of alternative pathways under basal conditions to maintain the levels of Ang II. Our group has characterized the serine protease elastase-2 as an alternative pathway for Ang II generation from Ang I in rats. A role for elastase-2 in the cardiovascular system was suggested by studies performed in heart and conductance and resistance vessels of normotensive and spontaneously hypertensive rats. This mini-review will highlight the pharmacological aspects of the RAS, emphasizing the role of elastase-2, an alternative pathway for Ang II generation.

Abstract:

Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining these data with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to be able to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended for implementation on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Furthermore, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm.
It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, the models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
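Nested cross-validation, mentioned above, keeps the feature selection inside the resampling loop so that the reported accuracy is honest. A self-contained sketch (not the thesis's pipeline) using a variance-based filter and a nearest-centroid classifier on toy data; the classifier, the filter criterion, and the data are all illustrative assumptions.

```python
import numpy as np

def nc_fit(X, y):
    """Nearest-centroid classifier: one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nc_predict(model, X):
    classes = list(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

def nested_cv(X, y, k_values, n_outer=5, n_inner=4, seed=0):
    """Nested CV: the inner loop picks k (features kept by a variance filter);
    the outer loop estimates generalization accuracy of the whole procedure."""
    rng = np.random.default_rng(seed)
    outer = np.array_split(rng.permutation(len(y)), n_outer)
    accs = []
    for i, test in enumerate(outer):
        train = np.concatenate([f for j, f in enumerate(outer) if j != i])
        best_k, best = None, -1.0
        for k in k_values:                       # inner loop: tune k on train only
            inner = np.array_split(train, n_inner)
            scores = []
            for m, val in enumerate(inner):
                tr = np.concatenate([f for n, f in enumerate(inner) if n != m])
                top = np.argsort(X[tr].var(axis=0))[-k:]     # filter selection
                model = nc_fit(X[tr][:, top], y[tr])
                scores.append(np.mean(nc_predict(model, X[val][:, top]) == y[val]))
            if np.mean(scores) > best:
                best_k, best = k, np.mean(scores)
        top = np.argsort(X[train].var(axis=0))[-best_k:]     # refit on full train
        model = nc_fit(X[train][:, top], y[train])
        accs.append(np.mean(nc_predict(model, X[test][:, top]) == y[test]))
    return np.mean(accs)

# Toy data: 2 informative features hidden among 50 noise features.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.standard_normal((200, 52))
X[:, :2] += 2.0 * y[:, None]             # signal only in the first two columns
acc = nested_cv(X, y, [2, 10, 52])
print(f"nested-CV accuracy: {acc:.2f}")
```

Selecting the features on the full data set before cross-validating would leak information and inflate the estimate, which is exactly the failure mode the text warns about.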

Abstract:

This study investigated, retrospectively, whether recidivism in a sample of court-ordered graduates of an alcohol education and awareness program could be predicted. The alcohol education program was based on adult education principles and was philosophically akin to the thoughts of Drs. Jack Mezirow, Stephen Brookfield, and Patricia Cranton. Data on the sample of 214 Halton IDEA (Impaired Driver Education and Awareness) graduates were entered into a spreadsheet, and descriptive statistics were generated. Each of the 214 program graduates had taken several tests during the course of the IDEA program. These tests measured knowledge, attitude about impaired driving, and degree of alcohol involvement. Test scores were analyzed to determine whether those IDEA graduates who recidivated differed in any measurable way from those who had no further criminal convictions after a period of at least three years. Their criminal records were obtained from the Canadian Police Information Centre (CPIC). Those program graduates who reoffended were compared to the vast majority who did not. Results of the study indicated that there was no way to determine who would recidivate from the data that were collected. Further studies could use a qualitative model; follow-up interviews could be used to determine what impact, if any, attendance at the IDEA program had on the lives of the graduates.

Abstract:

This research identified and explored the various responses of ten women Registered Nurses displaced from full-time employment as staff nurses in general hospitals in southern Ontario. These nurses were among the hundreds in Ontario who were displaced between October 1991 and October 1995 as a result of organizational downsizing and other health care reform initiatives. The purpose of this research was to document the responses of nurses to job displacement, and how that experience impacted on a nurse's professional identity and her understanding of the nature and utilization of nursing labour. This study incorporated techniques consistent with the principles of naturalistic inquiry and the narrative tradition. A purposive sample was drawn from the Health Sector Training and Adjustment Program database. Data collection and analysis was a three-step process wherein the data collection in each step was informed by the data analysis in the preceding step. The main technique used for qualitative data collection was semistructured, individual and group interviews. Emerging from the data was a rich and textured story of how job displacement disrupted the meaningful connections nurses had with their work. In making meaning of this change, displaced nurses journeyed along a three-step path toward labour adjustment. Structural analysis was the interpretive lens used to view the historical, sociopolitical and ideological forces which constrained the choices reasonably available to displaced nurses while Kelly's personal construct theory was the lens used to view the process of making choices and reconstruing their professional identity.


Abstract:

As in most French-speaking Sub-Saharan African countries, the question of ageing and the situation of elderly people are not yet on the agenda in Niger, mainly because of the youth of its population on the one hand, and a stronger focus on children, adolescents and mothers on the other. Yet Niger faces an unprecedented economic crisis that may have harmful consequences for the living conditions of the elderly. On one hand, according to custom, the elderly person (usually a grandparent) mainly takes care of the young children (orphaned or not) entrusted to them by parents living in the same locality or elsewhere, or who have died. On the other hand, the absence of a young adult in a household where at least one elderly person lives is considered a worrying social phenomenon in countries with a high prevalence of HIV/AIDS. Niger is among the countries with the highest proportion of elderly people living with young children in the absence of the children's adult parents. However, despite high adult mortality, the lack of reliable data does not allow Niger to be classified among countries with high adult mortality due to HIV/AIDS. The explanation for this situation must therefore be sought in individual and community differences. Until the early 1990s, most studies on the elderly conducted in Sub-Saharan Africa were qualitative, whereas the most recent ones are based on census data or sociodemographic and economic surveys. The living conditions of the elderly and the consequences of poverty and HIV/AIDS on them are the main themes covered so far with existing data. Longitudinal data, however, essential for analyzing certain aspects of the life cycle of the elderly, are still lacking.
Since the study is not sociological, we attempt to explain the phenomenon on an exploratory basis using quantitative demographic data, more precisely the general population census. The individual-level analysis was carried out using logistic regression in STATA, while at the contextual level we used multilevel analysis with the HLM software (version 6.0). The results indicate that living in the absence of a young adult, and in a skipped-generation household, depends mainly on the sociodemographic status of the elderly person in Niger. For example, it appears that marriage benefits elderly men, while widowhood isolates them more than it does elderly women. At the contextual level, socioeconomic factors influence the living conditions of the elderly. The study shows that the degree of urbanization of a commune increases the risk of isolation of an elderly person residing there, whereas the level of poverty reduces it. Our results should nevertheless be taken with caution: first, because there are no reference studies on the subject either in Niger or in the French-speaking Sahelian sub-region; second, because the phenomenon studied could be measured in several ways depending on the context and the available data, and an in-depth analysis of the effects of marital status would require greater knowledge of the phenomenon among the elderly. Finally, given the low prevalence of HIV/AIDS in Niger, the main explanatory factors for living in a skipped-generation household (both for the elderly and for children) could be the fostering of children or adult mortality due to other causes such as malaria, tuberculosis and infectious diseases. However, the absence of information on these aspects in the data used did not allow us to include them in our study.
Thus, given the difficulty of grasping the contours of the phenomenon, future programs for the elderly in Niger and French-speaking Sub-Saharan Africa must be based on concrete studies of the social and economic dimensions of the phenomenon. Keywords: Niger - elderly people - living conditions - ways of living - intergenerational cohabitation - comparative studies - absence of a young adult - skipped-generation household - Africa.
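The individual-level analysis above relies on logistic regression (run in STATA in the study). A minimal numpy-only sketch on invented data, with hypothetical covariates (sex, widowhood) and assumed effect sizes; none of the numbers come from the census data.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=5000):
    """Plain gradient-descent logistic regression (intercept in column 0)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)     # ascent on the log-likelihood
    return w

# Hypothetical individual-level data: sex (1 = male), widowed (1 = yes),
# outcome = lives in a household without a young adult.
rng = np.random.default_rng(0)
n = 2000
sex = rng.integers(0, 2, n)
widowed = rng.integers(0, 2, n)
logit = -1.0 - 0.8 * sex + 1.2 * widowed     # assumed effects, not the study's
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), sex, widowed])
w = fit_logistic(X, y)
print(f"odds ratio for widowhood: {np.exp(w[2]):.2f}")  # > 1: raises the odds
```

The study's contextual (commune-level) effects would additionally require a multilevel model with random intercepts per commune, which this sketch does not attempt.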

Abstract:

While some mechanisms considered crucial for the transformation of rainfall into streamflow remain poorly understood, the concept of hydrological connectivity has recently been proposed to explain why certain processes are triggered episodically, depending on the characteristics of rainfall events and antecedent soil moisture. Adopting this new concept in hydrology remains difficult, however, since there is no consensus on the definition of connectivity, its measurement, its integration into hydrological models, or its behavior across spatial and temporal scales. The aim of this doctoral work is therefore to refine the definition, measurement, aggregation and prediction of processes related to hydrological connectivity by addressing the following questions: 1) What methodological framework should be adopted for a study of hydrological connectivity? 2) How can the degree of hydrological connectivity of watersheds be assessed from field data? 3) To what extent should our knowledge of hydrological connectivity lead to modifications of the postulates of hydrological modeling? Three study approaches are distinguished: i) a "black box" approach, based solely on rainfall and streamflow data without examining the internal functioning of the watershed; ii) a "grey box" approach relying on point geochemical data illustrating the internal dynamics of the watershed; and iii) a "white box" approach centered on the analysis of exhaustive spatial patterns of surface topography, subsurface topography and soil moisture. These three approaches are then validated experimentally in the Hermine watershed (Basses Laurentides, Québec).
Four types of hydrological responses are distinguished according to their magnitude and synchronicity, their relative occurrence depending on antecedent conditions. High streamflows recorded at the watershed outlet are associated with an increased contribution from certain runoff sources, indicating a strengthened hydraulic link, and hence a high degree of hydrological connectivity, between the sources involved and the stream. Saturated areas covering more than 0.85 ha are deemed critical for the generation of high flood flows. It is also shown that the statistical properties of soil moisture patterns in a humid temperate forest environment differ markedly from those observed in a dry temperate grassland, hence the need for different computational methods to derive spatial connectivity metrics in the two types of environments. Finally, the dual existence of "linear" and "nonlinear" contributing sources is demonstrated at the Hermine. These results suggest revising concepts that underlie the design and execution of hydrological models. The originality of this thesis lies in its very subject: the research objectives pursued are consistent with the renewed hydrological theory that advocates moving beyond small-scale case studies in favor of examining emergent watershed properties such as hydrological connectivity. The major contribution of this thesis is thus the proposal of a unified definition of connectivity, a methodological framework, field measurement approaches, technical tools and avenues for the modeling of hydrological systems.
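One concrete spatial-connectivity metric implied above is the area of contiguous saturated patches, compared against the 0.85 ha threshold. The sketch below labels 4-connected saturated patches on a hypothetical gridded soil-moisture map with a plain flood fill; the grid, cell size and patch shapes are invented.

```python
import numpy as np

def patch_areas(saturated, cell_area_ha):
    """Areas (ha) of 4-connected saturated patches via an iterative flood fill."""
    visited = np.zeros_like(saturated, dtype=bool)
    rows, cols = saturated.shape
    areas = []
    for r0 in range(rows):
        for c0 in range(cols):
            if saturated[r0, c0] and not visited[r0, c0]:
                stack, n_cells = [(r0, c0)], 0
                visited[r0, c0] = True
                while stack:                      # grow the patch cell by cell
                    r, c = stack.pop()
                    n_cells += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and saturated[rr, cc] and not visited[rr, cc]):
                            visited[rr, cc] = True
                            stack.append((rr, cc))
                areas.append(n_cells * cell_area_ha)
    return areas

# Hypothetical saturation map on a 10 m grid (1 cell = 0.01 ha): one large
# patch (12 x 10 cells = 1.2 ha) and one small patch (2 x 2 cells = 0.04 ha).
grid = np.zeros((40, 40), dtype=bool)
grid[5:17, 5:15] = True
grid[30:32, 30:32] = True

areas = patch_areas(grid, cell_area_ha=0.01)
print([a for a in areas if a >= 0.85])   # patches large enough to drive high flows
```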

Abstract:

Deep learning is a rapidly growing research area in machine learning that has achieved impressive results in tasks ranging from image classification to speech recognition and language modeling. Recurrent neural networks, a subclass of deep architectures, appear particularly promising. Recurrent networks can capture temporal structure in data. They potentially have the ability to learn correlations between events far apart in time and to store information indefinitely in their internal memory. In this work, we first attempt to understand why depth is useful. Similarly to other works in the literature, our results show that deep models can be more efficient than shallow models at representing certain families of functions. Unlike those works, we carry out our theoretical analysis on deep feedforward networks with piecewise linear activation functions, since this type of model is currently the state of the art in various classification tasks. The second part of this thesis concerns the learning process. We analyze some recently proposed optimization techniques, such as Hessian-free optimization, natural gradient descent and Krylov subspace descent. We propose the theoretical framework of generalized trust-region methods and show that several of these recently developed algorithms can be viewed from this perspective. We argue that some members of this family of approaches may be better suited than others to non-convex optimization. The last part of this document focuses on recurrent neural networks.
We first study the concept of memory and attempt to answer the following questions: Can recurrent networks exhibit unbounded memory? Can this behavior be learned? We show that this is possible if hints are provided during training. We then explore two problems specific to training recurrent networks, namely vanishing and exploding gradients. Our analysis concludes with a solution to the exploding gradient problem that involves bounding the norm of the gradient. We also propose a regularization term designed specifically to reduce the vanishing gradient problem. On a synthetic dataset, we show empirically that these mechanisms can enable recurrent networks to learn autonomously to memorize information for an indefinite period. Finally, we explore the notion of depth in recurrent neural networks. Compared to feedforward networks, the definition of depth in recurrent networks is often ambiguous. We propose different ways of adding depth to recurrent networks and evaluate these proposals empirically.
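The exploding-gradient remedy described above, bounding the gradient norm, is commonly implemented as global norm clipping. A minimal numpy sketch (framework-agnostic; deep learning libraries ship their own version of this utility):

```python
import numpy as np

def clip_gradient_norm(grads, max_norm):
    """Rescale a list of gradient arrays so that their global L2 norm is at
    most max_norm; gradients below the threshold pass through unchanged."""
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total > max_norm:
        scale = max_norm / total
        grads = [g * scale for g in grads]
    return grads, total

# Two parameter gradients with a global norm of sqrt(9 + 16 + 144) = 13.
g = [np.array([3.0, 4.0]), np.array([12.0])]
clipped, norm_before = clip_gradient_norm(g, max_norm=5.0)
norm_after = np.sqrt(sum(np.sum(x ** 2) for x in clipped))
print(norm_before, norm_after)   # the clipped norm is capped at max_norm
```

Clipping preserves the gradient's direction while capping its magnitude, so an occasional exploding step cannot throw the parameters far from the current solution.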