950 results for 080403 Data Structures
Abstract:
This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of spatial, temporal and spatio-temporal clusters of environmental point data. The developed clustering methods were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in the Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Typically, environmental phenomena can be modelled as stochastic point processes where each event, e.g. a forest fire ignition point, is characterised by its spatial location and occurrence in time. Additionally, information such as burned area, ignition causes, land use, and topographic, climatic and meteorological features can also be used to characterise the studied phenomenon. Thereby, the space-time pattern characterisation represents a powerful tool to understand the distribution and behaviour of the events and their correlation with underlying processes, for instance socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point process measures for both global (e.g. the Morisita Index, the box-counting fractal method, the multifractal formalism and Ripley's K-function) and local (e.g. scan statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are global in character and do not consider the complex spatial constraints, high variability and multivariate nature of the events. Therefore, we propose a statistical framework that takes into account the complexities of the geographical space where phenomena take place, by introducing the Validity Domain concept and carrying out clustering analyses on data within differently constrained geographical spaces, hence assessing the relative degree of clustering of the real distribution. Moreover, specifically for the forest fire case, this research proposes two new methodologies: one for defining and mapping the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructure, and one for predicting fire ignition susceptibility. In this regard, the main objective of this Thesis was to carry out basic statistical/geospatial research with a strong applied component, in order to analyse and describe complex phenomena as well as to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular forest fire occurrences. Thus, this Thesis responds to the increasing demand for environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in 5 scientific journals. National and international collaborations were also established and successfully accomplished.
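As a rough illustration of the kind of global clustering measure named above, here is a minimal Python sketch of a naive (edge-uncorrected) estimator of Ripley's K-function. The function name and the uniform toy data are illustrative assumptions; a real analysis would add edge corrections and the Validity Domain constraints the thesis introduces.

```python
import numpy as np

def ripley_k(points, radii, area):
    """Naive (edge-uncorrected) estimate of Ripley's K-function."""
    n = len(points)
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))      # pairwise distances
    d[np.diag_indices(n)] = np.inf             # exclude self-pairs
    lam = n / area                             # estimated intensity
    # K(r) = E[# further events within r of a typical event] / intensity
    return np.array([(d <= r).sum() / (n * lam) for r in radii])

# Under complete spatial randomness K(r) ~ pi * r^2; larger values
# indicate clustering at scale r.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(200, 2))
print(ripley_k(pts, radii=np.array([0.05, 0.10]), area=1.0))
```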
Abstract:
Phase-encoded nanostructures such as Quick Response (QR) codes made of metallic nanoparticles are proposed for use in security and authentication applications. We present a polarimetric optical method able to authenticate random phase-encoded QR codes. The system is illuminated with polarized light and the QR code is encoded using a phase-only random mask. Using classification algorithms, it is possible to validate the QR code by examining the polarimetric signature of the speckle pattern. We used the Kolmogorov-Smirnov statistical test and Support Vector Machine algorithms to authenticate the phase-encoded QR codes using their polarimetric signatures.
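A minimal sketch of the two classification tools named above, on synthetic stand-ins for polarimetric speckle signatures (the feature layout and all numbers are assumptions, not the paper's data):

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.svm import SVC

# Hypothetical features: each row is a speckle signature (e.g., an
# intensity histogram under a given polarization state).
rng = np.random.default_rng(1)
genuine = rng.normal(0.0, 1.0, size=(50, 64))
forged = rng.normal(0.3, 1.2, size=(50, 64))

# Kolmogorov-Smirnov test: do two signatures come from the same
# distribution? A small p-value flags a counterfeit.
stat, p = ks_2samp(genuine[0], forged[0])
print(f"KS statistic = {stat:.3f}, p-value = {p:.3g}")

# Support Vector Machine trained on labelled signatures.
X = np.vstack([genuine, forged])
y = np.array([1] * 50 + [0] * 50)
clf = SVC(kernel="rbf").fit(X, y)
print("predicted labels:", clf.predict(forged[:3]))
```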
Abstract:
This thesis studies the properties and usability of operators called t-norms, t-conorms and uninorms, as well as many-valued implications and equivalences. Weights and a generalized mean are embedded into these operators for aggregation; because the resulting operators are used for comparison tasks, they are referred to as comparison measures. The thesis illustrates how these operators can be weighted with differential evolution and aggregated with a generalized mean, and the kinds of comparison measures that can be obtained from this procedure. New operators suitable for comparison measures are suggested. These operators are combination measures based on the use of t-norms and t-conorms, the generalized 3Π-uninorm, and pseudo-equivalence measures based on S-type implications. The empirical part of this thesis demonstrates how these new comparison measures work in the field of classification, for example in the classification of medical data. The second application area is from the field of sports medicine: an expert system for defining an athlete's aerobic and anaerobic thresholds. The core of this thesis offers definitions for comparison measures and illustrates that there is no actual difference in the results achieved in comparison tasks between comparison measures based on distance and comparison measures based on many-valued logical structures. The approach in this thesis has been highly practical, and all usage of the measures has been validated mainly by practical testing. In general, many different types of operators suitable for comparison tasks have been presented in the fuzzy logic literature, but there has been little or no experimental work with these operators.
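The aggregation step described above is easy to make concrete. Below is a minimal sketch, assuming a Łukasiewicz-style equivalence for the feature-wise comparison and fixed weights; the thesis tunes the weights with differential evolution, and the operator choice here is illustrative, not the thesis's exact construction.

```python
import numpy as np

def lukasiewicz_equiv(a, b):
    # Many-valued equivalence on [0, 1]: degree to which a and b agree.
    return 1.0 - np.abs(a - b)

def comparison_measure(x, y, w, p):
    """Weighted generalized-mean aggregation of feature-wise equivalences.

    x, y : feature vectors scaled to [0, 1]
    w    : non-negative weights summing to 1
    p    : generalized-mean exponent (p -> -inf gives min, p = 1 the
           weighted arithmetic mean, p -> +inf gives max)
    """
    s = lukasiewicz_equiv(x, y)
    return float(np.sum(w * s ** p) ** (1.0 / p))

x = np.array([0.2, 0.8, 0.5])
y = np.array([0.3, 0.7, 0.5])
w = np.array([0.5, 0.3, 0.2])
print(comparison_measure(x, y, w, p=2.0))  # similarity score in [0, 1]
```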
Abstract:
This article carries out an empirical examination of the origin of the differences between immigrant and native-born wage structures in the Spanish labour market. Special attention is given in the analysis to the role played by the occupational and workplace segregation of immigrants. Legal immigrants from developing countries exhibit lower mean wages and a more compressed wage structure than native-born workers. By contrast, immigrants from developed countries display higher mean wages and a more dispersed wage structure. The main empirical finding is that the disparities between the wage distributions of the native-born and both groups of immigrants are largely explained by their different observed characteristics, with an especially important influence of workplace and, above all, occupational segregation.
Abstract:
Recent years have produced great advances in instrumentation technology. The amount of available data has been increasing due to the simplicity, speed and accuracy of current spectroscopic instruments. Most of these data are, however, meaningless without proper analysis, which has been one of the reasons for the ever-growing success of multivariate handling of such data. Industrial data is commonly not designed data; in other words, there is no exact experimental design, but rather the data have been collected as a routine procedure during an industrial process. This places certain demands on the multivariate modeling, as the selection of samples and variables can have an enormous effect. Common approaches in the modeling of industrial data are PCA (principal component analysis) and PLS (projection to latent structures, or partial least squares), but there are also other methods that should be considered. The more advanced methods include multi-block modeling and nonlinear modeling. In this thesis it is shown that the results of data analysis vary according to the modeling approach used, making the selection of the modeling approach dependent on the purpose of the model. If the model is intended to provide accurate predictions, the approach should differ from the case where the purpose of modeling is mostly to obtain information about the variables and the process. For industrial applicability it is essential that the methods are robust and sufficiently simple to apply. In this way the methods and the results can be compared and an approach selected that is suitable for the intended purpose. In this thesis, data analysis methods are compared using data from different fields of industry. In the first two papers, the multi-block method is considered for data originating from the oil and fertilizer industries, and the results are compared to those from PLS and priority PLS. The third paper considers the applicability of multivariate models to process control for a reactive crystallization process. In the fourth paper, nonlinear modeling is examined with a data set from the oil industry; the response has a nonlinear relation to the descriptor matrix, and the results are compared between linear modeling, polynomial PLS and nonlinear modeling using nonlinear score vectors.
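To make the prediction-versus-description distinction concrete, here is a small Python sketch on synthetic data (not the thesis's industrial datasets): PCA summarizes the variance of the descriptors alone, while PLS builds latent structures that covary with the response and is therefore the more natural choice when prediction is the goal.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

# Synthetic stand-in for undesigned industrial data: correlated,
# spectra-like descriptors X and a response y driven by one latent factor.
rng = np.random.default_rng(2)
latent = rng.normal(size=(100, 3))
X = latent @ rng.normal(size=(3, 20)) + 0.1 * rng.normal(size=(100, 20))
y = latent[:, 0] + 0.05 * rng.normal(size=100)

# PCA: directions of maximum variance in X, ignoring y.
pca = PCA(n_components=3).fit(X)
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))

# PLS: latent variables chosen to covary with y.
pls = PLSRegression(n_components=3).fit(X, y)
print("PLS R^2 on the training data:", round(pls.score(X, y), 3))
```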
Abstract:
Especially in global enterprises, key data is fragmented across multiple Enterprise Resource Planning (ERP) systems. The data is thus inconsistent, fragmented and redundant across the various systems. Master Data Management (MDM) is a concept that creates cross-references between customers, suppliers and business units, and enables corporate hierarchies and structures. The overall goal of MDM is the ability to create an enterprise-wide consistent data model, which enables analysis and reporting of customer and supplier data. The goal of the study was to define the properties and success factors of a master data system. The theoretical background was based on the literature, and the case consisted of enterprise-specific needs and demands. The theoretical part presents the concept, background and principles of MDM and then the phases of a system planning and implementation project. The case consists of the background, a definition of the as-is situation, the project definition and evaluation criteria, and concludes with the key results of the thesis. The final chapter, Conclusions, combines common principles with the results of the case. The case part ended up dividing the important factors of the system into success factors, technical requirements and business benefits. To clarify the project and find funding for it, the business benefits have to be defined and their realization has to be monitored. The thesis identified six success factors for the MDM system: a well-defined business case; data management and monitoring; data models and structures defined and maintained; customer and supplier data governance, delivery and quality; commitment; and continuous communication with the business. Technical requirements emerged several times during the study and therefore cannot be ignored in the project. The Conclusions chapter reviews these factors at a general level. The success factors and technical requirements are related to the essentials of MDM: governance, action and quality. This chapter could be used as guidance in a master data management project.
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
We determined whether ANP (atrial natriuretic peptide) concentrations, measured by radioimmunoassay in the ANPergic cerebral regions involved in the regulation of sodium intake and excretion and in the pituitary gland, correlated with differences in sodium preference among 40 male Wistar rats (180-220 g). Sodium preference was measured as the mean spontaneous ingestion of 1.5% NaCl solution during a test period of 12 days. The relevant tissues included the olfactory bulb (OB), the posterior and anterior lobes of the pituitary gland (PP and AP, respectively), the median eminence (ME), the medial basal hypothalamus (MBH), and the region anteroventral to the third ventricle (AV3V). We also measured ANP content in the right (RA) and left atrium (LA) and in plasma. The concentrations of ANP in the OB and the AP were negatively correlated with sodium ingestion during the preceding 24 h, since an increase of ANP in these structures was associated with reduced ingestion and vice versa (OB: r = -0.3649, P<0.05; AP: r = -0.3291, P<0.05). Moreover, the AP exhibited a correlation between ANP concentration and mean NaCl intake (r = -0.4165, P<0.05), but this was not the case for the OB (r = 0.2422). This suggests that differences in sodium preference among individual male rats can be related to variations in AP ANP levels. Earlier studies indicated that the OB is involved in the control of NaCl ingestion. Our data suggest that the OB ANP level may play a role mainly in day-to-day variations of sodium ingestion in the individual rat.
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate; examples include the identification of faults from seismic data and of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
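To illustrate what "projecting a point onto a density ridge" means, here is a minimal Python sketch using a subspace-constrained mean-shift iteration on a Gaussian kernel density estimate. This is a simple stand-in, not the thesis's trust region Newton method: the mean-shift step is restricted to the Hessian eigen-subspace orthogonal to the ridge direction.

```python
import numpy as np

def scms_project(x, data, h, steps=100):
    """Project x onto a 1-D ridge of a Gaussian kernel density estimate."""
    for _ in range(steps):
        diff = (data - x) / h
        w = np.exp(-0.5 * (diff ** 2).sum(axis=1))       # kernel weights
        # Mean-shift vector: pull toward the locally weighted data mean.
        m = (w[:, None] * data).sum(axis=0) / w.sum() - x
        # Density Hessian (up to a positive scale) for the local eigenframe.
        H = (w[:, None, None] *
             (diff[:, :, None] * diff[:, None, :] - np.eye(x.size))).sum(axis=0)
        _, vecs = np.linalg.eigh(H)    # eigenvalues in ascending order
        V = vecs[:, :-1]               # normal subspace: drop the ridge tangent
        x = x + V @ (V.T @ m)          # step only orthogonally to the ridge
    return x

# Noisy circle: its density ridge is approximately the unit circle.
rng = np.random.default_rng(3)
t = rng.uniform(0.0, 2.0 * np.pi, 500)
data = np.c_[np.cos(t), np.sin(t)] + 0.1 * rng.normal(size=(500, 2))
print(scms_project(np.array([0.5, 0.0]), data, h=0.3))  # lands near radius 1
```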
Abstract:
The bedrock of old crystalline cratons is characteristically saturated with brittle structures formed during successive superimposed episodes of deformation and under varying stress regimes. As a result, the crust effectively deforms through the reactivation of pre-existing structures rather than through the activation, or generation, of new ones, and is said to be in a state of 'structural maturity'. By combining data from Olkiluoto Island, southwestern Finland, which has been investigated as the potential site of a deep geological repository for high-level nuclear waste, with observations from southern Sweden, it can be concluded that the southern part of the Svecofennian shield had already attained structural maturity during the Mesoproterozoic era. This indicates that the phase of activation of the crust, i.e. the time interval during which new fractures were generated, was brief in comparison with the subsequent reactivation phase. Structural maturity of the bedrock was also attained relatively rapidly in Namaqualand, western South Africa, after the formation of the first brittle structures during Neoproterozoic time. Subsequent brittle deformation in Namaqualand was controlled by the reactivation of pre-existing strike-slip faults. In such settings, seismic events are likely to occur through the reactivation of pre-existing zones that are favourably oriented with respect to prevailing stresses. In Namaqualand, this is shown for present-day seismicity by slip tendency analysis, and at Olkiluoto, for a Neoproterozoic earthquake reactivating a Mesoproterozoic fault. By combining detailed field observations with the results of paleostress inversions and relative and absolute time constraints, seven distinct superimposed paleostress regimes have been recognized in the Olkiluoto region. From oldest to youngest these are: (1) NW-SE to NNW-SSE transpression, which prevailed soon after 1.75 Ga, when the crust had cooled sufficiently to allow brittle deformation to occur; during this phase, conjugate NNW-SSE and NE-SW striking strike-slip faults were active simultaneously with the reactivation of SE-dipping low-angle shear zones and foliation planes. This was followed by (2) N-S to NE-SW transpression, which caused partial reactivation of structures formed in the first event; (3) NW-SE extension during the Gothian orogeny, at the time of rapakivi magmatism and the intrusion of diabase dikes; (4) NE-SW transtension between 1.60 and 1.30 Ga, which also formed the NW-SE-trending Satakunta graben located some 20 km north of Olkiluoto, and during which greisen-type veins formed; (5) NE-SW compression postdating both the 1.56 Ga rapakivi granites and the 1.27 Ga olivine diabases of the region; (6) E-W transpression during the early stages of the Mesoproterozoic Sveconorwegian orogeny, which predated (7) almost coaxial E-W extension attributed to the collapse of the Sveconorwegian orogeny. The kinematic analysis of fracture systems in crystalline bedrock also provides a robust framework for evaluating fluid-rock interaction in the brittle regime; this is essential in assessing bedrock integrity for numerous geo-engineering applications, including groundwater management, transient or permanent CO2 storage, and site investigations for permanent waste disposal.
Investigations at Olkiluoto revealed that fluid flow along fractures is coupled with low normal tractions due to in-situ stresses and thus deviates from the generally accepted critically stressed fracture concept, in which fluid flow is concentrated on fractures on the verge of failure. The difference is linked to the shallow conditions at Olkiluoto: owing to the low differential stresses inherent at shallow depths, fracture activation and fluid flow are controlled by dilation due to low normal tractions. At deeper settings, however, fluid flow is controlled by fracture criticality caused by large differential stress, which drives shear deformation instead of dilation.
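The slip tendency analysis mentioned above reduces to a short computation over the in-situ stress tensor: a plane is reactivation-prone when the ratio of shear to normal traction approaches the friction coefficient. A minimal sketch, with purely illustrative stress values (not Namaqualand or Olkiluoto data):

```python
import numpy as np

def slip_tendency(stress, normal):
    """Slip tendency Ts = tau / sigma_n of a plane under a stress tensor.

    stress : 3x3 symmetric in-situ stress tensor (compression positive)
    normal : unit normal vector of the fracture plane
    """
    t = stress @ normal                             # traction vector
    sigma_n = normal @ t                            # normal traction
    tau = np.sqrt(max(t @ t - sigma_n ** 2, 0.0))   # shear traction
    return tau / sigma_n

# Hypothetical stress state (MPa) with principal axes along x, y, z.
stress = np.diag([12.0, 8.0, 5.0])
# A plane whose normal lies ~60 degrees from the maximum-stress axis is
# close to optimally oriented for reactivation under this tensor.
theta = np.radians(60.0)
normal = np.array([np.cos(theta), 0.0, np.sin(theta)])
print(f"Ts = {slip_tendency(stress, normal):.2f}")  # ~0.45; friction is ~0.6
```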
Abstract:
Population aging has become a critical global issue. Data and information concerning elderly citizens are increasing and are not well organized, and these unstructured data and information cause problems for decision makers. Since we live in a digital world, Information Technology is considered a tool to solve such problems. Data, information and knowledge are crucial components for success in an IT service system. It is therefore necessary to study how to organize and govern data from various sources related to elderly citizens. The research was conducted because there is no internationally accepted holistic framework for the governance of data. The scope is limited to the healthcare domain; however, the results can be applied to other areas. The research builds on the ongoing work of Dahlberg and Nokkala (2015) as its theoretical basis, which classifies existing data sources and their characteristics with a focus on managerial perspectives. Existing frameworks at international- and national-level organizations were then studied to determine which frameworks are currently in use and useful for compiling data on elderly citizens. The international organizations in this research were selected based on their reputation and the reliability of the obtainable information. The countries selected at the national level provide two different points of view: Australia is a forerunner in IT governance, while Thailand is a country whose current situation the author knows well. The discussion of frameworks at the international and national levels illustrates the main characteristics of each framework. The international level gives precedence to interoperability in exchanging data and information between different parties, whereas the national level shows that frameworks must be acknowledged and used throughout the country in order to be effective. After studying both levels, the thesis presents summary tables assessing the fitness of the framework proposed by Dahlberg and Nokkala, i.e. whether it helps to consolidate data from various sources with different formats, hierarchies, structures, velocities and other data storage attributes. In addition, suggestions and recommendations are proposed for future research.
Abstract:
Solid state nuclear magnetic resonance (NMR) spectroscopy is a powerful technique for studying structural and dynamical properties of disordered and partially ordered materials, such as glasses, polymers, liquid crystals, and biological materials. In particular, two-dimensional (2D) NMR methods such as 13C-13C correlation spectroscopy under magic-angle-spinning (MAS) conditions have been used to measure structural constraints on the secondary structure of proteins and polypeptides. Amyloid fibrils implicated in a broad class of diseases such as Alzheimer's are known to contain a particular repeating structural motif, called a β-sheet. However, the details of such structures are poorly understood, primarily because the structural constraints extracted from the 2D NMR data in the form of the so-called Ramachandran (backbone torsion) angle distributions, g(φ,ψ), are strongly model-dependent. Inverse theory methods are used to extract Ramachandran angle distributions from a set of 2D MAS and constant-time double-quantum-filtered dipolar recoupling (CTDQFD) data. This is a vastly underdetermined problem, and the stability of the inverse mapping is problematic. Tikhonov regularization is a well-known method of improving the stability of the inverse; in this work it is extended to use a new regularization functional based on the Laplacian rather than on the norm of the function itself. In this way, one makes use of the inherently two-dimensional nature of the underlying Ramachandran maps. In addition, a modification of the existing numerical procedure is performed, as appropriate for an underdetermined inverse problem. Stability of the algorithm with respect to the signal-to-noise (S/N) ratio is examined using a simulated data set. The results show excellent convergence to the true angle distribution function g(φ,ψ) for S/N ratios above 100.
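A minimal numerical sketch of the Laplacian-based Tikhonov idea (the toy forward matrix and grid sizes are assumptions, not actual NMR kernels): penalizing ||L g||² with a discrete 2D Laplacian L favours smooth two-dimensional maps, whereas the classical norm penalty ||g||² merely favours small ones.

```python
import numpy as np

def tikhonov_laplacian(A, b, lam, shape):
    """Solve min ||A g - b||^2 + lam ||L g||^2 on a 2-D grid (g flattened)."""
    ny, nx = shape
    n = ny * nx
    # Discrete 5-point Laplacian with Dirichlet-style boundaries.
    L = -4.0 * np.eye(n)
    for i in range(n):
        r, c = divmod(i, nx)
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= rr < ny and 0 <= cc < nx:
                L[i, rr * nx + cc] = 1.0
    # Normal equations of the regularized least-squares problem.
    g = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
    return g.reshape(shape)

# Toy underdetermined problem: 30 measurements of a 16x16 distribution.
rng = np.random.default_rng(4)
A = rng.normal(size=(30, 256))
b = rng.normal(size=30)
g = tikhonov_laplacian(A, b, lam=1.0, shape=(16, 16))
print(g.shape)  # (16, 16): a smooth map consistent with the 30 data points
```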
Abstract:
Micromorphology is used to analyze a wide range of sediments, yet many microstructures have not been analyzed to date. Rotation structures are the least understood of these microstructures, and their origin and development form the basis of this thesis. The direction of rotational movement helps in understanding the formative deformational and depositional processes. Twenty-eight rotation structures were analyzed through two methods of data extraction: (a) the angle of grain rotation, measured with Nikon NIS software, and (b) visual analyses of grain orientation, neighbouring grain stacks, lineations, and obstructions. The data indicate that antithetic rotation is promoted by lubrication: counter-clockwise rotation accounted for 79% of the rotation structures, while 21% showed clockwise rotation. Rotation structures are formed by velocity gradients in the sediment. Subglacial sediments are sheared by the stresses of the overlying ice mass, and the grains in the sediment are differentially deformed. The research suggests that rotation structures form under ductile conditions of low shear, low water content, and grain numbers inducing grain-to-grain interaction.
Abstract:
The Hox gene family encodes transcription factors known for their essential contribution to establishing the body architecture throughout the animal kingdom. During vertebrate evolution, Hox genes were redeployed to generate a variety of new tissues and organs, and this diversification often occurred through changes in the transcriptional control of Hox genes. In mammals, the function of Hoxa13 is not restricted to the embryo proper but is also essential for the development of the fetal vasculature within the placental labyrinth, suggesting that its function in this structure may have accompanied the emergence of placental species. In Chapter 2, we highlight the recruitment of two other Hoxa genes, Hoxa10 and Hoxa11, to the extra-embryonic compartment. We show that the expression of Hoxa10, Hoxa11 and Hoxa13 is required in the allantois, the precursor of the umbilical cord and of the fetal vascular system within the placental labyrinth. Interestingly, we found that the expression of the Hoxa10-13 genes in the allantois is not restricted to placental mammals but is also present in a non-placental vertebrate, indicating that the recruitment of these genes to the allantois most likely preceded the emergence of placental species. We generated genetic rearrangements and used transgenic assays to study the mechanisms regulating Hoxa gene expression in the allantois. We identified a 50 kb intergenic fragment capable of driving reporter gene expression in the allantois. However, we found that the regulatory mechanism controlling Hoxa gene expression in the extra-embryonic compartment is highly complex and relies on more than a single cis-regulatory element. In Chapter 3, we used genetic fate mapping to assess the global contribution of Hoxa13-expressing cells to the various embryonic structures. In particular, we examined in greater detail the fate-mapping analysis of Hoxa13 in the developing forelimbs. We determined that, in the limb skeleton, all skeletal elements of the autopod (hand), except for a few cells in the most proximal carpal elements, derive from Hoxa13-expressing cells. In contrast, we found that, within the muscle compartment, Hoxa13-expressing cells and their descendants (Hoxa13lin+) extend to more proximal domains of the limb, where they contribute to most of the muscle masses of the forearm and, in part, to the triceps. Interestingly, we found that Hoxa13-expressing cells and their descendants are not uniformly distributed among the different muscles. Within a single muscle mass, fibers with different Hoxa13lin+ contributions can be identified, and fibers with similar contributions are often grouped together. This raises the possibility that Hoxa13 is involved in establishing specific characteristics of muscle groups, or in establishing nerve-muscle connections. Taken together, the data presented here provide a better understanding of the role of Hoxa13 in the embryonic and extra-embryonic compartments. Moreover, our results will be of primary importance in supporting future studies aimed at explaining the transcriptional mechanisms underlying the regulation of the Hoxa genes in extra-embryonic tissues.
Abstract:
The use of a data assimilation method, combined with an anelastic convection model, allows us to reconstruct the physical structures of part of the convection zone located beneath an active solar region. The results obtained inform us about the emergence processes of magnetic flux tubes through the convection zone, as well as about the formation mechanisms of active regions. The solar data used come from the MDI instrument aboard the SOHO space observatory and mainly concern the active region AR9077 during the "Bastille Day" event of July 14, 2000. This event led to a solar flare, followed by a major coronal mass ejection. The assimilated data (magnetograms, temperature maps and vertical velocity maps) cover an area of 175 megameters on a side, acquired at the photospheric level. The data assimilation method employed is "back and forth nudging", a Newtonian relaxation method similar to the "quasi-linear inverse 3D" method. Its originality is that it does not require computing the adjoint equations of the physical model, and the simplicity of the method is thus a substantial numerical advantage. Our study demonstrates, through a simple test, the applicability of this method to a convection model used within the anelastic approximation. We thus show the efficiency of this method and reveal its potential for the assimilation of solar data. To ensure the mathematical uniqueness of the obtained solution, we impose a regularization over the entire simulated domain. Finally, we show that the interest of the method is not limited to the reconstruction of convective structures: it also allows the optimal interpolation of photospheric magnetograms, and even the prediction of their temporal evolution.
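To make the nudging (Newtonian relaxation) idea concrete, here is a minimal Python sketch on toy dynamics; nothing here is the thesis's anelastic convection model, and all names and parameter values are illustrative. The back-and-forth variant would add a backward-in-time pass with the relaxation term kept, so that the initial state is corrected as well.

```python
import numpy as np

def nudge_forward(x0, obs, dt, k):
    """Forward pass of a nudging (Newtonian relaxation) assimilation.

    The model dx/dt = f(x) gains a relaxation term pulling the state
    toward the observations: dx/dt = f(x) + k * (obs - x).
    """
    def f(x):                          # toy stand-in dynamics: a rotation
        return np.array([-x[1], x[0]])
    xs = [np.asarray(x0, dtype=float)]
    for y in obs:
        x = xs[-1]
        xs.append(x + dt * (f(x) + k * (y - x)))   # explicit Euler step
    return np.array(xs)

# Synthetic truth and noisy observations of a circular trajectory.
dt, n = 0.05, 200
t = dt * np.arange(n)
truth = np.c_[np.cos(t), np.sin(t)]
rng = np.random.default_rng(5)
obs = truth + 0.2 * rng.normal(size=truth.shape)

# Start from a wrong initial state; nudging pulls the run onto the truth.
run = nudge_forward([2.0, 0.0], obs, dt, k=2.0)
print("final state error:", np.linalg.norm(run[-1] - truth[-1]).round(3))
```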