934 results for Fractal time-space
Abstract:
Although it has been proposed that the retinal vasculature has a fractal structure, no standardization of the segmentation method or of the method for calculating fractal dimensions has been established. This study aimed to determine whether the estimation of the fractal dimensions of the retinal vasculature depends on the vascular segmentation methods and on the dimension calculation methods. Methods: Ten retinal photographs were segmented to extract their vascular trees by four computational methods ("multithreshold", "scale-space", "pixel classification" and "ridge based detection"). Their "information", "mass-radius" and "box-counting" fractal dimensions were then calculated and compared with the dimensions of the same vascular trees obtained by manual segmentation (the gold standard). Results: The mean fractal dimensions varied across the groups of different segmentation methods: from 1.39 to 1.47 for the box-counting dimension, from 1.47 to 1.52 for the information dimension, and from 1.48 to 1.57 for the mass-radius dimension. The use of different computational vascular segmentation methods, as well as of different dimension calculation methods, introduced statistically significant differences in the fractal dimension values of the vascular trees. Conclusion: The estimation of the fractal dimensions of the retinal vasculature depended on both the vascular segmentation methods and the dimension calculation methods used.
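As a rough illustration of one of the dimension estimators mentioned above, the sketch below computes a box-counting dimension for a binary vessel mask. It is a minimal Python example on a synthetic input, not the segmentation or dimension-calculation pipeline used in the study; the function name and the box sizes are arbitrary choices.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of a binary image.

    mask: 2-D boolean array, True where the vessel tree is present.
    Returns the slope of log N(s) versus log(1/s).
    """
    counts = []
    for s in box_sizes:
        # Trim the image so it tiles exactly into s x s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        trimmed = mask[:h, :w]
        # A box is "occupied" if it contains at least one vessel pixel.
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # Linear fit of log N(s) against log(1/s); the slope is the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Purely illustrative example with a synthetic diagonal "vessel":
demo = np.zeros((256, 256), dtype=bool)
np.fill_diagonal(demo, True)
print(box_counting_dimension(demo))   # close to 1.0 for a straight line
```

The information and mass-radius dimensions are estimated analogously, replacing the box-occupancy count with box probabilities or with the mass enclosed within growing radii.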
Abstract:
This work presents a recent image compression method based on the theory of Iterated Function Systems (IFS), known as Fractal Compression. A continuous model for fractal compression over the complete metric space Lp is described, in which a contractive fractal transform operator associated with a local IFS with maps is defined. Before that, the theory of IFSs on the Hausdorff space (or fractal space), the theory of Local IFSs - a generalization of IFSs - and of IFSs on the Lp space are introduced. Once the theoretical foundation for the method has been provided, the fractal compression algorithm is presented in detail. Some partitioning strategies needed to find the IFS with maps are also described, as well as some strategies that attempt to overcome the main obstacle of fractal compression: the encoding complexity. This dissertation is essentially theoretical and descriptive in character, covering the fractal compression method and some techniques, already implemented, for improving its effectiveness.
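To make the range-domain matching and the encoding-complexity issue concrete, here is a toy one-dimensional fractal coder in Python. It is only a sketch under strong simplifications (fixed block sizes, exhaustive search, pairwise averaging as the contraction), not the Lp-space formulation or the partitioning strategies discussed in the dissertation; all names are illustrative.

```python
import numpy as np

def encode_1d(signal, range_size=4):
    """Toy fractal encoder: match each range block to the best contracted
    domain block under an affine map x -> s*x + o (least squares fit).
    The exhaustive domain search is the encoding-complexity bottleneck."""
    domain_size = 2 * range_size
    # Pool of domain blocks, contracted to range size by pairwise averaging.
    domains = []
    for d in range(0, len(signal) - domain_size + 1, range_size):
        block = signal[d:d + domain_size]
        domains.append(block.reshape(-1, 2).mean(axis=1))
    domains = np.array(domains)
    codes = []
    for r in range(0, len(signal), range_size):
        rng_block = signal[r:r + range_size]
        best = None
        for i, dom in enumerate(domains):
            A = np.column_stack([dom, np.ones_like(dom)])
            s, o = np.linalg.lstsq(A, rng_block, rcond=None)[0]
            s = float(np.clip(s, -0.9, 0.9))          # enforce contractivity
            err = np.sum((s * dom + o - rng_block) ** 2)
            if best is None or err < best[0]:
                best = (err, i, s, o)
        codes.append(best[1:])
    return codes

def decode_1d(codes, length, range_size=4, iterations=12):
    """Decode by iterating the fractal transform from an arbitrary signal;
    contractivity makes the iteration converge to the stored attractor."""
    x = np.zeros(length)
    for _ in range(iterations):
        y = np.empty_like(x)
        for k, (i, s, o) in enumerate(codes):
            dom = x[i * range_size:(i + 2) * range_size]
            y[k * range_size:(k + 1) * range_size] = s * dom.reshape(-1, 2).mean(axis=1) + o
        x = y
    return x

# Hypothetical demo: encode and reconstruct a smooth signal.
sig = np.sin(np.linspace(0, 6, 64))
approx = decode_1d(encode_1d(sig), len(sig))
print(np.max(np.abs(approx - sig)))   # small reconstruction error
```

Decoding iterates the fractal transform from an arbitrary starting signal; contractivity (here enforced by clipping the scale factor) guarantees convergence, and the exhaustive domain search in `encode_1d` is precisely the cost that the partitioning and speed-up strategies aim to reduce.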
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can provide information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied to a case study in an outcrop area of the Guarani Aquifer System (GAS) located in southeastern Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, such as the GAS.
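As a schematic of the two ingredients described above, the following Python sketch fits a simple AR(1) model to each well's water-level record and then maps a forecast onto an unmonitored location with inverse-distance weighting. It is a hypothetical, simplified stand-in: the study uses richer time-series models and geostatistical (kriging-based) interpolation, and the well coordinates and series here are synthetic.

```python
import numpy as np

def ar1_forecast(levels, steps=12):
    """Fit h_t = c + phi * h_{t-1} + e_t by least squares and extrapolate
    `steps` periods ahead (a stand-in for fuller time-series models)."""
    y, x = levels[1:], levels[:-1]
    A = np.column_stack([np.ones_like(x), x])
    c, phi = np.linalg.lstsq(A, y, rcond=None)[0]
    forecast, h = [], levels[-1]
    for _ in range(steps):
        h = c + phi * h
        forecast.append(h)
    return np.array(forecast)

def idw_interpolate(xy_wells, values, xy_target, power=2.0):
    """Inverse-distance weighting as a simplified surrogate for kriging."""
    d = np.linalg.norm(xy_wells - xy_target, axis=1)
    if np.any(d == 0):
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

# Hypothetical example: three monitoring wells, forecast each record,
# then map the 12-step-ahead level onto an unmonitored location.
rng = np.random.default_rng(0)
wells_xy = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 4.0]])
series = [20 + np.cumsum(rng.normal(0, 0.1, 120)) for _ in wells_xy]
ahead = np.array([ar1_forecast(s)[-1] for s in series])
print(idw_interpolate(wells_xy, ahead, np.array([2.0, 2.0])))
```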
Abstract:
The educational system encourages a positivist, ordered, unilateral and universal history through the incorporation of the chronological division of history into four periods. But would it be possible for students to study their own present? My paper sets out to show that, as Saab put it, the present is "the point of departure and arrival of history teaching", determining the journeys back and forth to the past. The usual way of approaching history teaching is comfortable: there are no questions and there are no discussions. This vision of history, interpreted by the white, Western, heterosexual man, belongs to the Enlightenment project of modernity. Consequently, this history ignores the fact that we live in a postmodern society of suspicion, of "weak thought". There is also the problem of audiovisual pollution and the way in which teachers and students face it every day. It is therefore necessary to rethink the teaching of the four-period scheme of history. Today, the media and new technologies are changing the life of humanity. It is essential that students know their present history and the possible historical scenarios of the future. I believe it is necessary to adopt a didactics of present history and, consequently, we must draw on media and information literacy. We need a teacher education which assumes, as Gadamer said, that "the past and the present are in permanent negotiation": a teacher education that makes it possible to understand and think about future history, or future histories. In my view, if students understand the complexity of their world and its multiple visions, they will be more tolerant and empathetic.
Abstract:
The purpose of this research is to examine the role of the mining company office in the management of the copper industry in Michigan’s Keweenaw Peninsula between 1901 and 1946. Two of the largest and most influential companies were examined: the Calumet & Hecla Mining Company and the Quincy Mining Company. Both companies operated for more than forty years under general managers who were arguably the most influential people in the management of each company. James MacNaughton, general manager at Calumet and Hecla, served from 1901 through 1941; Charles Lawton, general manager at Quincy Mining Company, served from 1905 through 1946. Both managers were college-educated engineers and adopted scientific management techniques to operate their respective companies. This research focused on two main goals. The first was to address the managerial changes in Michigan’s copper mining offices of the early twentieth century, including the work of MacNaughton and Lawton, along with analysis of the office structures themselves and how they changed over time. The second was to create a prototype virtual exhibit for use at the Quincy Mining Company office, allowing visitors to experience the office as an office worker would have in the early twentieth century. To meet both goals, this project used various research materials, including archival sources, oral histories, and material culture, to recreate the history of mining company management in the Copper Country.
Abstract:
Every space launch increases the overall amount of space debris, and satellites have limited awareness of nearby objects that might pose a collision hazard. Astrometric, radiometric, and thermal models for the study of space debris in low-Earth orbit have been developed. This modeling approach proposes analysis methods that provide increased Local Area Awareness for satellites in low-Earth and geostationary orbit. Local Area Awareness is defined as the ability to detect, characterize, and extract useful information regarding resident space objects as they move through the space environment surrounding a spacecraft. The study of space debris is of critical importance to all space-faring nations. Characterization efforts are proposed using long-wave infrared sensors for space-based observations of debris objects in low-Earth orbit. Long-wave infrared sensors are commercially available and do not require the debris to be solar-illuminated, because the received signal depends on the object's temperature. The characterization of debris objects through passive imaging techniques allows further studies into the origin, specifications, and future trajectory of debris objects. Conclusions are made regarding the aforementioned thermal analysis as a function of debris orbit, geometry, orientation with respect to time, and material properties. Development of a thermal model permits the characterization of debris objects based upon their received long-wave infrared signals. Information regarding the material type, size, and tumble rate of the observed debris objects is extracted. This investigation proposes the use of long-wave infrared radiometric models of typical debris to develop techniques for the detection and characterization of debris objects via signal analysis of unresolved imagery. Knowledge regarding the orbital type and semi-major axis of the observed debris object is extracted via astrometric analysis and may aid in constraining the admissible region for the initial orbit determination process. The resultant orbital information is then fused with the radiometric characterization analysis, enabling further characterization of the observed debris object. This fused analysis, yielding orbital, material, and thermal properties, significantly increases a satellite’s Local Area Awareness via an intimate understanding of the debris environment surrounding the spacecraft.
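The temperature dependence of the long-wave infrared signal mentioned above can be illustrated with Planck's law integrated over a nominal 8-14 µm band. This is a generic radiometry sketch in Python (unit emissivity, no sensor or geometry model), not the thermal model developed in the dissertation.

```python
import numpy as np

H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m / s
KB = 1.380649e-23     # Boltzmann constant, J / K

def lwir_band_radiance(temp_k, lam_lo=8e-6, lam_hi=14e-6, n=2000):
    """Blackbody radiance integrated over a nominal 8-14 micrometre band,
    in W m^-2 sr^-1 (unit emissivity assumed)."""
    lam = np.linspace(lam_lo, lam_hi, n)
    spectral = (2 * H * C ** 2 / lam ** 5) / np.expm1(H * C / (lam * KB * temp_k))
    # Trapezoidal integration over wavelength.
    return np.sum(0.5 * (spectral[1:] + spectral[:-1]) * np.diff(lam))

# A warmer debris surface radiates noticeably more in-band power:
for t_k in (200.0, 250.0, 300.0):
    print(t_k, lwir_band_radiance(t_k))
```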
Abstract:
This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and is still undergoing, like statistical analysis in general, a transformation toward high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or more recently developed m-approximability conditions which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases. We derive sharp conditions on »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series close to the non-stationary case the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
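For readers unfamiliar with the baseline method, the following Python sketch implements the classical univariate CUSUM change-in-the-mean statistic on which the Hilbert-space extensions build. It is a scalar illustration only (with a crude scale estimate instead of a long-run variance under dependence), not the projected, high-dimensional procedures of the thesis.

```python
import numpy as np

def cusum_change_point(x):
    """Classical CUSUM statistic for a single change in the mean:
    T_n = max_k |S_k - (k/n) S_n| / (sigma * sqrt(n)), with the argmax
    taken as the change-point estimate."""
    n = len(x)
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    cusum = np.abs(s - (k / n) * s[-1])
    # Crude scale estimate from first differences (assumes weak dependence).
    sigma = np.std(np.diff(x)) / np.sqrt(2)
    stat = cusum.max() / (sigma * np.sqrt(n))
    return stat, int(np.argmax(cusum)) + 1

# Hypothetical series with a mean shift halfway through:
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(0.8, 1, 200)])
print(cusum_change_point(x))   # large statistic, estimate near 200
```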
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting for the antenna's motion. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach presented by Romano and Woan is another way of handling these noises; it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produces two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produces data that are free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produce the same outcome, that is, data that are free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10×10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. Results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and therefore analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking scheme, the arm lengths and the noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which appear in the covariance matrix; from our toy model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation methods that take advantage of this structure. Separating the two sets of data for the analysis was not necessary, because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices are block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and the non-stationarity do not show up, because of the summation in the Fourier transform.
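A drastically simplified, delay-free analogue of the Romano-Woan idea can be sketched in a few lines of Python: three data streams share one very large common noise, and the eigendecomposition of their sample covariance separates combinations dominated by that noise from combinations free of it. The real problem involves time-shifted noises and the full data covariance matrix, so this is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
laser = rng.normal(0, 100.0, n)           # one large common "laser-like" noise
small = rng.normal(0, 1.0, (3, n))        # small independent readout noises
data = np.vstack([laser + small[i] for i in range(3)])   # three raw streams

cov = np.cov(data)                        # 3x3 sample covariance
eigval, eigvec = np.linalg.eigh(cov)      # eigenvalues in ascending order

# The largest eigenvalue carries the common noise; the two small ones span
# combinations in which it cancels (the analogue of laser-noise-free TDI).
clean = eigvec[:, :2].T @ data
print(eigval)
print(clean.std(axis=1))                  # of order the small noises, not 100
```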
Abstract:
This thesis presents research into the space use of a specialist reedbed passerine, the Bearded Reedling (or Bearded Tit), Panurus biarmicus, with a view to informing the conservation of this species and of reedbeds as a whole. How a species uses space, and how space use changes between individuals or over time, can influence the ability to forage and hunt effectively, breeding success, susceptibility to predation, genetic health, disease spread, robustness against environmental change and, ultimately, colonisation or extinction. Thus, understanding the space use of animals can provide critical insight into ecological systems. Birds offer interesting models for studying animal space use, as, by being intrinsically mobile, many bird species can occupy multiple spatial scales. As a consequence of being completely dependent on patchy and ephemeral reedbed habitats, the Bearded Reedling has a clustered, inhomogeneous distribution throughout its range. This drives the existence of distinct spatial scales upon which space use studies should be characterised. Distribution and movement within a single reedbed can be considered local-scale, while spatial processes between reedbeds can be considered wide-scale. Temporal processes may act upon both of these scales. For example, changing interactions with predators may influence nest positioning at a local scale, while seasonal changes in resource requirements might drive processes such as migration at a wide scale. The Bearded Reedling has a wide temperate breeding range, extending over much of Eurasia. On the IUCN Red List it is listed as Least Concern, with an estimated European population of 240,000-480,000 breeding pairs. Despite its relatively favourable conservation status, its dependence on reedbed habitats drives a fragmented distribution, with populations being concentrated in small, isolated stands. Over the last century reedbed wetlands have suffered rapid declines caused by drainage schemes undertaken to improve land for development or agriculture. Additionally, many remaining reed stands are subject to extensive commercial management to produce thatch or biofuel. Conversely, in other areas, management is driven by conservation motives which recognise the present threats to reedbeds and aim to encourage the diversity of species associated with these habitats. As the Bearded Reedling is fundamentally linked to the quality and structure of a reed stand, understanding the space use of this species will offer information for the direct conservation of this specialist species and for the effects of reedbed management as a whole. This thesis first presents studies of space use at a local scale. All local-scale research was conducted at the Tay Reedbeds in eastern Scotland. Mist-netting and bird-ringing data are used within capture-recapture models, which include an explicit spatial component, to gain insight into the abundance of the Bearded Reedling on the Tay. This abundance estimation approach suggests that the Tay Reedbeds are a stronghold for this species in the British Isles and that, as a high-latitude site, the Tay may be important for range expansion. A combination of transect surveys and radio-tracking data is then used to establish the local-scale space use of this species during the breeding and autumnal seasons. These data are related to changes in the structure of reed caused by local management in the form of mosaic winter reed cutting.
Results suggest that birds exploit young and cut patches of reed as foraging resources when they are available, and that old, unmanaged reed is critical for nesting and winter foraging. Further local-scale studies concern the spatial patterns in the nesting habits of this species. Mosaic reed cutting creates clear edges in a reedbed. Artificial nests placed in the Tay Reedbeds demonstrate increased nest predation rates closer to the edges of cut patches. Additionally, these high predation rates decline as the cut reed re-grows, suggesting that reed cutting may increase the accessibility of the stand to predators. As Bearded Reedling nests are uncommon and difficult to locate, the timing, site selection and structure of a sample of real nests from the Tay is then detailed. These demonstrate an early and relatively rigid breeding onset in this species, the importance of dense, compacted reeds as nesting sites, and a degree of flexibility in nest structure. Conservation efforts will also benefit from studies of wide-scale spatial processes. These may be important when establishing how colonisation events occur and when predicting the effects of climatic change. The Bearded Reedling has traditionally been considered a resident species which only occasionally undertakes wide-scale, between-reedbed movements. Indeed, the ecology of this species suggests strict year-round local residency in reedbeds, with distinct seasonal changes in diet allowing occupation of these habitats year round. The European ringing recoveries of this species since the 1970s are investigated to better characterise the wider movements of this specialist resident. These suggest residency in southern populations, but higher instances of movement than expected in more northerly regions. In these regions wide-scale movement patterns resemble those of regular partial migrants. An understanding of local and wide-scale spatial processes can offer a strong foundation on which to build conservation strategies. This thesis aims to use studies of space use to provide this foundation for the Bearded Reedling and to offer further insight into the ecology of reedbed habitats as a whole. The thesis concludes by proposing an effective strategy for the conservation management of reedbeds that will especially benefit the Bearded Reedling.
Abstract:
This dissertation describes the new compositional system introduced by Scriabin in 1909-1910, focusing on Feuillet d’Album op. 58, Poème op. 59, nº1, Prélude op. 59, nº2 and Promethée op. 60. Based upon exhaustive pitch and formal analysis, the present study (a) claims the absence of non-functional pitches in all analysed works, (b) shows that transpositional procedures have structural consequences for the “basic chord”, and (c) for the first time advances an explanation of the intrinsic relation between the sonata form and the slow Luce line in Promethée op. 60; ABSTRACT: Under the title “Alexander Scriabin: the definition of a new sound space in the crisis of Tonality”, this thesis describes the new compositional system introduced by Scriabin in 1909-1910, taking as its starting point the study of Feuillet d’Album op. 58, Poème op. 59, nº1, Prélude op. 59, nº2 and Promethée op. 60. Based on an exhaustive analysis of pitch and form, the study (a) concludes that non-functional pitches are absent from all the analysed works, (b) shows that transpositional procedures have structural consequences for the “basic chord”, and (c) for the first time explains the formal structure of Promethée op. 60 through the intrinsic relation between its sonata form and the slow Luce line.
Abstract:
In this dissertation we studied time series that represent the complex dynamics of behaviour, with special attention to techniques from nonlinear dynamics. These techniques provide a number of quantitative indices that describe the dynamical properties of the system, and in recent years they have been used intensively in practical applications in Psychology. We studied some basic concepts of nonlinear dynamics, the characteristics of chaotic systems and some quantities that characterize dynamical systems, including the fractal dimension, which indicates the complexity of the information contained in the time series; the Lyapunov exponents, which indicate the rate at which arbitrarily close points in the phase-space representation of the dynamical system diverge over time; and the approximate entropy, which measures the degree of unpredictability of a time series. This information can then be used to understand, and possibly predict, behaviour. ABSTRACT: In this thesis we studied the time series that represent complex dynamic behavior. We focused on techniques of nonlinear dynamics. These techniques provide a number of quantitative indices used to describe the dynamic properties of the system. These indices have been extensively used in recent years in practical applications in psychology. We studied some basic concepts of nonlinear dynamics, the characteristics of chaotic systems and some quantities that characterize dynamic systems, including the fractal dimension, indicating the complexity of the information in the series, the Lyapunov exponents, which indicate the rate at which arbitrarily close points in the phase-space representation of a dynamical system diverge over time, or the approximate entropy, which measures the degree of unpredictability of a series. This information can then be used to understand and possibly predict the behavior.
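As an example of one of the indices discussed, the sketch below computes the approximate entropy of a series in Python; the embedding length and tolerance are conventional illustrative choices (m = 2, r = 0.2 times the standard deviation), not values prescribed by the dissertation.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r): the average log-likelihood that runs
    of length m that match within tolerance r still match when extended to
    length m + 1.  Lower values indicate a more regular (predictable) series."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(m_len):
        n = len(x) - m_len + 1
        emb = np.array([x[i:i + m_len] for i in range(n)])        # embedding
        # Chebyshev distance between all pairs of templates.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        counts = np.mean(dist <= r, axis=1)
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

# A periodic signal is more regular (lower ApEn) than white noise:
t = np.arange(500)
print(approximate_entropy(np.sin(0.2 * t)))
print(approximate_entropy(np.random.default_rng(3).normal(size=500)))
```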
Abstract:
Wine aroma is an important characteristic and may be related to certain specific parameters, such as the raw material and the production process. The complexity of Merlot wine aroma was considered suitable for comprehensive two-dimensional gas chromatography (GCxGC), as this technique offers superior performance when compared to one-dimensional gas chromatography (1D-GC). The profile of volatile compounds of Merlot wine was, for the first time, qualitatively analyzed by HS-SPME-GCxGC with a time-of-flight mass spectrometric detector (TOFMS), resulting in 179 compounds tentatively identified by comparison of experimental GCxGC retention indices and mass spectra with literature 1D-GC data and 155 compounds tentatively identified only by mass spectra comparison. A set of GCxGC experimental retention indices was also presented, for the first time, for a specific inverse set of columns. Esters were present in the highest number (94), followed by alcohols (80), ketones (29), acids (29), aldehydes (23), terpenes (23), lactones (16), furans (14), sulfur compounds (9), phenols (7), pyrroles (5), C13-norisoprenoids (3), and pyrans (2). The GCxGC/TOFMS parameters were optimized and the optimal conditions were: a polar (polyethylene glycol)/medium-polar (50% phenyl 50% dimethyl arylene siloxane) column set, an oven temperature offset of 10 ºC, a modulation period of 7 s, and a hot pulse duration of 1.4 s. Co-elutions involved up to 138 compounds in 1D, and some of them were resolved in 2D. Among the co-eluted compounds, thirty-three volatiles co-eluted in both 1D and 2D, and their tentative identification was possible only through spectral deconvolution. Some compounds that might contribute importantly to aroma notes were included in these superimposed peaks. A structurally organized distribution of compounds in the 2D space was observed for esters, aldehydes and ketones, alcohols, thiols, lactones and acids, and also inside subgroups, as occurred with esters and alcohols. The Fisher ratio was useful for establishing the analytes responsible for the main differences between Merlot and non-Merlot wines. Differentiation between Merlot wines and wines of other grape varieties was mainly perceived through the following components: ethyl dodecanoate, 1-hexanol, ethyl nonanoate, ethyl hexanoate, ethyl decanoate, dehydro-2-methyl-3(2H)thiophenone, 3-methylbutanoic acid, ethyl tetradecanoate, methyl octanoate, 1,4-butanediol, and 6-methyloctan-1-ol.
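The Fisher ratio used for class comparison can be illustrated for a single analyte as the between-class variance of the group means divided by the pooled within-class variance; the peak areas in the Python sketch below are hypothetical, purely to show the computation.

```python
import numpy as np

def fisher_ratio(class_a, class_b):
    """Univariate Fisher ratio for one analyte: between-class variance of
    the group means over the pooled within-class variance.  Larger values
    flag analytes that better separate the two groups."""
    a, b = np.asarray(class_a, float), np.asarray(class_b, float)
    grand = np.concatenate([a, b]).mean()
    k = 2                                            # two classes
    between = (len(a) * (a.mean() - grand) ** 2 +
               len(b) * (b.mean() - grand) ** 2) / (k - 1)
    within = ((len(a) - 1) * a.var(ddof=1) +
              (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - k)
    return between / within

# Hypothetical normalised peak areas of one ester in the two wine groups:
merlot = [1.8, 2.1, 2.0, 2.3, 1.9]
others = [1.1, 1.0, 1.3, 0.9, 1.2]
print(fisher_ratio(merlot, others))   # large value -> good class separator
```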
Abstract:
The surface of the Earth is subject to vertical deformations caused by geophysical and geological processes, which can be monitored by Global Positioning System (GPS) observations. The purpose of this work is to investigate GPS height time series to identify interannual signals affecting the Earth’s surface over the European and Mediterranean area during the period 2001-2019. Thirty-six homogeneously distributed GPS stations were selected from the online dataset made available by the Nevada Geodetic Laboratory (NGL) on the basis of the length and quality of their data series. Principal Component Analysis (PCA) is the technique applied to extract the main patterns of the spatial and temporal variability of the GPS Up coordinate. The time series were studied by means of a frequency analysis using a periodogram and the real-valued Morlet wavelet. The periodogram is used to identify the dominant frequencies and the spectral density of the investigated signals; the wavelet is applied to identify the signals in the time domain and the relevant periodicities. This study has identified, over the European and Mediterranean area, the presence of interannual non-linear signals with a period of 2 to 4 years, possibly related to atmospheric and hydrological loading displacements and to climate phenomena such as the El Niño Southern Oscillation (ENSO). A clear signal with a period of about six years is present in the vertical component of the GPS time series, likely explainable by the gravitational coupling between the Earth’s mantle and the inner core. Moreover, signals with periods on the order of 8-9 years, which might be explained by mantle-inner core gravity coupling and the cycle of the lunar perigee, and a signal of 18.6 years, likely associated with the lunar nodal cycle, were identified through the wavelet spectrum. However, these last two signals need further confirmation, because the present length of the GPS time series is still short compared to the periods involved.
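The PCA-plus-periodogram workflow described above can be sketched as follows in Python, using a synthetic station matrix in place of the NGL data: the leading principal component is extracted by SVD and its dominant period is read off a periodogram. The station count, noise level and the injected 6-year signal are illustrative assumptions, not the study's data.

```python
import numpy as np

# Hypothetical matrix of detrended GPS Up coordinates (mm):
# rows = weekly epochs over 18 years, columns = 36 stations.
rng = np.random.default_rng(4)
t = np.arange(0, 18, 1 / 52.0)                        # time in years
common = 3.0 * np.sin(2 * np.pi * t / 6.0)            # ~6-yr interannual signal
ups = common[:, None] + rng.normal(0, 1.0, (t.size, 36))

ups -= ups.mean(axis=0)                                # center each station
u, s, vt = np.linalg.svd(ups, full_matrices=False)     # PCA via SVD
pc1 = u[:, 0] * s[0]                                   # leading temporal pattern

freq = np.fft.rfftfreq(t.size, d=1 / 52.0)             # cycles per year
power = np.abs(np.fft.rfft(pc1)) ** 2                  # periodogram of PC1
print(1.0 / freq[1:][np.argmax(power[1:])])            # dominant period, years
```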
Abstract:
In this paper, we explore the benefits of using social media in an online educational setting, with a particular focus on the use of Facebook and Twitter by participants in a Massive Open Online Course (MOOC) developed to enable educators to learn about the Carpe Diem learning design process. We define social media as digital social tools and environments located outside the provision of a formal university-provided Learning Management System. We use data collected via interviews and surveys with the MOOC participants, as well as social media postings made by the participants throughout the MOOC, to offer insights into how participants’ usage and perception of social media in their online learning experiences differed and why. We found that, although some participants benefitted from social media, crediting it, for example, with networking and knowledge-sharing opportunities, others objected to it or refused to engage with it, perceiving it as a waste of their time. We make recommendations for the use of social media for educational purposes within MOOCs and formal digital learning environments.
Abstract:
Thanks to the evolution of computing tools and digital infrastructure, artificial intelligence has advanced considerably in recent years, enabling ever new and more complex applications. The aim of this thesis project is to build a preliminary study model of an artificial intelligence known as a Convolutional Neural Network (CNN), to be employed in the field of radio science and planetary exploration. In particular, one of the main intended applications of the model is in geodesy studies carried out through the orbit determination of artificial satellites in their motion around celestial bodies. The accelerations caused by planetary gravitational fields perturb the orbits of artificial satellites; these variations are picked up by ground radio receivers as a Doppler shift of the signal frequency, from which detailed information on the gravity field and the internal structure of the celestial body under study can then be determined. To do this, the exact frequency of the incoming signal must be determined, and because of losses and disturbances along its path the signal always contains a noise component. The most common method for separating the information component from the noise and recovering the actual frequency is the Short-Time Fourier Transform (STFT). The goal of the proposed experimental activity is therefore to train a CNN to estimate the frequency of real noisy sinusoidal signals, in order to obtain a computationally fast and reliable model to support pre-processing operations for radio science missions.
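As a baseline for the task the CNN is trained on, the following Python sketch estimates the frequency of a noisy tone from the spectral peak of short-time Fourier transform frames. The sampling rate, tone frequency and window parameters are hypothetical; this is the conventional STFT approach the thesis aims to complement, not the CNN model itself.

```python
import numpy as np

def stft_peak_frequency(signal, fs, win=1024, hop=512):
    """Track the frequency of a noisy tone by taking the spectral peak of
    each Hann-windowed FFT frame (a simple STFT-based estimator)."""
    window = np.hanning(win)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    estimates = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win] * window
        spectrum = np.abs(np.fft.rfft(frame))
        estimates.append(freqs[np.argmax(spectrum)])
    return np.array(estimates)

# Hypothetical noisy Doppler-like tone at 1.2 kHz sampled at 10 kHz:
fs = 10_000
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(5)
x = np.sin(2 * np.pi * 1200.0 * t) + rng.normal(0, 1.0, t.size)
print(stft_peak_frequency(x, fs).mean())   # close to 1200 Hz
```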