923 results for Multi-source Data Fusion


Relevance:

100.00%

Publisher:

Abstract:

In coastal waters, physico-chemical and biological properties and constituents vary at different time scales. In the study area of this thesis, within the Archipelago Sea in the northern Baltic Sea, seasonal cycles of light and temperature set preconditions for intra-annual variations, but developments at other temporal scales occur as well. Weather-induced runoffs and currents may alter water properties over the short term, and the consequences over time of eutrophication and global changes are to a degree unpredictable. The dynamic characteristics of northern Baltic Sea waters are further diversified at the archipelago coasts. Water properties may differ in adjacent basins, which are separated by islands and underwater thresholds limiting water exchange, making the area not only a mosaic of islands but also one of water masses. Long-term monitoring and in situ observations provide an essential data reserve for coastal management and research. Since the seasonal amplitudes of water properties are so high, inter-annual comparisons of water-quality variables have to be based on observations sampled at the same time each year. In this thesis I compare areas by their temporal characteristics, using both inter-annual and seasonal data. After comparing spatial differences in seasonal cycles, I conclude that spatial comparisons and temporal generalizations have to be made with caution. In classifying areas by the state of their waters, the results may be biased even if the sampling is annually simultaneous, since the dynamics of water properties may vary according to the area. The most comprehensive view of the spatiotemporal dynamics of water properties would be achieved by means of comparisons with data consisting of multiple annual samples. For practical reasons, this cannot be achieved with conventional in situ sampling.
A holistic understanding of the spatiotemporal features of the water properties of the Archipelago Sea will have to be based on the application of multiple methods, complementing each other’s spatial and temporal coverage. The integration of multi-source observational data and time-series analysis may be methodologically challenging, but it will yield new information as to the spatiotemporal regime of the Archipelago Sea.

Abstract:

The aim of this dissertation was to examine the skills and knowledge that pre-service teachers and teachers have and need about working with multilingual and multicultural students from immigrant backgrounds. The specific goals were to identify pre-service teachers’ and practising teachers’ current knowledge and awareness of culturally and linguistically responsive teaching, identify a profile of their strengths and needs, and devise appropriate professional development support and ways to prepare teachers to become equitable culturally responsive practitioners. To investigate these issues, the dissertation reports on six original empirical studies within two groups of teachers: international pre-service teacher education students from over 25 different countries as well as pre-service and practising Finnish teachers. The international pre-service teacher sample consisted of n = 38 (study I) and n = 45 (studies II–IV) participants, and the pre-service and practising Finnish teacher sample encompassed n = 89 (study V) and n = 380 (study VI). The data used were multi-source, including both qualitative material (students’ written work from the course including journals, final reflections, pre- and post-definitions of key terms, as well as course evaluation and focus group transcripts) and quantitative material (multi-item questionnaires with open-ended options), and this triangulation of data enhanced the credibility of the findings. Cluster analytic procedures, multivariate analysis of variance (MANOVA), and qualitative analyses, mostly the Constant Comparative Approach, were used to understand pre-service teachers’ and practising teachers’ developing cultural understandings. The results revealed that the mainly white / mainstream teacher candidates in teacher education programmes bring limited background experiences, prior socialisation, and skills about diversity.
Taking a multicultural education course in which identity development was a focus positively influenced teacher candidates’ knowledge of and attitudes toward diversity. The results revealed the approaches and strategies that matter most in preparing teachers for culturally responsive teaching, including, but not limited to, small group activities and discussions, critical reflection, and field immersion. This suggests that some tools already exist to provide the support teachers need to teach a diverse pupil population successfully and to offer in-service training for those already practising the teaching profession. The results provide insight into aspects of teachers’ knowledge about both the linguistic and cultural needs of their students, as well as what constitutes a repertoire of approaches and strategies to assure students’ academic success. Teachers’ knowledge of diversity can be categorised into sound awareness, average awareness, and low awareness. Knowledge of diversity was important in teachers’ abilities to use students’ language and culture to enhance acquisition of academic content, work effectively with multilingual learners’ parents/guardians, learn about the cultural backgrounds of multilingual learners, link multilingual learners’ prior knowledge and experience to instruction, and modify classroom instruction for multilingual learners. These findings support the development of a competency-based model and can be used to frame the studies of pre-service teachers, as well as the professional development of practising teachers, in increasingly diverse contexts. The present set of studies takes on new significance in the current context of increasing waves of migration to Europe in general and Finland in particular.
They suggest that teacher education programmes can equip teachers with the necessary attitudes, skills, and knowledge to enable them to work effectively with students from different ethnic and language backgrounds as they enter the teaching profession. The findings also help to refine the tools and approaches for measuring the competencies of teachers teaching in mainstream classrooms and of candidates in preparation.

Abstract:

Coded OFDM is a transmission technique that is used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition. However, some subcarriers can suffer deep fading in multi-path channels, and the power allocated to a faded subcarrier is likely to be wasted. In this paper, we compute FER and BER bounds of a coded OFDM system, given as convex functions, for a given channel coder, interleaver and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimum power allocation that minimizes the FER or BER of a given coded OFDM system and channel response under a constant transmission power constraint is obtained.
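
A minimal numerical sketch of the idea, assuming an illustrative exponential error-bound surrogate exp(-g_i·p_i) and made-up sub-channel gains (the paper's actual FER/BER bounds are derived from the specific code, interleaver and channel response):

```python
# Convex per-subcarrier power allocation under a total-power constraint.
# The bound and gains below are illustrative assumptions, not the paper's.
import numpy as np
from scipy.optimize import minimize

gains = np.array([2.0, 1.0, 0.25, 0.05])   # sub-channel SNR gains (one deep fade)
P_total = 4.0                              # total transmission power budget

def ber_bound(p):
    # Convex surrogate for a union bound on the error probability.
    return np.sum(np.exp(-gains * p))

res = minimize(
    ber_bound,
    x0=np.full(4, P_total / 4),            # start from the uniform allocation
    method="SLSQP",
    bounds=[(0.0, None)] * 4,              # powers must be non-negative
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - P_total}],
)
p_opt = res.x
uniform = ber_bound(np.full(4, P_total / 4))
# The optimized allocation never does worse than the conventional uniform split;
# the deeply faded subcarrier typically receives little or no power.
```

In this toy setup the solver effectively abandons the worst sub-channel, which matches the intuition that power spent on a deeply faded subcarrier is wasted.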

Abstract:

Many algorithms have been developed to achieve motion segmentation for video surveillance. Their performance varies widely across the effectively unlimited range of operating conditions, yet it has been recognised that each algorithm individually has useful properties. Fusing the statistical results of these algorithms is investigated here, with robust motion segmentation in mind.
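
As a toy illustration of combining the outputs of several segmentation algorithms, here is a plain per-pixel majority vote over binary motion masks (only the concept, not the statistical fusion studied in the paper):

```python
# Per-pixel majority-vote fusion of binary motion masks from N algorithms.
import numpy as np

def fuse_masks(masks):
    """Strict majority vote across a list of binary motion masks (H, W)."""
    stack = np.stack(masks).astype(int)
    votes = stack.sum(axis=0)                    # how many algorithms say "motion"
    return (votes * 2 > len(masks)).astype(np.uint8)

# Three hypothetical algorithm outputs for the same frame
a = np.array([[1, 0], [1, 1]])
b = np.array([[1, 0], [0, 1]])
c = np.array([[0, 0], [1, 1]])
fused = fuse_masks([a, b, c])    # pixel is foreground where at least 2 of 3 agree
```

A single algorithm's spurious detection is suppressed unless a majority of the ensemble agrees, which is the robustness argument behind fusing complementary segmenters.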

Abstract:

This paper presents a new image data fusion scheme that combines median filtering with self-organizing feature map (SOFM) neural networks. The scheme consists of three steps: (1) pre-processing of the images, where weighted median filtering removes part of the noise components corrupting the image, (2) pixel clustering for each image using self-organizing feature map neural networks, and (3) fusion of the images obtained in Step (2), which suppresses the residual noise components and thus further improves the image quality. Simulations involving three image sensors (each of which has a different noise structure) confirm that this three-step combination is effective and markedly improves image quality.
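
The three steps can be sketched as follows. This is a hedged toy version using an ordinary median filter and a minimal hand-rolled 1-D SOM over pixel intensities, on a synthetic scene; it is not the paper's weighted-median/SOFM configuration:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)

def train_som_1d(values, n_nodes=4, epochs=15, lr=0.5):
    """Minimal 1-D SOM over pixel intensities (a toy stand-in for a full SOFM)."""
    w = np.linspace(values.min(), values.max(), n_nodes)
    for e in range(epochs):
        sigma = 1.0 * (1.0 - e / epochs) + 0.1        # shrinking neighbourhood
        for v in rng.permutation(values):
            bmu = int(np.argmin(np.abs(w - v)))       # best-matching unit
            h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2 * sigma**2))
            w += lr * (1.0 - e / epochs) * h * (v - w)
    return np.sort(w)

def cluster_image(img, w):
    """Step 2: replace every pixel by the weight of its best-matching node."""
    return w[np.argmin(np.abs(img[..., None] - w), axis=-1)]

# Three noisy "sensor" views of the same synthetic scene
scene = np.zeros((16, 16)); scene[4:12, 4:12] = 1.0
views = [scene + 0.2 * rng.standard_normal(scene.shape) for _ in range(3)]

filtered = [median_filter(v, size=3) for v in views]   # step 1: median filtering
w = train_som_1d(filtered[0].ravel())
clustered = [cluster_image(f, w) for f in filtered]    # step 2: SOM clustering
fused = np.median(np.stack(clustered), axis=0)         # step 3: fusion
```

Even this crude pipeline shows the intended division of labour: the median filter removes impulsive noise, clustering quantizes away residual fluctuations, and the cross-sensor fusion suppresses what remains.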

Abstract:

We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with a simple C model and with measured carbon fluxes and states. Participants were provided with the model, with synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data generated from the model with added noise, and with observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and to generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover or to the temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to, and turnover of, the fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from the synthetic experiments within relatively narrow 90% confidence intervals, achieving a >80% success rate and mean NEE confidence intervals <110 gC m−2 year−1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data, and the estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available.
Confidence intervals on annual NEE increased by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints, using data on C pools (wood, soil and fine roots) would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
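
One of the algorithm families used by participants, the Metropolis algorithm, can be sketched on an invented toy C-flux model. The model, prior, noise level and parameter values below are assumptions for illustration, not the REFLEX setup:

```python
# Metropolis random walk recovering one parameter of a toy C-flux model
# from noisy synthetic "NEE" observations (mirrors REFLEX's synthetic case).
import numpy as np

rng = np.random.default_rng(42)

def model_nee(resp_rate, temp):
    # Toy model: respiration responds exponentially to temperature,
    # GPP is held fixed at 5, NEE = respiration - GPP.
    return resp_rate * np.exp(0.1 * temp) - 5.0

temp = np.linspace(0, 25, 100)
true_rate = 2.0
obs = model_nee(true_rate, temp) + rng.normal(0, 0.5, temp.size)  # add noise

def log_like(rate):
    resid = obs - model_nee(rate, temp)
    return -0.5 * np.sum(resid**2) / 0.5**2

chain = [1.0]
ll = log_like(chain[-1])
for _ in range(5000):
    prop = chain[-1] + rng.normal(0, 0.05)              # random-walk proposal
    ll_prop = log_like(prop) if prop > 0 else -np.inf   # positivity prior
    if np.log(rng.uniform()) < ll_prop - ll:            # Metropolis accept/reject
        chain.append(prop)
        ll = ll_prop
    else:
        chain.append(chain[-1])

est = float(np.mean(chain[1000:]))   # posterior mean after burn-in
```

As in REFLEX's synthetic experiments, a parameter that directly drives the flux is tightly constrained, and the posterior spread provides the confidence interval.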

Abstract:

The multi-relational data mining approach has emerged as an alternative for the analysis of structured data, such as relational databases. Unlike traditional algorithms, multi-relational proposals allow mining multiple tables directly, avoiding costly join operations. This paper presents a comparative study of the traditional Patricia Mine algorithm and its proposed multi-relational counterpart, MR-Radix, in order to evaluate the performance of the two approaches to mining association rules in relational databases. The study offers two original contributions: the proposal of the multi-relational algorithm MR-Radix, which is efficient for use in relational databases both in execution time and in memory usage, and an empirical demonstration of the multi-relational approach's performance advantage when mining over several tables, since it avoids costly join operations between multiple tables. © 2011 IEEE.
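
The join-avoidance idea can be illustrated with a hypothetical two-table example: item supports are counted by grouping the secondary table by its foreign key, so no joined rows are ever materialized. This shows only the concept, not the Patricia-trie-based MR-Radix structure:

```python
# Counting (country, item) supports across two related tables without a join.
from collections import defaultdict

customers = [(1, "BR"), (2, "BR"), (3, "US")]                   # (cust_id, country)
orders = [(1, "milk"), (1, "bread"), (2, "milk"), (3, "beer")]  # (cust_id, item)

# One pass over the secondary table, grouped by the foreign key.
items_by_cust = defaultdict(set)
for cust_id, item in orders:
    items_by_cust[cust_id].add(item)

# Support of each (country, item) pair, counted per customer -
# the same counts a join would give, without creating the joined rows.
support = defaultdict(int)
for cust_id, country in customers:
    for item in items_by_cust[cust_id]:
        support[(country, item)] += 1
```

With many-to-many relationships the joined table can blow up combinatorially, which is why direct multi-table counting pays off in both time and memory.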

Abstract:

Multi-relational data mining enables pattern mining from multiple tables. Existing multi-relational association rule mining algorithms are not able to process large volumes of data, because the amount of memory required exceeds the amount available. The proposed MR-Radix algorithm provides a framework that optimizes memory usage. It also uses partitioning to handle large volumes of data. The original contribution of this proposal is that it achieves superior performance compared to related algorithms and successfully completes the task of mining association rules in large databases, bypassing the problem of limited available memory. One of the tests showed that MR-Radix uses fourteen times less memory than GFP-growth. © 2011 IEEE.
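
The partitioning concept can be sketched as follows: transactions are processed one partition at a time, so only the current partition plus the running totals is ever resident in memory. Partition sizing and MR-Radix's actual data structures are simplified away here:

```python
# Partitioned support counting: memory is bounded by one partition at a time.
from collections import Counter
from itertools import islice

def iter_partitions(transactions, partition_size):
    it = iter(transactions)
    while chunk := list(islice(it, partition_size)):
        yield chunk

def partitioned_supports(transactions, partition_size=2):
    totals = Counter()
    for part in iter_partitions(transactions, partition_size):
        local = Counter()                 # only one partition resident at a time
        for t in part:
            local.update(set(t))          # count each item once per transaction
        totals.update(local)              # merge partial counts, then discard
    return totals

db = [["a", "b"], ["a"], ["b", "c"], ["a", "c"]]
sup = partitioned_supports(db)   # same counts as a single in-memory pass
```

The merge step is associative, so the final supports are identical no matter how the database is partitioned, which is what lets a partitioned miner trade memory for extra passes.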

Abstract:

This study examines what the contemporary Brazilian reader reads, with the purpose of explaining the reasons that lead this reader to make his or her choices. The central objective of the work is therefore to examine the profile of this Brazilian reader. The data used to establish the research corpus were collected from the best-seller lists published in two Brazilian newspapers. The first source was Leia, a monthly periodical that circulated nationally from April 1978 to September 1991. The second was the Jornal do Brasil, a Rio de Janeiro daily that published best-seller lists for Brazil from 1966 until December 2004, the end date of the research, in its reading supplement. Since the second newspaper suspended publication of the best-seller lists between February 1976 and April 1984, we merged the data from the two newspapers so as to cover the period from 1966 to 2004. The theoretical basis for examining the profile of the Brazilian reader was the semiotics of the Paris school. To address the question of reading, we examined the manifestations of enunciation in discourse, the projections of the enunciator and the enunciatee, and the treatment of the passions. For each text in the corpus, we observed how these enunciative categories are projected in each of the texts most read by Brazilian readers and, subsequently, how this reader manifests himself or herself as enunciator in the best-seller lists. To this end, we contrasted the ethos of the enunciator-reader of the lists with the pathos of the enunciatee of the reading discourses. Since the research corpus revealed a growing preference for self-help texts, the specific question was examined... (For the complete abstract, click the electronic access link below)

Abstract:

In multi-label classification, examples can be associated with multiple labels simultaneously. The task of learning from multi-label data can be addressed by methods that transform the multi-label classification problem into several single-label classification problems. The binary relevance approach is one of these methods: the multi-label learning task is decomposed into several independent binary classification problems, one for each label in the set of labels, and the final labels for each example are determined by aggregating the predictions of all binary classifiers. However, this approach fails to consider any dependency among the labels. Aiming to accurately predict label combinations, in this paper we propose a simple approach that enables the binary classifiers to discover existing label dependencies by themselves. An experimental study using decision trees, a kernel method, and Naive Bayes as base-learning techniques shows the potential of the proposed approach to improve multi-label classification performance.
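
One simple way to let binary classifiers discover label dependencies, sketched here with scikit-learn decision trees as base learners, is a two-stage scheme in which each second-stage classifier sees the other labels' first-stage predictions as extra features. This is an assumption about the general idea, not necessarily the paper's exact protocol, and the data are synthetic:

```python
# Stacked binary relevance: stage-2 classifiers see other labels' predictions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
# Two dependent labels: label 1 mostly follows label 0 (10% flips).
y0 = (X[:, 0] > 0).astype(int)
y1 = (y0 ^ (rng.uniform(size=200) < 0.1)).astype(int)
Y = np.column_stack([y0, y1])

# Stage 1: ordinary binary relevance, one independent classifier per label.
stage1 = [DecisionTreeClassifier(random_state=0).fit(X, Y[:, j]) for j in range(2)]
P = np.column_stack([clf.predict(X) for clf in stage1])

# Stage 2: re-train each label with the OTHER labels' predictions appended,
# letting the classifier exploit label dependencies on its own.
stage2 = []
for j in range(2):
    others = np.delete(P, j, axis=1)
    stage2.append(
        DecisionTreeClassifier(random_state=0).fit(np.hstack([X, others]), Y[:, j])
    )

pred = np.column_stack([
    stage2[j].predict(np.hstack([X, np.delete(P, j, axis=1)])) for j in range(2)
])
```

The augmented feature space lets each binary problem "see" the labels it correlates with, while the decomposition into independent binary tasks is preserved.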

Abstract:

In recent years, the use of reverse engineering systems has attracted considerable interest in a wide range of applications, and many research activities therefore focus on the accuracy and precision of the acquired data and on improvements to the post-processing phase. In this context, this PhD thesis defines two novel methods for data post-processing and for data fusion between physical and geometrical information. In particular, a technique has been defined for characterizing the error in the 3D point coordinates acquired by an optical triangulation laser scanner, with the aim of identifying adequate correction arrays to apply under different acquisition parameters and operative conditions. Systematic errors in the acquired data are thus compensated, increasing accuracy. Moreover, the definition of a 3D thermogram is examined: the object's geometrical information and its thermal properties, coming from a thermographic inspection, are combined in order to obtain a temperature value for each recognizable point. Data acquired by the optical triangulation laser scanner are also used to normalize temperature values and make the thermal data independent of the thermal camera's point of view.
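
The fusion step can be sketched as a nearest-neighbour assignment of thermal samples to scan points, a simplified stand-in for the thesis' calibrated geometry-to-thermogram mapping. The point clouds and temperature model below are synthetic:

```python
# Build a toy 3D thermogram: attach to each scan point the temperature of
# the nearest thermographic sample (nearest-neighbour data fusion).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
scan_points = rng.uniform(0, 1, size=(500, 3))    # laser-scanner point cloud
thermo_points = rng.uniform(0, 1, size=(50, 3))   # positions of thermal samples
thermo_temps = 20.0 + 5.0 * thermo_points[:, 2]   # synthetic: warmer towards the top

tree = cKDTree(thermo_points)
_, idx = tree.query(scan_points)                  # nearest thermal sample per point
thermogram3d = np.column_stack([scan_points, thermo_temps[idx]])  # (x, y, z, T)
```

A real pipeline would project thermal pixels through the camera model and correct for emissivity and viewpoint, but the core fusion output is the same: one temperature per recognizable 3D point.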

Abstract:

Nowadays communication is switching from a centralized scenario, where media like newspapers, radio and TV programmes produce information and people are just consumers, to a completely different decentralized scenario, where everyone is potentially an information producer through social networks, blogs and forums that allow real-time worldwide information exchange. These new instruments, as a result of their widespread diffusion, have started playing an important socio-economic role. They are the most used communication media and, as a consequence, they constitute the main source of information that enterprises, political parties and other organizations can rely on. Analyzing data stored in servers all over the world is feasible by means of Text Mining techniques like Sentiment Analysis, which aims to extract opinions from huge amounts of unstructured text. This could help determine, for instance, the degree of user satisfaction with products, services, politicians and so on. In this context, this dissertation presents new Document Sentiment Classification methods based on the mathematical theory of Markov Chains. All of these approaches build on a Markov Chain based model, which is language independent and whose key features are simplicity and generality, making it attractive compared with previous, more sophisticated techniques. Every technique discussed has been tested in both Single-Domain and Cross-Domain Sentiment Classification settings, comparing its performance with that of two previous works. The analysis shows that some of the examined algorithms produce results comparable with the best methods in the literature, with reference to both single-domain and cross-domain tasks, in 2-class (i.e. positive and negative) Document Sentiment Classification.
However, there is still room for improvement, and this work also indicates the path forward: a good novel feature-selection process would be enough to outperform the state of the art. Furthermore, since some of the proposed approaches show promising results in 2-class Single-Domain Sentiment Classification, future work will also validate these results in tasks with more than 2 classes.
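
A minimal version of such a Markov-chain classifier, assuming word-bigram transition counts per class with Laplace smoothing (the dissertation's models are more elaborate, and the training texts here are toy examples), could look like:

```python
# Toy Markov-chain document sentiment classifier: one transition model per
# class, documents scored by smoothed transition log-likelihood.
from collections import defaultdict
import math

def train(docs):
    counts = defaultdict(lambda: defaultdict(int))
    for doc in docs:
        words = doc.split()
        for a, b in zip(words, words[1:]):   # word-to-word transitions
            counts[a][b] += 1
    return counts

def log_score(counts, doc, vocab_size):
    score = 0.0
    words = doc.split()
    for a, b in zip(words, words[1:]):
        row = counts.get(a, {})
        total = sum(row.values())
        # Laplace-smoothed transition probability P(b | a)
        score += math.log((row.get(b, 0) + 1) / (total + vocab_size))
    return score

pos = ["good great movie", "great good acting", "good fun great"]
neg = ["bad awful movie", "awful bad acting", "bad dull awful"]
vocab = {w for d in pos + neg for w in d.split()}
pos_mc, neg_mc = train(pos), train(neg)

def classify(doc):
    p = log_score(pos_mc, doc, len(vocab))
    n = log_score(neg_mc, doc, len(vocab))
    return "positive" if p > n else "negative"
```

Because the model is just transition counts over whatever tokens it is given, it is language independent, which is the generality argument made in the dissertation.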

Abstract:

For the first time we present a multi-proxy data set for the Russian Altai, consisting of Siberian larch tree-ring width (TRW), latewood density (MXD), δ13C and δ18O in cellulose chronologies obtained for the period 1779–2007 and cell wall thickness (CWT) for 1900–2008. All of these parameters agree well with each other in their high-frequency variability, while the low-frequency climate information shows systematic differences. Correlation analysis with temperature and precipitation data from the closest weather station and with gridded data revealed that the annual TRW, MXD, CWT, and δ13C data contain a strong summer temperature signal, while δ18O in cellulose represents a mixed summer and winter temperature and precipitation signal. Temperature and precipitation reconstructions from the Belukha ice core and Teletskoe lake sediments were used to investigate the correspondence of the different independent proxies. Low-frequency patterns in the TRW and δ13C chronologies are consistent with temperature reconstructions from the nearby Belukha ice core and Teletskoe lake sediments, showing a pronounced warming trend in the last century; their combination could be used for regional temperature reconstruction. The long-term δ18O trend agrees with the precipitation reconstruction from the Teletskoe lake sediment, indicating more humid conditions during the twentieth century; these two proxies could therefore be combined for precipitation reconstruction.
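
The kind of correlation screening described above can be sketched on synthetic series; the signal strengths below are invented for illustration and are not the Altai data:

```python
# Screening which proxy chronologies carry a summer-temperature signal
# via Pearson correlation, on synthetic standardized series.
import numpy as np

rng = np.random.default_rng(7)
years = 100
summer_t = rng.standard_normal(years)                     # standardized summer temps
trw = 0.8 * summer_t + 0.3 * rng.standard_normal(years)   # proxy with strong T signal
d18o = 0.3 * summer_t + 0.9 * rng.standard_normal(years)  # proxy with mixed signal

r_trw = np.corrcoef(trw, summer_t)[0, 1]
r_d18o = np.corrcoef(d18o, summer_t)[0, 1]
# The TRW-like series correlates far more strongly with summer temperature,
# so it would be selected for the temperature reconstruction.
```

In practice the same screening is run against station and gridded climate data, and seasonal windows are varied to separate summer from winter signals.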

Abstract:

Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps. First, the discrete CT data have to be continuously distributed by an analytic function that respects the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high-order polynomial interpolations, which do not fulfill all the features mentioned above, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is determined, by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re-distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%–20%, while the parameter of the new algorithm can be adjusted to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re-sampling algorithms using high-order polynomial interpolation functions may produce significant artifacts in the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
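
The non-negativity property can be demonstrated with SciPy's shape-preserving PCHIP interpolator, a Hermite-type scheme, as a stand-in for the paper's parameterized method. Note that PCHIP does not provide the integral conservation the paper adds, and the CT-like profile below is synthetic:

```python
# Hermite-type (PCHIP) vs unconstrained cubic interpolation on a positively
# defined, spiky CT-like profile: the cubic overshoots below zero, PCHIP doesn't.
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

x = np.arange(8, dtype=float)
y = np.array([0.0, 0.0, 0.0, 5.0, 40.0, 5.0, 0.0, 0.0])   # positive data with a spike

fine = np.linspace(0, 7, 200)
hermite = PchipInterpolator(x, y)(fine)   # shape-preserving: stays non-negative
cubic = CubicSpline(x, y)(fine)           # unconstrained cubic rings and undershoots
```

The undershoot of the plain cubic next to the spike is exactly the kind of re-sampling artifact the abstract describes; a Hermite interpolant with a controllable derivative parameter suppresses it by construction.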

Abstract:

This paper considers a framework where data from correlated sources are transmitted with the help of network coding in ad hoc network topologies. The correlated data are encoded independently at the sensors, and network coding is employed at the intermediate nodes in order to improve data delivery performance. In such settings, we focus on the problem of reconstructing the sources at the decoder when perfect decoding is not possible due to losses or bandwidth variations. We show that source data similarity can be exploited at the decoder to permit decoding based on a novel and simple approximate decoding scheme. We analyze the influence of the network coding parameters, and in particular the size of the finite coding field, on the decoding performance. We further determine the optimal field size that maximizes the expected decoding performance as a trade-off between the information loss incurred by limiting the resolution of the source data and the error probability in the reconstructed data. Moreover, we show that the performance of approximate decoding improves as the accuracy of the source model increases, even with simple approximate decoding techniques. We provide illustrative examples showing how the proposed algorithm can be deployed in sensor networks and distributed imaging applications.
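
A toy end-to-end example of the field-size trade-off: two correlated readings are quantized to q levels (q prime), mixed by fixed coding coefficients over GF(q), and recovered by modular matrix inversion, with resolution limited to 1/q by the field size. The coefficients and readings are illustrative assumptions, and the lossless 2x2 case shown here omits the losses that motivate approximate decoding:

```python
# Quantize-to-field, network-code, and decode two correlated sources over GF(q).
import numpy as np

q = 7                                   # prime field size
x = np.array([0.31, 0.36])              # correlated source readings in [0, 1)
sym = (x * q).astype(int)               # quantize to field symbols (resolution 1/q)

A = np.array([[1, 2], [3, 5]])          # coding coefficients at the relay nodes
y = A.dot(sym) % q                      # coded packets seen by the decoder

# Decode: invert A mod q (its determinant must be non-zero mod q).
det = int(round(np.linalg.det(A))) % q
det_inv = pow(det, -1, q)               # modular inverse (Python 3.8+)
A_inv = (det_inv * np.array([[5, -2], [-3, 1]])) % q   # adjugate / det mod q
decoded = A_inv.dot(y) % q              # recovers the quantized symbols exactly
recon = decoded / q                     # reconstruction error bounded by 1/q
```

Increasing q shrinks the 1/q quantization loss but, in the lossy settings the paper studies, makes symbol errors costlier, which is the trade-off behind the optimal field size.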