884 results for HISTORICAL DATA-ANALYSIS


Relevance:

100.00%

Publisher:

Abstract:

This text is taken from the postgraduate thesis that one of the authors (A.B.) developed for the degree of Medical Physicist at the School of Medical Physics of the University of Florence. It explores the feasibility of quantitative Magnetic Resonance Spectroscopy as a tool for daily clinical routine use. The results and analysis come from two types of hyperspectral images: a first set acquired from a standard phantom (reference images), and a second set obtained from a group of patients who underwent MRI examinations at the Santa Maria Nuova Hospital. This interdisciplinary work combines the know-how of IFAC-CNR in data analysis and nanomedicine with the clinical expertise of Radiologists and Medical Physicists. The results reported here, which were the subject of the thesis, are original, unpublished, and represent independent work.

Relevance:

100.00%

Publisher:

Abstract:

For a long time, electronic data analysis has been associated with quantitative methods. However, Computer Assisted Qualitative Data Analysis Software (CAQDAS) is increasingly being developed. Although CAQDAS has been available for decades, very few qualitative health researchers report using it. This may be due to the difficulty of mastering the software and the misconceptions associated with using it. While the issue of mastering CAQDAS has received ample attention, little has been done to address those misconceptions. In this paper, the author reflects on his experience of working with one popular CAQDAS package (NVivo) in order to provide evidence-based insights into using the software. The key message is that, unlike statistical software, the main function of CAQDAS is not to analyse data but to aid the analysis process, of which the researcher must always remain in control. In other words, researchers must recognise that no software can analyse qualitative data by itself. CAQDAS packages are essentially data management tools that support the researcher during analysis.

Relevance:

100.00%

Publisher:

Abstract:

In this work, we further extend the recently developed adaptive data analysis method known as the Sparse Time-Frequency Representation (STFR). The method is based on the assumption that many physical signals inherently contain AM-FM representations. We propose a sparse optimization method to extract the AM-FM representations of such signals. We prove the convergence of the method for periodic signals under certain assumptions and provide practical algorithms, specifically for the non-periodic STFR, which extend the method to tackle problems that former STFR methods could not handle, including stability to noise and non-periodic data analysis. This is a significant improvement, since many adaptive and non-adaptive signal processing methods are not fully capable of handling non-periodic signals. Moreover, we propose a new STFR algorithm to study intrawave signals with strong frequency modulation and analyze its convergence for periodic signals. Such signals have previously remained a bottleneck for all signal processing methods. Furthermore, we propose a modified version of STFR that facilitates the extraction of intrawaves with overlapping frequency content. We show that the STFR methods can be applied to the realm of dynamical systems and cardiovascular signals. In particular, we present a simplified and modified version of the STFR algorithm that is potentially useful for the diagnosis of some cardiovascular diseases. We further explain some preliminary work on the nature of Intrinsic Mode Functions (IMFs) and how they can have different representations in different phase coordinates. This analysis shows that the uncertainty principle is fundamental to all oscillating signals.
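
For readers unfamiliar with AM-FM representations, the sketch below builds a synthetic signal of the form a(t)cos(θ(t)) and recovers its instantaneous amplitude and frequency with the standard Hilbert-transform (analytic signal) approach. This is a textbook stand-in shown for illustration only, not the authors' sparse optimization; all signal parameters are arbitrary.

```python
# Illustrative AM-FM analysis via the analytic signal (Hilbert transform).
# NOTE: this is a standard textbook method used as a stand-in; it is NOT
# the sparse optimization behind STFR. All parameters are arbitrary.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                            # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic AM-FM signal x(t) = a(t) * cos(theta(t))
a = 1.0 + 0.5 * np.cos(2 * np.pi * 1.0 * t)               # slow envelope
theta = 2 * np.pi * (50 * t + 5 * np.sin(2 * np.pi * 0.5 * t))
x = a * np.cos(theta)

analytic = hilbert(x)                  # x + i * H[x]
envelope = np.abs(analytic)            # estimate of a(t)
phase = np.unwrap(np.angle(analytic))  # estimate of theta(t)
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz

print(envelope.mean(), inst_freq.mean())       # ~1.0 and ~50 Hz
```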

Relevance:

100.00%

Publisher:

Abstract:

Paper presented at the 44th SEFI Conference, 12-15 September 2016, Tampere, Finland

Relevance:

100.00%

Publisher:

Abstract:

Analyzing large-scale gene expression data is a labor-intensive and time-consuming process. To make data analysis easier, we developed a set of pipelines for rapid processing and analysis of poplar gene expression data for knowledge discovery. Among the pipelines developed, the differentially expressed genes (DEGs) pipeline identifies biologically important genes that are differentially expressed at one or multiple time points or conditions. The pathway analysis pipeline identifies differentially expressed metabolic pathways. The protein domain enrichment pipeline identifies protein domains enriched in the DEGs. Finally, the Gene Ontology (GO) enrichment analysis pipeline identifies GO terms enriched in the DEGs. Our pipeline tools can analyze both microarray and high-throughput sequencing data, which are obtained by two different technologies: microarray technology measures gene expression levels via microarray chips, collections of microscopic DNA spots attached to a solid (glass) surface, whereas high-throughput sequencing, also called next-generation sequencing, measures gene expression levels by directly sequencing mRNAs and counting each mRNA's copies in cells or tissues. We also developed a web portal (http://sys.bio.mtu.edu/) to make all pipelines publicly available, so that users can analyze their own gene expression data. In addition to the analyses mentioned above, the portal can also perform GO hierarchy analysis, i.e. construct GO trees using a list of GO terms as input.
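
As a rough illustration of the core step of a DEG pipeline, the sketch below runs a per-gene two-sample t-test between two conditions and filters by a Bonferroni-corrected p-value and a fold-change cutoff. The expression matrix, column names, and thresholds are all hypothetical; the abstract does not specify which statistics the actual pipelines use.

```python
# Hypothetical per-gene differential-expression test between two
# conditions; a sketch of what a DEG pipeline typically automates.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic expression matrix: rows = genes, columns = replicates.
control = pd.DataFrame(rng.normal(10, 1, size=(100, 3)),
                       columns=["ctrl_1", "ctrl_2", "ctrl_3"])
treated = pd.DataFrame(rng.normal(25, 1, size=(100, 3)),
                       columns=["trt_1", "trt_2", "trt_3"])

t_stat, p_val = stats.ttest_ind(treated, control, axis=1)
log2_fc = np.log2(treated.mean(axis=1) / control.mean(axis=1))

# Keep genes passing a Bonferroni-corrected p-value and |log2 FC| > 1.
degs = pd.DataFrame({"log2_fc": log2_fc, "p": p_val})
degs = degs[(degs["p"] < 0.05 / len(degs)) & (degs["log2_fc"].abs() > 1)]
print(f"{len(degs)} DEGs found")
```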

Relevance:

100.00%

Publisher:

Abstract:

Thanks to the advanced technologies and social networks that allow data to be widely shared across the Internet, there is an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage multimedia data in various areas. To meet this demand, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and to incorporate temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) for producing the initial detection results. The TMCA algorithm is then proposed to efficiently incorporate temporal semantics for re-ranking and improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to further accommodate imbalanced datasets. In addition to the multimedia semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis. In this framework, an affinity propagation-based summarization method is therefore also proposed to transform unorganized data into a better structure with clean and well-organized information. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
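
The abstract does not detail the affinity propagation step, but the generic idea can be sketched with scikit-learn: cluster the items' feature vectors and keep each cluster's exemplar as a summary item. The features below are synthetic, and this is a plain illustration of affinity propagation rather than the dissertation's specific summarization method.

```python
# Generic affinity-propagation summarization sketch: exemplars chosen
# by the clustering serve as the "summary" of the collection.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(42)
# Hypothetical per-item feature vectors (e.g., video frame descriptors).
features = np.vstack([
    rng.normal(0, 0.5, size=(50, 8)),
    rng.normal(5, 0.5, size=(50, 8)),
    rng.normal(-5, 0.5, size=(50, 8)),
])

ap = AffinityPropagation(random_state=0).fit(features)
exemplars = ap.cluster_centers_indices_  # indices of representative items
print(f"{len(exemplars)} summary items:", exemplars)
```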

Relevance:

100.00%

Publisher:

Abstract:

Analysis of climate change impacts on streamflow by perturbing the climate inputs has concerned many authors in the past few years, but there are few analyses of the impacts on water quality. To examine the impact of changes in climate variables on water quality parameters, the water quality input variables have to be perturbed. The primary input variables that can be considered for such an analysis are streamflow and water temperature, which are affected by changes in precipitation and air temperature, respectively. Using hypothetical scenarios representing both greenhouse warming and streamflow changes, this article evaluates the sensitivity of the water quality parameters under conditions of altered river flow and river temperature. A historical data analysis of hydroclimatic variables is carried out, which includes flow duration exceedance percentages (e.g. Q90), single low-flow indices (e.g. 7Q10, 30Q10) and relationships between climatic variables and surface variables. For the study region of the Tunga-Bhadra river in India, low flows are found to be decreasing and water temperatures are found to be increasing. As a result, a reduction in dissolved oxygen (DO) levels has been observed in recent years. The water quality responses of six hypothetical climate change scenarios were simulated with the water quality model QUAL2K. A simple linear regression relation between air and water temperature is used to generate the scenarios for river water temperature. The results suggest that all the hypothetical climate change scenarios would cause impairment in water quality. A significant decrease in DO levels was found due to the impact of climate change on temperature and flows, even when the discharges were at safe permissible levels set by pollution control agencies (PCAs). The necessity of improving PCA standards and developing adaptation policies for the dischargers to account for climate change is examined through a fuzzy waste load allocation model developed earlier. Copyright (C) 2011 John Wiley & Sons, Ltd.
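
As a concrete example of the low-flow indices mentioned above, 7Q10 is the annual minimum 7-day average flow with a 10-year recurrence interval. The sketch below approximates it as the 10th percentile of annual 7-day minima over a synthetic daily flow record; the paper's exact estimation procedure (e.g. fitting a low-flow frequency distribution) may differ.

```python
# Rough 7Q10 estimate from a synthetic 30-year daily streamflow record.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("1980-01-01", "2009-12-31", freq="D")
flow = pd.Series(rng.gamma(shape=2.0, scale=50.0, size=len(days)),
                 index=days, name="flow_m3s")      # hypothetical flows

seven_day = flow.rolling(window=7).mean()          # 7-day mean flow
annual_min = seven_day.groupby(seven_day.index.year).min()

# 10-year recurrence ~ non-exceedance probability of 1/10.
q7_10 = np.percentile(annual_min.dropna(), 10)
print(f"7Q10 estimate: {q7_10:.1f} m^3/s")
```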

Relevance:

100.00%

Publisher:

Abstract:

Master's final project submitted for the degree of Master in Informatics and Computer Engineering

Relevance:

100.00%

Publisher:

Abstract:

The spreading pattern and mesoscale structure of the Mediterranean water outflow in the eastern North Atlantic are studied on the basis of historical hydrographic records. The effect of bottom topography on the Mediterranean water distribution is revealed. It is shown that the Mediterranean water outflow divides into two streams after leaving the Gulf of Cadiz: a northwestern and a southwestern one, of which the former is more intense and spreads in a more regular and continuous way. West of the Tejo (Tagus) Plateau it splits into three branches; the most intense of them remains continuous up to 14°W. The less intense southwestern stream passes south of the Gettysburg Bank and splits into two branches immediately after the Gulf of Cadiz. From 11°W, this stream has a lenticular, intermittent character. West of 14°-15°W, all Mediterranean water branches are represented mainly by isolated salty patches. A historical data analysis of the 32°-44°N, 8°-22°W area revealed 30 Mediterranean water lenses, 12 of which had not previously been mentioned in publications. A table of the main parameters of the Mediterranean water lenses is presented; it includes data from 108 observations from 1911 to 1993.

Relevance:

100.00%

Publisher:

Abstract:

With recent advances in remote sensing processing technology, it has become more feasible to begin analysis of the enormous historical archive of remotely sensed data. This historical data provides valuable information on a wide variety of topics and can influence the lives of millions of people if processed correctly and in a timely manner. One such field of benefit is landslide mapping and inventory, which provides a historical reference for those who live near high-risk areas so that future disasters may be avoided. In order to map landslides remotely, an optimal method must first be determined. Historically, mapping has been attempted using pixel-based methods such as unsupervised and supervised classification. These methods are limited in that they characterize an image only spectrally, based on single-pixel values, yielding results that are prone to false positives and often lack meaningful objects. Recently, several reliable methods of Object-Oriented Analysis (OOA) have been developed that utilize a full range of spectral, spatial, textural, and contextual parameters to delineate regions of interest. A comparison of these two approaches on a historical dataset of the landslide-affected city of San Juan La Laguna, Guatemala, has demonstrated the benefits of OOA methods over unsupervised classification: overall accuracies of 96.5% and 94.3% and F-scores of 84.3% and 77.9% were achieved for the OOA and unsupervised classification methods, respectively. The larger difference in F-score results from the low precision of unsupervised classification, caused by poor false-positive removal, the greatest shortcoming of that method.
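
To see why low precision drags the F-score down, recall that F1 is the harmonic mean of precision and recall. The precision/recall pairs in the sketch below are hypothetical, chosen only to land near the reported F-scores; the study's actual precision and recall values are not given in this abstract.

```python
# F1 as the harmonic mean of precision and recall.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Hypothetical pairs yielding F-scores close to the reported ones.
print(f1(0.83, 0.86))  # ~0.845: balanced precision and recall (OOA-like)
print(f1(0.68, 0.91))  # ~0.778: high recall cannot offset low precision
```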

Relevance:

100.00%

Publisher:

Abstract:

In recent years, external beam radiotherapy (EBRT) has been proposed as a treatment for the wet form of age-related macular degeneration (AMD), of which choroidal neovascularization (CNV) is the hallmark. While the majority of pilot (Phase I) studies have reported encouraging results, a few have found no benefit, i.e. EBRT was not found to result in either improvement or stabilization of the visual acuity of the treated eye. The natural history of visual loss in untreated CNV of AMD is highly variable. Loss of vision is influenced mainly by the presenting acuity and by the size and composition of the lesion, and to a lesser extent by a variety of other factors. Thus the variable outcomes reported by the small Phase I studies of EBRT published to date may simply reflect variation in baseline factors. We therefore obtained information on 409 patients treated with EBRT at eight independent centres, including details of visual acuity at baseline and at subsequent follow-up visits. Analysis of the data showed that 22.5% and 14.9% of EBRT-treated eyes developed moderate and severe loss of vision, respectively, during an average follow-up of 13 months. Initial visual acuity, which explained 20.5% of the variation in visual loss, was the most important baseline factor studied. Statistically significant differences in loss of vision were observed between centres after accounting for the effects of case-mix factors. Comparisons with historical data suggested that while moderate visual loss was similar to that of the natural history of the disease, the likelihood of suffering severe visual loss was halved. However, the benefit in terms of maintained or improved vision in the treated eye was modest.

Relevance:

100.00%

Publisher:

Abstract:

Social networks have gained remarkable attention in the last decade. Accessing social network sites such as Twitter, Facebook, LinkedIn and Google+ through the internet and web 2.0 technologies has become more affordable. People are becoming more interested in, and more reliant on, social networks for information, news and other users' opinions on diverse subjects. This heavy reliance on social network sites causes them to generate massive data characterised by three computational issues, namely size, noise and dynamism. These issues often make social network data too complex to analyse manually, making computational analysis essential. Data mining provides a wide range of techniques for detecting useful knowledge, such as trends, patterns and rules, from massive datasets [44]. Data mining techniques are used for information retrieval, statistical modelling and machine learning, and employ data pre-processing, data analysis, and data interpretation processes in the course of an analysis. This survey discusses the data mining techniques used to mine diverse aspects of social networks over the decades, from historical techniques to up-to-date models, including our novel technique named TRCM. All the techniques covered in this survey are listed in Table 1, along with the tools employed and the names of their authors.

Relevance:

100.00%

Publisher:

Abstract:

Background information: During the late 1970s and the early 1980s, West Germany witnessed a reversal of gender differences in educational attainment, as females began to outperform males. Purpose: The main objective was to analyse which processes were behind the reversal of gender differences in educational attainment after 1945. The theoretical reflections and empirical evidence presented for the US context by DiPrete and Buchmann (Gender-specific trends in the value of education and the emerging gender gap in college completion, Demography 43: 1–24, 2006) and Buchmann, DiPrete, and McDaniel (Gender inequalities in education, Annual Review of Sociology 34: 319–37, 2008) are considered and applied to the West German context. It is suggested that the reversal of gender differences is a consequence of changes in female educational decisions, which are mainly related to labour market opportunities, and not, as sometimes assumed, a consequence of a 'boys' crisis'. Sample: Several databases, such as the German General Social Survey, the German Socio-economic Panel and the German Life History Study, are employed for the longitudinal analysis of the educational and occupational careers of birth cohorts born in the twentieth century. Design and methods: Changing patterns of eligibility for university studies are analysed by successive birth cohort and by gender. Binary logistic regressions are employed for the statistical modelling of individuals' achievement, educational decisions and likelihood of social mobility, reporting average marginal effects (AME). Results: The empirical results suggest that women's better school achievement, being constant across cohorts, does not contribute to explaining the reversal of gender differences in higher education attainment; rather, the increased benefits of higher education explain the changing educational decisions of women regarding their transition to higher education. Conclusions: The outperformance of females compared with males in higher education might have been initiated by several social changes, including the expansion of public employment, the growing demand for highly qualified female workers in welfare and service areas, the increasing returns to women's education and training, and the improved opportunities for combining family and work outside the home. The historical data show that, in terms of (married) women's increased labour market opportunities and female life-cycle labour force participation, the rising rates of women's enrolment in higher education were, among other reasons, partly explained by their growing access to service class positions across birth cohorts and by the rise in their educational returns in terms of wages and long-term employment.
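
A minimal sketch of the reported modelling approach, binary logistic regression reporting average marginal effects (AME), is given below using statsmodels on synthetic data. The variable names and the data-generating process are hypothetical and do not reproduce the German survey data.

```python
# Binary logistic regression with average marginal effects (AME).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "cohort": rng.integers(1930, 1980, n),    # birth year (synthetic)
    "service_class_origin": rng.integers(0, 2, n),
})
# Synthetic transition-to-university outcome with a female-cohort trend.
logit_p = (-1.0 + 0.03 * (df["cohort"] - 1950) * df["female"]
           + 0.8 * df["service_class_origin"])
df["university"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["female", "cohort", "service_class_origin"]])
result = sm.Logit(df["university"], X).fit(disp=0)
print(result.get_margeff(at="overall").summary())  # reports AMEs
```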

Relevance:

100.00%

Publisher:

Abstract:

This research is aimed at addressing problems in the field of asset management relating to risk analysis and decision making based on data from a Supervisory Control and Data Acquisition (SCADA) system. Determining risk likelihood in risk analysis is difficult, especially when historical information is unreliable. This is related to a problem in SCADA data analysis caused by nested data. A further problem lies in providing beneficial information from a SCADA system to a managerial-level information system (e.g. Enterprise Resource Planning, ERP). A Hierarchical Model is developed to address these problems. The model is composed of three different analyses: Hierarchical Analysis, Failure Mode and Effect Analysis, and Interdependence Analysis. The significant contributions of the model include: (a) a new risk analysis model, namely an Interdependence Risk Analysis Model, which does not rely on the existence of historical information because it utilises Interdependence Relationships to determine risk likelihood; (b) an improvement in SCADA data analysis through addressing the nested data problem with the Hierarchical Analysis; and (c) a framework for providing beneficial information from SCADA systems to ERP systems. A case study of a Water Treatment Plant is used for model validation.
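
The nested-data problem can be pictured as alarm records nested within components, which are in turn nested within systems. The sketch below shows one simple way of respecting that hierarchy by rolling records up level by level with pandas; the hierarchy levels and field names are hypothetical, and this illustrates the nesting issue only, not the thesis's Hierarchical Analysis itself.

```python
# Hypothetical nested SCADA records: alarms -> components -> systems.
import pandas as pd

alarms = pd.DataFrame({
    "system":     ["intake", "intake", "filtration", "filtration", "filtration"],
    "component":  ["pump_1", "pump_2", "filter_1", "filter_1", "filter_2"],
    "downtime_h": [2.0, 0.5, 4.0, 1.5, 3.0],
})

# Summarise at each level of the asset hierarchy.
by_component = alarms.groupby(["system", "component"])["downtime_h"].agg(["count", "sum"])
by_system = alarms.groupby("system")["downtime_h"].agg(["count", "sum"])
print(by_component, by_system, sep="\n\n")
```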