933 results for Databases as Topic


Relevance:

60.00%

Publisher:

Abstract:

The affective impact of music arises from a variety of factors, including intensity, tempo, rhythm, and tonal relationships. The emotional coloring evoked by intensity, tempo, and rhythm appears to arise from association with the characteristics of human behavior in the corresponding condition; however, how and why particular tonal relationships in music convey distinct emotional effects is not clear. The hypothesis examined here is that major and minor tone collections elicit different affective reactions because their spectra are similar to the spectra of voiced speech uttered in different emotional states. To evaluate this possibility, the spectra of the intervals that distinguish major and minor music were compared to the spectra of voiced segments in excited and subdued speech, using fundamental frequency and frequency ratios as measures. Consistent with the hypothesis, the spectra of major intervals are more similar to spectra found in excited speech, whereas the spectra of particular minor intervals are more similar to the spectra of subdued speech. These results suggest that the characteristic affective impact of major and minor tone collections arises from associations routinely made between particular musical intervals and voiced speech.
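The comparison above rests on fundamental frequencies and frequency ratios. As a point of reference, the ratios of the thirds that distinguish major from minor tone collections can be computed directly; the just-intonation values (5:4 and 6:5) and the 12-tone equal-temperament formula below are standard music-theory facts, not figures taken from the study itself.

```python
# Frequency ratios of the major and minor third, the intervals that
# distinguish the two tone collections. Standard textbook values, not
# data from the study.

def equal_tempered_ratio(semitones: int) -> float:
    """Frequency ratio of an interval of `semitones` in 12-tone equal temperament."""
    return 2 ** (semitones / 12)

just_major_third = 5 / 4   # 1.25
just_minor_third = 6 / 5   # 1.20

tet_major_third = equal_tempered_ratio(4)   # ~1.2599
tet_minor_third = equal_tempered_ratio(3)   # ~1.1892

print(f"major third: just {just_major_third:.4f}, 12-TET {tet_major_third:.4f}")
print(f"minor third: just {just_minor_third:.4f}, 12-TET {tet_minor_third:.4f}")
```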

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: Malignant glioma is a rare cancer with poor survival. The influence of diet and antioxidant intake on glioma survival is not well understood. The current study examines the association between antioxidant intake and survival after glioma diagnosis. METHODS: Adult patients diagnosed with malignant glioma during 1991-1994 and 1997-2001 were enrolled in a population-based study. Diagnosis was confirmed by review of pathology specimens. A modified food-frequency questionnaire interview was completed by each glioma patient or a designated proxy. Intake of each food item was converted to grams consumed/day. From this nutrient database, 16 antioxidants, calcium, a total antioxidant index and 3 macronutrients were available for survival analysis. Cox regression estimated mortality hazard ratios associated with each nutrient and the antioxidant index adjusting for potential confounders. Nutrient values were categorized into tertiles. Models were stratified by histology (Grades II, III, and IV) and conducted for all (including proxy) subjects and for a subset of self-reported subjects. RESULTS: Geometric mean values for 11 fat-soluble and 6 water-soluble individual antioxidants, antioxidant index and 3 macronutrients were virtually the same when comparing all cases (n=748) to self-reported cases only (n=450). For patients diagnosed with Grade II and Grade III histology, moderate (915.8-2118.3 mcg) intake of fat-soluble lycopene was associated with poorer survival when compared to low intake (0.0-914.8 mcg), for self-reported cases only. High intake of vitamin E and moderate/high intake of secoisolariciresinol among Grade III patients indicated greater survival for all cases. In Grade IV patients, moderate/high intake of cryptoxanthin and high intake of secoisolariciresinol were associated with poorer survival among all cases. 
Among Grade II patients, moderate intake of water-soluble folate was associated with greater survival for all cases; high intake of vitamin C and genistein and the highest level of the antioxidant index were associated with poorer survival for all cases. CONCLUSIONS: The associations observed in our study suggest that the influence of some antioxidants on survival following a diagnosis of malignant glioma is inconsistent and varies by histology group. Further research in a large sample of glioma patients is needed to confirm or refute our results.

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: In recent years large bibliographic databases have made much of the published literature of biology available for searches. However, the capabilities of the search engines integrated into these databases for text-based bibliographic searches are limited. To enable searches that deliver the results expected by comparative anatomists, an underlying logical structure known as an ontology is required. DEVELOPMENT AND TESTING OF THE ONTOLOGY: Here we present the Mammalian Feeding Muscle Ontology (MFMO), a multi-species ontology focused on anatomical structures that participate in feeding and other oral/pharyngeal behaviors. A unique feature of the MFMO is that a simple, computable, definition of each muscle, which includes its attachments and innervation, is true across mammals. This construction mirrors the logical foundation of comparative anatomy and permits searches using language familiar to biologists. Further, it provides a template for muscles that will be useful in extending any anatomy ontology. The MFMO is developed to support the Feeding Experiments End-User Database Project (FEED, https://feedexp.org/), a publicly-available, online repository for physiological data collected from in vivo studies of feeding (e.g., mastication, biting, swallowing) in mammals. Currently the MFMO is integrated into FEED and also into two literature-specific implementations of Textpresso, a text-mining system that facilitates powerful searches of a corpus of scientific publications. We evaluate the MFMO by asking questions that test the ability of the ontology to return appropriate answers (competency questions). We compare the results of queries of the MFMO to results from similar searches in PubMed and Google Scholar. RESULTS AND SIGNIFICANCE: Our tests demonstrate that the MFMO is competent to answer queries formed in the common language of comparative anatomy, but PubMed and Google Scholar are not. 
Overall, our results show that by incorporating anatomical ontologies into searches, an expanded and anatomically comprehensive set of results can be obtained. The broader scientific and publishing communities should consider taking up the challenge of semantically enabled search capabilities.

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: Renal failure after thoracoabdominal aortic repair is a significant clinical problem. Distal aortic perfusion for organ and spinal cord protection requires cannulation of the left femoral artery. In 2006, we reported the finding that direct cannulation led to leg ischemia in some patients and was associated with increased renal failure. After this finding, we modified our perfusion technique to eliminate leg ischemia from cannulation. In this article, we present the effects of this change on postoperative renal function. METHODS: Between February 1991 and July 2008, we repaired 1464 thoracoabdominal aortic aneurysms. Distal aortic perfusion was used in 1088, and these were studied. Median patient age was 68 years, and 378 (35%) were women. In September 2006, we began to adopt a sidearm femoral cannulation technique that provides distal aortic perfusion while maintaining downstream flow to the leg. This was used in 167 patients (15%). We measured the joint effects of preoperative glomerular filtration rate (GFR) and cannulation technique on the highest postoperative creatinine level, postoperative renal failure, and death. Analysis was by multiple linear or logistic regression with interaction. RESULTS: The preoperative GFR was the strongest predictor of postoperative renal dysfunction and death. No significant main effects of sidearm cannulation were noted. For peak creatinine level and postoperative renal failure, however, strong interactions between preoperative GFR and sidearm cannulation were present, resulting in reductions in postoperative renal complications of 15% to 20% when GFR was <60 mL/min/1.73 m². For normal GFR, the effect was negated, or even reversed at very high levels of GFR. Mortality, although not significantly affected by sidearm cannulation, showed a trend similar to the renal outcomes.
CONCLUSION: Use of sidearm cannulation is associated with a clinically important and highly statistically significant reduction in postoperative renal complications in patients with a low GFR. Reduced renal effect of skeletal muscle ischemia is the proposed mechanism. Effects among patients with good preoperative renal function are less clear. A randomized trial is needed.

Relevance:

60.00%

Publisher:

Abstract:

Introduction: Among the inflammatory mediators involved in the pathogenesis of obesity, the cell adhesion molecules P-selectin, E-selectin, VCAM-1, and ICAM-1 and the chemokine MCP-1 stand out. They play a crucial role in the adherence of cells to endothelial surfaces and in the integrity of the vascular wall, and can be modulated by body composition and dietary pattern. Objectives: To describe and discuss the relation of these cell adhesion molecules and chemokines to anthropometric, body composition, dietary and biochemical markers. Methods: Papers were located in scientific databases through topic searches, with no restriction on year of publication. Results: All molecules were positively associated with anthropometric markers, although controversial results were found for ICAM-1 and VCAM-1. Visceral fat, more than obesity itself, is strongly correlated with E-selectin and MCP-1 levels. Weight loss reduces the levels of these molecules, except VCAM-1. The distribution of macronutrients, excessive consumption of saturated and trans fat, and a Western dietary pattern are associated with increased levels. The opposite was observed with ω-3 fatty acid supplementation, a healthy dietary pattern, a high-calcium diet and high dairy intake. Regarding the biochemical parameters, these molecules show an inverse relation to HDL-C and a positive relation to total cholesterol, triglycerides, blood glucose, fasting insulin and insulin resistance. Conclusion: Normal anthropometric indicators, body composition, biochemical parameters and eating pattern positively modulate the subclinical inflammation that results from obesity by reducing cell adhesion molecules and chemokines.

Relevance:

30.00%

Publisher:

Abstract:

A common challenge that users of academic databases face is making sense of their query outputs for knowledge discovery. This is exacerbated by the size and growth of modern databases. PubMed, a central index of biomedical literature, contains over 25 million citations, and can output search results containing hundreds of thousands of citations. Under these conditions, efficient knowledge discovery requires a different data structure than a chronological list of articles. It requires a method of conveying what the important ideas are, where they are located, and how they are connected; a method of allowing users to see the underlying topical structure of their search. This paper presents VizMaps, a PubMed search interface that addresses some of these problems. Given search terms, our main backend pipeline extracts relevant words from the title and abstract, and clusters them into discovered topics using Bayesian topic models, in particular Latent Dirichlet Allocation (LDA). It then outputs a visual, navigable map of the query results.
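The backend pipeline described above, tokenization followed by LDA topic discovery, can be sketched with a tiny collapsed Gibbs sampler. This is a minimal illustration of LDA itself, not VizMaps code; the toy documents are hypothetical, and production systems rely on optimized topic-modeling libraries.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed-Gibbs sampler for LDA over tokenized documents.

    Returns per-document topic counts and per-topic word counts, from
    which topic mixtures and top words can be read off.
    """
    rng = random.Random(seed)
    vocab_size = len({w for doc in docs for w in doc})
    ndk = [[0] * n_topics for _ in docs]               # doc -> topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic -> word counts
    nk = [0] * n_topics                                # tokens per topic
    z = []                                             # topic assignment per token
    for d, doc in enumerate(docs):                     # random initialisation
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)
            zd.append(k)
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(iters):                             # Gibbs sweeps
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                            # remove current assignment
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta)
                           / (nk[t] + vocab_size * beta) for t in range(n_topics)]
                r = rng.random() * sum(weights)        # sample a new topic
                for k, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk, nkw

# Toy tokenized titles/abstracts (hypothetical data):
docs = [["glioma", "survival", "antioxidant"],
        ["ontology", "anatomy", "search"],
        ["glioma", "diet", "survival"]]
ndk, nkw = lda_gibbs(docs, n_topics=2)
for k, counts in enumerate(nkw):
    top = sorted(counts, key=counts.get, reverse=True)[:3]
    print(f"topic {k}: {top}")
```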

Relevance:

30.00%

Publisher:

Abstract:

Objective: Developing the scientific underpinnings of social welfare requires effective and efficient methods of retrieving relevant items from the increasing volume of research. Method: We compared seven databases by running the nearest equivalent search on each. The search topic was chosen for relevance to social work practice with older people. Results: Highest sensitivity was achieved by Medline (52%), Social Sciences Citation Index (46%) and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (30%). Highest precision was achieved by AgeInfo (76%), PsycInfo (51%) and Social Services Abstracts (41%). Each database retrieved unique relevant articles. Conclusions: Comprehensive searching requires the development of information management skills. The social work profession would benefit from having a dedicated international database with the capability and facilities of major databases such as Medline, CINAHL, and PsycInfo.
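The two measures reported above reduce to simple ratios over counts of relevant and retrieved articles. The sketch below uses hypothetical counts, not figures from the study.

```python
def sensitivity(relevant_retrieved: int, relevant_total: int) -> float:
    """Recall: the share of all known relevant articles that the database returned."""
    return relevant_retrieved / relevant_total

def precision(relevant_retrieved: int, retrieved_total: int) -> float:
    """The share of returned articles that were actually relevant."""
    return relevant_retrieved / retrieved_total

# Hypothetical counts for one database, not data from the study:
# 26 of 50 known relevant articles retrieved, in an output of 100 hits.
print(f"sensitivity = {sensitivity(26, 50):.0%}")  # 52%
print(f"precision   = {precision(26, 100):.0%}")   # 26%
```

Note that the two measures trade off: a broad search raises sensitivity at the cost of precision, which is why each database in the study retrieved unique relevant articles.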

Relevance:

30.00%

Publisher:

Abstract:

Online information seeking has become normative practice among both academics and the general population. This study appraised the performance of eight databases in retrieving research pertaining to the influence of social networking sites on the mental health of young people. A total of 43 empirical studies on young people's use of social networking sites and the mental health implications were retrieved. Scopus and SSCI had the highest sensitivity, with PsycINFO having the highest precision. Effective searching requires large generic databases, supplemented by subject-specific catalogues. The methodology developed here may provide inexperienced searchers, such as undergraduate students, with a framework to define a realistic scale of searching to undertake for a particular literature review or similar project.

Relevance:

30.00%

Publisher:

Abstract:

Expert curation and complete collection of mutations in genes that affect human health are essential for proper genetic healthcare and research. Expert curation is provided by the curators of gene-specific mutation databases or locus-specific databases (LSDBs). While there are over 700 such databases, they vary in their content, completeness, time available for curation, and the expertise of the curator. Curation and LSDBs have been discussed and written about, and protocols have been provided, for over 10 years, yet there have been no formal recommendations for the ideal form of these entities. This work initiates a discussion on this topic to assist future efforts in human genetics. Further discussion is welcome.

Relevance:

30.00%

Publisher:

Abstract:

The study of variable stars is an important topic of modern astrophysics. Since the advent of powerful telescopes and CCDs with high resolving power, variable star data have been accumulating on the order of petabytes. This huge amount of data requires automated methods as well as human experts. This thesis is devoted to the analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on earth, stars that show a change in apparent brightness over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, like eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data, which contain time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
One way to identify the type of a variable star and to classify it is by having an expert visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, period is the most important, since a wrong period can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data, to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic-ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of the methods can be automated, none of the methods stated above can fully recover the true periods. Wrong detection of the period can be due to several reasons, such as power leakage to other frequencies, which arises from the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases when subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of the huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
It will be beneficial for the variable star astronomical community if basic parameters, such as period, amplitude and phase, are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
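Of the period search methods named above, Phase Dispersion Minimisation lends itself to a compact illustration: fold the light curve on each trial period, bin the phases, and prefer the period that minimises the within-bin variance relative to the overall variance. The sketch below is a simplified, pure-Python rendering of Stellingwerf's idea on synthetic data, not the thesis implementation; real analyses add significance tests and much finer period grids.

```python
import math
import random
from statistics import pvariance

def pdm_theta(times, mags, period, n_bins=10):
    """Stellingwerf-style statistic: pooled within-bin variance of the
    phased light curve divided by the overall variance (smaller is better)."""
    phases = [(t / period) % 1.0 for t in times]
    bins = [[] for _ in range(n_bins)]
    for ph, m in zip(phases, mags):
        bins[min(int(ph * n_bins), n_bins - 1)].append(m)
    filled = [b for b in bins if len(b) > 1]
    num = sum(len(b) * pvariance(b) for b in filled)   # pooled within-bin variance
    den = sum(len(b) for b in filled)
    return (num / den) / pvariance(mags)

def best_period(times, mags, trial_periods, n_bins=10):
    """Trial period with the lowest phase dispersion."""
    return min(trial_periods, key=lambda p: pdm_theta(times, mags, p, n_bins))

# Unevenly sampled, noiseless synthetic sinusoid with a true period of 2.5 days:
rng = random.Random(1)
times = sorted(rng.uniform(0.0, 30.0) for _ in range(200))
mags = [10.0 + 0.5 * math.sin(2 * math.pi * t / 2.5) for t in times]
trials = [1.0 + 0.01 * i for i in range(400)]   # 1.00 .. 4.99 day grid
print(best_period(times, mags, trials))          # recovers a period near 2.5
```

Because the statistic compares a folded curve to its own scatter, it makes no assumption about the shape of the variation, which is why PDM is classed with the non-parametric methods above.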

Relevance:

30.00%

Publisher:

Abstract:

Geographic Information Systems are developed to handle enormous volumes of data and are equipped with numerous functionalities intended to capture, store, edit, organise, process and analyse or represent geographically referenced information. On the other hand, industrial simulators for driver training are real-time applications that require a virtual environment, either geospecific, geogeneric or a combination of the two, over which the simulation programs run. Ultimately, this environment constitutes a geographic location with its specific characteristics of geometry, appearance, functionality, topography, etc. The set of elements that enables the virtual simulation environment to be created, and in which the simulator user can move, is usually called the Visual Database (VDB). The work presented here addresses a topic of major interest in the field of industrial training simulators: the problem of analysing, structuring and describing the virtual environments to be used in large driving simulators. This paper sets out a methodology that uses the capabilities and benefits of Geographic Information Systems for organising, optimising and managing the Visual Database of the simulator, and for generally enhancing the quality and performance of the simulator.

Relevance:

30.00%

Publisher:

Abstract:

This research was conducted at the Space Research and Technology Centre of the European Space Agency at Noordwijk in the Netherlands. ESA is an international organisation that brings together a range of scientists, engineers and managers from 14 European member states. The motivation for the work was to enable decision-makers, in a culturally and technologically diverse organisation, to share information for the purpose of making decisions that are well informed about the risk-related aspects of the situations they seek to address. The research examined the use of decision support system (DSS) technology to facilitate decision-making of this type. This involved identifying the technology available and its application to risk management. Decision-making is a complex activity that does not lend itself to exact measurement or precise understanding at a detailed level. In view of this, a prototype DSS was developed through which to understand the practical issues to be accommodated and to evaluate alternative approaches to supporting decision-making of this type. The problem of measuring the effect upon the quality of decisions has been approached through expert evaluation of the software developed. The practical orientation of this work was informed by a review of the relevant literature in decision-making, risk management, decision support and information technology. Communication and information technology unite the major themes of this work. This allows correlation of the interests of the research with European public policy. The principles of communication were also considered in the topic of information visualisation: this emerging technology exploits flexible modes of human-computer interaction (HCI) to improve the cognition of complex data. Risk management is itself an area characterised by complexity, and risk visualisation is advocated for application in this field of endeavour.
The thesis provides recommendations for future work in the fields of decision-making, DSS technology and risk management.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present an innovative topic segmentation system based on a new informative similarity measure that takes word co-occurrence into account in order to avoid relying on existing linguistic resources, such as electronic dictionaries or lexico-semantic databases like thesauri or ontologies. Topic segmentation is the task of breaking documents into topically coherent multi-paragraph subparts. It has been used extensively in information retrieval and text summarization. In particular, our architecture proposes a language-independent topic segmentation system that addresses three main problems evidenced by previous research: systems based solely on lexical repetition, which show reliability problems; systems based on lexical cohesion using existing linguistic resources, which are usually available only for dominant languages and consequently do not apply to less favored languages; and systems that require previously harvested training data. For that purpose, we use only statistics on words and sequences of words computed over a set of texts. This provides a flexible solution that may narrow the gap between dominant languages and less favored languages, thus allowing equivalent access to information.
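The family of approaches this work belongs to can be illustrated with a TextTiling-style sketch: represent the blocks of sentences on each side of a gap as word-count vectors, score the gap by cosine similarity, and propose a boundary where similarity drops. This is a deliberately simplified stand-in for the paper's informative similarity measure; the stop-word list, threshold and example sentences are all illustrative assumptions.

```python
import math
from collections import Counter

STOP = {"the", "a", "of", "is", "and"}   # tiny illustrative stop list

def vectorize(sentence):
    """Word-count vector of a sentence, minus stop words."""
    return Counter(w for w in sentence.lower().split() if w not in STOP)

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def boundaries(sentences, window=2, threshold=0.2):
    """Gap indices where the `window` sentences before and after the gap
    are lexically dissimilar (candidate topic breaks)."""
    toks = [vectorize(s) for s in sentences]
    cuts = []
    for i in range(window, len(toks) - window + 1):
        left = sum(toks[i - window:i], Counter())
        right = sum(toks[i:i + window], Counter())
        if cosine(left, right) < threshold:
            cuts.append(i)
    return cuts

doc = ["the star brightness varies with time",
       "the period of the star is measured",
       "databases store the survey records",
       "queries search the databases quickly"]
print(boundaries(doc))   # -> [2]: topic break before the third sentence
```

Relying only on counts over the text itself, as here, is what makes such a system language-independent: no dictionary or thesaurus is consulted.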