912 results for Database Query
Abstract:
Occurrence of amino acid sequences of angiotensin I-converting enzyme inhibitory peptides in the structure of cereal storage proteins
Abstract:
We use interplanetary transport simulations to compute a database of electron Green's functions, i.e., differential intensities resulting at the spacecraft position from an impulsive injection of energetic (>20 keV) electrons close to the Sun, for a large number of values of two standard interplanetary transport parameters: the scattering mean free path and the solar wind speed. The nominal energy channels of the ACE, STEREO, and Wind spacecraft have been used in the interplanetary transport simulations to conceive a unique tool for the study of near-relativistic electron events observed at 1 AU. In this paper, we quantify the characteristic times of the Green's functions (onset and peak time, rise and decay phase duration) as a function of the interplanetary transport conditions. We use the database to calculate the FWHM of the pitch-angle distributions at different times of the event and under different scattering conditions. This allows us to provide a first quantitative result that can be compared with observations, and to assess the validity of the frequently used term beam-like pitch-angle distribution.
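As an illustration of the FWHM measure applied to the pitch-angle distributions (the Gaussian beam profile and the grid below are invented for the sketch, not taken from the simulation database), the width of a sampled distribution can be measured directly:

```python
import math

def fwhm(mu, intensity):
    """Full width at half maximum of a sampled pitch-angle
    distribution, with linear interpolation at the half-maximum
    crossings. `mu` is the pitch-angle cosine grid (increasing)."""
    peak = max(intensity)
    half = peak / 2.0
    above = [i for i, v in enumerate(intensity) if v >= half]
    lo, hi = above[0], above[-1]

    def cross(i, j):
        # Interpolate the mu value where the intensity equals `half`.
        f = (half - intensity[i]) / (intensity[j] - intensity[i])
        return mu[i] + f * (mu[j] - mu[i])

    left = mu[0] if lo == 0 else cross(lo - 1, lo)
    right = mu[-1] if hi == len(mu) - 1 else cross(hi + 1, hi)
    return right - left

# Beam-like distribution peaked at mu = 1 (field-aligned, outward);
# the peak sits at the grid edge, so only the inner half-width counts.
mu = [i / 100.0 for i in range(-100, 101)]
beam = [math.exp(-((m - 1.0) ** 2) / (2 * 0.2 ** 2)) for m in mu]
print(fwhm(mu, beam))
```

A narrow FWHM by this measure is what the term "beam-like" quantifies; broadening it over time mimics the effect of stronger pitch-angle scattering.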
Abstract:
Empower Oy is a company providing services for the energy sector. Its energy management system is used to manage and maintain energy data and to present the data to end users. The service's views and reports are delivered through a web-based user interface. The company launched a major project to replace its old energy management system: the old system had been in use since 1995, and the EMS project was started in 2001. This Master's thesis was carried out as part of the EMS project. Its goals were to examine how well the database solution used by the core system works, how suitable it is for its task, and to study different database models from a theoretical perspective. In addition, the work included implementing separate query and update components and their interfaces, which can be used to retrieve and modify data in the object-relational database underlying the core system. The core system's database, called the DOR (Domain Object Repository), is an object-oriented data store from which data is retrieved by specifying the type of the object to be fetched and the types linked to it; the properties to be included in the result are specified separately for each type. When querying and modifying object-based DOR data, the data models used by the system must be followed. The query and update components were implemented with Microsoft's .NET technology. The theoretical study of database models helped in understanding the database solution underlying the system. The work showed that the object-relational database used by the core system is well suited to its purpose. The query and update components were implemented successfully and serve as an easy-to-use interface to the energy management system's database.
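The retrieval style described for the DOR (specify an object type, the linked types, and the wanted properties per type) can be sketched as follows. The in-memory store, field names and helper are invented for illustration, since the actual DOR interface is proprietary:

```python
# In-memory stand-in for an object store: each object has a type,
# properties, and links to other objects.
objects = [
    {"id": 1, "type": "Meter", "props": {"code": "M-17"}, "links": [2]},
    {"id": 2, "type": "Site", "props": {"name": "Plant A"}, "links": []},
]
by_id = {o["id"]: o for o in objects}

def query(root_type, linked_types, wanted):
    """Fetch objects of `root_type` together with linked objects of
    the given types; `wanted` lists, per type, the properties to
    include in the result (mirroring the DOR query style)."""
    result = []
    for obj in objects:
        if obj["type"] != root_type:
            continue
        row = {p: obj["props"][p] for p in wanted[root_type]}
        for lid in obj["links"]:
            linked = by_id[lid]
            if linked["type"] in linked_types:
                for p in wanted[linked["type"]]:
                    row[f'{linked["type"]}.{p}'] = linked["props"][p]
        result.append(row)
    return result

print(query("Meter", ["Site"], {"Meter": ["code"], "Site": ["name"]}))
# [{'code': 'M-17', 'Site.name': 'Plant A'}]
```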
Abstract:
This thesis deals with managing and developing the information content of a multimedia databank. In the "Multimedia databank on the Web" project, a multimedia databank means an interactive, content-rich combination of moving and still images (video, animation, photographs, 3D, graphics), sounds (music and other audio) and databases. The different content areas and the interactivity support a whole that has its own communicative purpose. This overall product is distributed to end users via the Web, digital television and mobile devices. The rapid development of multimedia and mobile communication technologies makes it possible to develop new services, and demand for easy-to-use multimedia and mobile services designed for different terminal devices and varying environments is growing steadily. The multimedia databank project shows how multimedia services can be implemented in an integrated environment. In this work, an integrated environment means using the Internet, mobile services, WAP, handheld computers, digital television and new multimedia phones to deliver the services offered by the multimedia databank. The project is divided into individual chapters whose purpose is to examine the details of the multimedia databank in content production from a technological point of view. In the implementation of the multimedia database, the content of the service is modelled in the database in XHTML format inside media objects, and the databank's metadata is stored in a multimedia relational database from which information can be retrieved through queries from any terminal device. This work focuses on the tasks and structure of the multimedia database management system: how multimedia data is stored in the database and how the metadata in the database is retrieved using the search methods developed for it.
Abstract:
The Nokia Push To Talk system offers a new communication method alongside the ordinary phone call. One of the most important features of the new system is the speed of call setup. In addition, the system must follow the general principles of telecommunication systems and be as stable and scalable as possible, so that it is maximally fault-tolerant and extensible. The main goal of this Master's thesis is to present the design and testing of C++ database libraries. First, the problems of database systems are examined, starting from the choice of a database system and paying particular attention to speed criteria. Then two technical implementations of two C++ database libraries are presented, and some alternative implementation approaches are discussed.
Abstract:
This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. The framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, where the results of the eight participating teams were compared. We describe the available database, comprising multi-center, multi-vendor and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures are proposed for the quantitative analysis. The results of the evaluation indicate that, with semi-automatic methods, the vessel lumen and media can be segmented with an accuracy comparable to manual annotation, and that encouraging results can also be obtained with fully-automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved.
Abstract:
Database marketing can be merely a tool for carrying out marketing activities, but it can also be seen as an essential part of customer relationship management. From the customer relationship management perspective, database marketing aims at customer satisfaction and loyalty, as well as at productive and profitable customer relationships, which can be achieved through effective information management. This enables tailored marketing actions and improves targeting and segmentation, including segmentation based on customer profitability. A normative case study, conducted in the Netherlands in a European value-added reselling channel for information technology, shows that database marketing, especially from the customer relationship management perspective, would be a suitable means of increasing customer satisfaction and profitability. It would also make internal information flows and marketing activities more effective, for example marketing communications, campaign management and sales processes in business-to-business commerce.
Abstract:
Polyphenols are a major class of bioactive phytochemicals whose consumption may play a role in the prevention of a number of chronic diseases such as cardiovascular diseases, type II diabetes and cancers. Phenol-Explorer, launched in 2009, is the only freely available web-based database on the content of polyphenols in food and their in vivo metabolism and pharmacokinetics. Here we report the third release of the database (Phenol-Explorer 3.0), which adds data on the effects of food processing on polyphenol contents in foods. Data on >100 foods, covering 161 polyphenols or groups of polyphenols before and after processing, were collected from 129 peer-reviewed publications and entered into new tables linked to the existing relational design. The effect of processing on polyphenol content is expressed in the form of retention factor coefficients, or the proportion of a given polyphenol retained after processing, adjusted for change in water content. The result is the first database on the effects of food processing on polyphenol content and, following the model initially defined for Phenol-Explorer, all data may be traced back to original sources. The new update will allow polyphenol scientists to more accurately estimate polyphenol exposure from dietary surveys. Database URL: http://www.phenol-explorer.eu
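A retention factor of the kind described (the proportion of a polyphenol retained after processing, adjusted for the change in water content) can be sketched as below. The function and its field names are illustrative assumptions, not Phenol-Explorer's actual schema; the adjustment here simply compares contents on a dry-weight basis:

```python
def retention_factor(content_before, content_after, water_before, water_after):
    """Proportion of a polyphenol retained after processing,
    adjusted for the change in water content.

    Contents are per 100 g fresh weight; water fractions are 0-1.
    Converting both contents to a dry-weight basis removes the
    effect of water loss or uptake during processing.
    """
    dry_before = content_before / (1.0 - water_before)
    dry_after = content_after / (1.0 - water_after)
    return dry_after / dry_before

# Example: boiling halves the measured content per 100 g fresh weight,
# but the food also takes up water (water fraction 0.80 -> 0.90).
rf = retention_factor(content_before=10.0, content_after=5.0,
                      water_before=0.80, water_after=0.90)
print(round(rf, 2))  # 1.0: on a dry-weight basis nothing was lost
```

Without the water-content adjustment, this example would wrongly suggest a 50% loss of the polyphenol during processing.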
Abstract:
BACKGROUND: Physical activity and sedentary behaviour in youth have been reported to vary by sex, age, weight status and country. However, supporting data are often self-reported and/or do not encompass a wide range of ages or geographical locations. This study aimed to describe objectively-measured physical activity and sedentary time patterns in youth. METHODS: The International Children's Accelerometry Database (ICAD) consists of ActiGraph accelerometer data from 20 studies in ten countries, processed using common data reduction procedures. Analyses were conducted on 27,637 participants (2.8-18.4 years) who provided at least three days of valid accelerometer data. Linear regression was used to examine associations between age, sex, weight status, country and physical activity outcomes. RESULTS: Boys were less sedentary and more active than girls at all ages. After 5 years of age there was an average cross-sectional decrease of 4.2 % in total physical activity with each additional year of age, due mainly to lower levels of light-intensity physical activity and greater time spent sedentary. Physical activity did not differ by weight status in the youngest children, but from age seven onwards, overweight/obese participants were less active than their normal weight counterparts. Physical activity varied between samples from different countries, with a 15-20 % difference between the highest and lowest countries at age 9-10 and a 26-28 % difference at age 12-13. CONCLUSIONS: Physical activity differed between samples from different countries, but the associations between demographic characteristics and physical activity were consistently observed. Further research is needed to explore environmental and sociocultural explanations for these differences.
Abstract:
The EpiNet project has been established to facilitate investigator-initiated clinical research in epilepsy, to undertake epidemiological studies, and to simultaneously improve the care of patients who have records created within the EpiNet database. The EpiNet database has recently been adapted to collect detailed information regarding status epilepticus. An incidence study is now underway in Auckland, New Zealand in which the incidence of status epilepticus in the greater Auckland area (population: 1.5 million) will be calculated. The form that has been developed for this study can be used in the future to collect information for randomized controlled trials in status epilepticus. This article is part of a Special Issue entitled "Status Epilepticus".
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, long ago by the standards of web-related concepts and technologies, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on deep web sites in English, so their findings may be biased, especially given the steady growth of non-English web content.
In this way, surveying national segments of the deep Web is of interest not only to national communities but to the web community as a whole. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment. Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that the search interfaces to the web databases of interest are already discovered and known to the query systems. However, such assumptions rarely hold, mainly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all previous approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are themselves web forms.
At present, a user has to manually provide input values to search interfaces and then extract the required data from the result pages. Filling out forms manually is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
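The core task being automated can be illustrated with a minimal sketch. The URL and field names are hypothetical, and this uses only Python's standard library rather than the thesis's data model or form query language: a GET-based search interface is modelled as an action URL plus named fields, and a filled-out query is rendered as the URL a crawler would fetch:

```python
from urllib.parse import urlencode

def build_form_query(action_url, fields, values):
    """Model a GET-based search interface as an action URL plus
    named form fields, and render one filled-out query as a URL.
    Unknown field names are rejected."""
    unknown = set(values) - set(fields)
    if unknown:
        raise ValueError(f"unknown form fields: {sorted(unknown)}")
    # Fields left empty keep the form's default (here: omitted).
    params = {name: values[name] for name in fields if name in values}
    return action_url + "?" + urlencode(params)

# Hypothetical book-search interface with three fields.
url = build_form_query(
    "http://books.example.com/search",
    fields=["title", "author", "year"],
    values={"title": "deep web", "year": "2008"},
)
print(url)  # http://books.example.com/search?title=deep+web&year=2008
```

Real search interfaces are far messier (POST forms, JavaScript, session state), which is precisely why a richer data model for interfaces and result pages is needed.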
Abstract:
This paper describes Question Waves, an algorithm that can be applied to social search protocols such as Asknext or Sixearch. In this model, queries are propagated through the social network, with faster propagation through more trusted acquaintances. Question Waves uses only local information to make decisions and to obtain a ranking of the answers. With Question Waves, the answers that arrive first are the most likely to be relevant, and we computed the correlation between answer relevance and order of arrival to demonstrate this result. The correlations obtained are equivalent to those of heuristics that use global knowledge, such as profile similarity among users or the expertise value of an agent. Because Question Waves is compatible with the social search protocol Asknext, a search can be stopped when enough relevant answers have been found; stopping the search early introduces only a minimal risk of missing the best possible answer. Furthermore, Question Waves does not require a re-ranking algorithm because the results arrive already sorted.
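The idea of trust-controlled propagation can be sketched as a small simulation. The network, trust values and delay function below are invented for illustration and are not the authors' implementation: each edge forwards the query after a delay inversely proportional to the trust in that acquaintance, so nodes on well-trusted paths receive (and can answer) the query earlier:

```python
import heapq

def propagation_order(graph, source):
    """Simulate a query spreading from `source` through a social
    network. `graph[u]` maps neighbour -> trust in (0, 1]; an edge
    forwards the query after delay 1/trust. Returns nodes in order
    of first arrival (a Dijkstra-style earliest-arrival search)."""
    arrival = {source: 0.0}
    heap = [(0.0, source)]
    order = []
    while heap:
        t, node = heapq.heappop(heap)
        if t > arrival.get(node, float("inf")):
            continue  # stale heap entry, a faster path was found
        order.append(node)
        for neighbour, trust in graph.get(node, {}).items():
            t_next = t + 1.0 / trust
            if t_next < arrival.get(neighbour, float("inf")):
                arrival[neighbour] = t_next
                heapq.heappush(heap, (t_next, neighbour))
    return order

# Toy network: alice trusts bob highly and carol much less, so
# dave is reached sooner through bob despite carol's strong link.
net = {
    "alice": {"bob": 0.9, "carol": 0.3},
    "bob": {"dave": 0.8},
    "carol": {"dave": 0.9},
}
print(propagation_order(net, "alice"))  # ['alice', 'bob', 'dave', 'carol']
```

Under this delay model, arrival order doubles as a relevance-by-trust ranking, which is the property the paper's correlation analysis tests.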
Abstract:
Background: Computerised databases of primary care clinical records are widely used for epidemiological research. In Catalonia, the Information System for the Development of Research in Primary Care (SIDIAP) aims to promote the development of research based on high-quality validated data from primary care electronic medical records. Objective: The purpose of this study is to create and validate a scoring system (Registry Quality Score, RQS) that will enable primary care practices (PCPs) to be selected as providers of research-usable data based on the completeness of their registers. Methods: Diseases likely to be representative of common diagnoses seen in primary care were selected for the RQS calculations. The observed/expected cases ratio was calculated for each disease; once this ratio had been estimated for each of the selected conditions, the ratios were summed to obtain the final RQS. Rate comparisons between observed and published prevalences of diseases not included in the RQS calculations (atrial fibrillation, diabetes, obesity, schizophrenia, stroke, urinary incontinence and Crohn's disease) were used to set the RQS cut-off that enables researchers to select PCPs with research-usable data. Results: Apart from Crohn's disease, all observed prevalences matched the published ones from the fourth RQS quintile (60th percentile) onwards. This RQS cut-off yielded a total population of 1 936 443 (39.6% of the total SIDIAP population). Conclusions: SIDIAP is highly representative of the population of Catalonia in terms of geographical, age and sex distributions. We report the usefulness of rate comparison as a valid method to establish research-usable data within primary care electronic medical records.
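The score construction (summing observed/expected case ratios over a set of marker diseases) can be sketched as follows. The disease names and counts are invented for the example; in practice the expected counts come from published prevalences applied to each practice's population:

```python
def registry_quality_score(observed, expected):
    """RQS for one primary care practice: the sum, over the selected
    marker diseases, of the observed/expected cases ratio. A ratio
    near 1 for a disease means its register is close to complete."""
    return sum(observed[d] / expected[d] for d in expected)

# Hypothetical practice with three marker diseases.
observed = {"hypertension": 950, "asthma": 180, "copd": 60}
expected = {"hypertension": 1000, "asthma": 200, "copd": 100}
rqs = registry_quality_score(observed, expected)
print(round(rqs, 2))  # 2.45 out of a maximum of ~3.0
```

Practices are then ranked by RQS, and the validation against diseases outside the score sets the quantile cut-off above which a practice's data are considered research-usable.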
Abstract:
Based on a specially created mass spectral database built from 23 tetradecenyl and 22 hexadecenyl acetate standards, along with Kováts retention indices obtained on a very polar stationary phase [poly(biscyanopropyl siloxane)] (SP 2340), (Z)-9-hexadecenyl acetate, (Z)-11-hexadecenyl acetate and (E)-8-hexadecenyl acetate were identified in active pheromone extracts of Elasmopalpus lignosellus. This identification was more efficient than in our previous study using gas chromatography/mass spectrometry with a dimethyl disulfide derivative, in which only the first two acetates could be identified. The acetate composition of the pheromone gland differed from region to region in Brazil, and from that of the Tifton (GA, USA) population, suggesting polymorphism or a different sub-species.
Abstract:
The aromatic flora of the Amazon has been inventoried for 30 years. Over this period, more than 500 field trips were made, over 2500 plants were collected, and more than 2000 essential oils and aroma concentrates were obtained, all of them analysed by GC and GC-MS. This work led to the creation of a database of the aromatic plants of the Amazon, which catalogues general information on 1250 specimens. The database has allowed the publication of the chemical composition of the oils and aromas of more than 350 species, associated with a larger number of chemical types. The essential oils of many species offer optimum conditions for economic exploitation and use in the national and international markets for fragrances, cosmetics, and agricultural and household pesticides.