967 results for Object oriented database
Abstract:
This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. This framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, comparing the results of the eight participating teams. We describe the available database, comprising multi-center, multi-vendor and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures for quantitative analysis have been proposed. The results of the evaluation indicate that semi-automatic methods can segment the vessel lumen and media with an accuracy comparable to manual annotation, and that encouraging results can also be obtained with fully-automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved.
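The abstract does not name the three proposed performance measures, so the sketch below uses common stand-ins for benchmarking lumen/media segmentations (Jaccard index, Dice coefficient and symmetric Hausdorff distance), assuming NumPy/SciPy, boolean masks and (N, 2) contour arrays; it is illustrative only, not the paper's actual metrics.

```python
# Illustrative segmentation agreement measures; not the paper's own.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def jaccard(pred: np.ndarray, ref: np.ndarray) -> float:
    """Jaccard index between two boolean segmentation masks."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union if union else 1.0

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice coefficient between two boolean segmentation masks."""
    inter = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    return 2 * inter / total if total else 1.0

def hausdorff(pred_contour: np.ndarray, ref_contour: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, 2) contour point sets."""
    return max(directed_hausdorff(pred_contour, ref_contour)[0],
               directed_hausdorff(ref_contour, pred_contour)[0])

if __name__ == "__main__":
    a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
    b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
    print(jaccard(a, b), dice(a, b))
```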
Abstract:
The purpose of our project is to contribute to earlier diagnosis of AD and better estimates of its severity through automatic analysis based on new biomarkers extracted by non-invasive intelligent methods. The methods selected in this case are speech biomarkers oriented to Spontaneous Speech and Emotional Response Analysis. Thus the main goal of the present work is feature search in Spontaneous Speech oriented to pre-clinical evaluation, for the definition of tests for AD diagnosis by a one-class classifier. The one-class classification problem differs from multi-class classification in one essential aspect: in one-class classification it is assumed that only information about one of the classes, the target class, is available. In this work we explore the problem of imbalanced datasets, which is particularly crucial in applications where the goal is to maximize recognition of the minority class, as in medical diagnosis. The use of information about outliers and Fractal Dimension features improves the system performance.
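As context for the one-class setting described above, here is a minimal sketch assuming scikit-learn's OneClassSVM and synthetic feature vectors in place of the actual speech and Fractal Dimension features: the model is trained on target-class samples only and flags everything else as an outlier.

```python
# One-class classification sketch: train on the target class alone,
# then label new samples as target (+1) or outlier (-1).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_target = rng.normal(0.0, 1.0, size=(200, 4))           # target-class features
X_test = np.vstack([rng.normal(0.0, 1.0, size=(20, 4)),  # more targets
                    rng.normal(4.0, 1.0, size=(20, 4))])  # outliers

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_target)
pred = clf.predict(X_test)  # +1 = target class, -1 = outlier
print("target recall:", (pred[:20] == 1).mean(),
      "outlier recall:", (pred[20:] == -1).mean())
```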
Abstract:
Database marketing can be merely a tool for carrying out marketing activities, but it can also be seen as an essential part of customer relationship management. From the customer relationship management perspective, database marketing aims at customer satisfaction and loyalty, as well as at the productivity and profitability of the customer relationship, which can be achieved through effective information management. This enables tailored marketing actions and improves targeting and segmentation, also on the basis of customer profitability. A normative case study, conducted in the Netherlands in a European value-adding IT reseller channel, shows that database marketing, especially from the customer relationship management perspective, would be a suitable means of increasing customer satisfaction and profitability. It would also make internal information flows and marketing activities more effective, for example marketing communications, campaign management and sales processes in business-to-business commerce.
Abstract:
Polyphenols are a major class of bioactive phytochemicals whose consumption may play a role in the prevention of a number of chronic diseases such as cardiovascular diseases, type II diabetes and cancers. Phenol-Explorer, launched in 2009, is the only freely available web-based database on the content of polyphenols in food and their in vivo metabolism and pharmacokinetics. Here we report the third release of the database (Phenol-Explorer 3.0), which adds data on the effects of food processing on polyphenol contents in foods. Data on >100 foods, covering 161 polyphenols or groups of polyphenols before and after processing, were collected from 129 peer-reviewed publications and entered into new tables linked to the existing relational design. The effect of processing on polyphenol content is expressed in the form of retention factor coefficients, or the proportion of a given polyphenol retained after processing, adjusted for change in water content. The result is the first database on the effects of food processing on polyphenol content and, following the model initially defined for Phenol-Explorer, all data may be traced back to original sources. The new update will allow polyphenol scientists to more accurately estimate polyphenol exposure from dietary surveys. Database URL: http://www.phenol-explorer.eu
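A minimal sketch of the retention-factor idea follows, assuming the common yield-factor convention (the content ratio weighted by raw and processed weights, which absorbs the change in water content); this is an illustration, not necessarily Phenol-Explorer's exact formula, and the numbers are hypothetical.

```python
# Retention factor sketch: proportion of a polyphenol retained after
# processing, corrected for weight (water) change via the yield factor.
def retention_factor(content_raw_mg_per_100g: float,
                     content_processed_mg_per_100g: float,
                     weight_raw_g: float,
                     weight_processed_g: float) -> float:
    """RF = (processed content x processed weight) / (raw content x raw weight)."""
    retained = content_processed_mg_per_100g * weight_processed_g
    initial = content_raw_mg_per_100g * weight_raw_g
    return retained / initial

# Hypothetical example: 100 g raw onion at 48 mg/100 g quercetin yields
# 90 g boiled onion at 35 mg/100 g -> RF of about 0.66.
print(round(retention_factor(48, 35, 100, 90), 2))
```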
Abstract:
BACKGROUND: Physical activity and sedentary behaviour in youth have been reported to vary by sex, age, weight status and country. However, supporting data are often self-reported and/or do not encompass a wide range of ages or geographical locations. This study aimed to describe objectively-measured physical activity and sedentary time patterns in youth. METHODS: The International Children's Accelerometry Database (ICAD) consists of ActiGraph accelerometer data from 20 studies in ten countries, processed using common data reduction procedures. Analyses were conducted on 27,637 participants (2.8-18.4 years) who provided at least three days of valid accelerometer data. Linear regression was used to examine associations between age, sex, weight status, country and physical activity outcomes. RESULTS: Boys were less sedentary and more active than girls at all ages. After 5 years of age there was an average cross-sectional decrease of 4.2 % in total physical activity with each additional year of age, due mainly to lower levels of light-intensity physical activity and greater time spent sedentary. Physical activity did not differ by weight status in the youngest children, but from age seven onwards, overweight/obese participants were less active than their normal weight counterparts. Physical activity varied between samples from different countries, with a 15-20 % difference between the highest and lowest countries at age 9-10 and a 26-28 % difference at age 12-13. CONCLUSIONS: Physical activity differed between samples from different countries, but the associations between demographic characteristics and physical activity were consistently observed. Further research is needed to explore environmental and sociocultural explanations for these differences.
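A minimal sketch of the kind of linear model described in METHODS is shown below; the statsmodels formula API is real, but the variable names and synthetic data are hypothetical stand-ins for the ICAD variables.

```python
# Linear regression of a physical activity outcome on age, sex,
# weight status and country (synthetic illustration).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.uniform(3, 18, n),
    "sex": rng.choice(["boy", "girl"], n),
    "weight_status": rng.choice(["normal", "overweight"], n),
    "country": rng.choice(["UK", "US", "AU"], n),
})
# Synthetic outcome: activity declines with age, as reported above.
df["total_pa"] = 700 - 25 * df["age"] + rng.normal(0, 50, n)

model = smf.ols("total_pa ~ age + C(sex) + C(weight_status) + C(country)",
                data=df).fit()
print(model.params)
```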
Abstract:
The EpiNet project has been established to facilitate investigator-initiated clinical research in epilepsy, to undertake epidemiological studies, and to simultaneously improve the care of patients who have records created within the EpiNet database. The EpiNet database has recently been adapted to collect detailed information regarding status epilepticus. An incidence study is now underway in Auckland, New Zealand in which the incidence of status epilepticus in the greater Auckland area (population: 1.5 million) will be calculated. The form that has been developed for this study can be used in the future to collect information for randomized controlled trials in status epilepticus. This article is part of a Special Issue entitled "Status Epilepticus".
Abstract:
The study examined the tools and practices used for transferring and utilizing risk information in the sales and implementation phases of a project. A second objective was to identify the factors that affect the effective transfer and utilization of risk information in a project environment. The thesis compares the risk management practices presented in the literature with the company's risk management procedures and looks for suitable ways to make the current use of risk information more effective. The study was carried out as a case study, and the material was collected through in-depth interviews, by updating the risk checklist, by examining documents available from the company's systems, and from the results of the GPSII development cooperation project. As a result, several problems related to the utilization of risk information were identified. In addition, a clear need was found both to develop the database system for change management and to increase the number and regularity of risk checklist reviews.
Abstract:
DNA is nowadays swabbed routinely to investigate serious and volume crimes, but research remains scarce when it comes to determining the criteria that may impact the success rate of DNA swabs taken on different surfaces and in different situations. To investigate these criteria in fully operational conditions, the DNA analysis results of 4772 swabs taken by the forensic unit of a police department in Western Switzerland over a 2.5-year period (2012-2014) in volume crime cases were considered. A representative and random sample of 1236 swab analyses was extensively examined and codified, describing several criteria such as whether the swabbing was performed at the scene or in the lab, the zone of the scene where it was performed, the kind of object or surface that was swabbed, whether the target specimen was a touch surface or a biological fluid, and whether the swab targeted a single surface or combined different surfaces. The impact of each criterion, and of their combination, was assessed with regard to the success rate of DNA analysis, measured through the quality of the resulting profile and whether the profile resulted in a hit in the national database or not. Results show that some situations - such as swabs taken on door and window handles for instance - have a higher success rate than average swabs. Conversely, other situations lead to a marked decrease in the success rate, which should discourage further analyses of such swabs. Results also confirm that targeting a DNA swab on a single surface is preferable to swabbing different surfaces with the intent to aggregate cells deposited by the offender. Such results assist in predicting the chance that the analysis of a swab taken in a given situation will lead to a positive result. The study could therefore inform an evidence-based approach to decision-making at the crime scene (what to swab or not) and at the triage step (what to analyse or not), thus contributing to saving resources and increasing the efficiency of forensic science efforts.
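A minimal sketch of the kind of success-rate tabulation described is shown below, assuming pandas; the rows and column names are hypothetical, not the study's data.

```python
# Success rate (share of swabs yielding a database hit) per situation.
import pandas as pd

swabs = pd.DataFrame({
    "surface_type":   ["door_handle", "door_handle", "window_handle",
                       "tool", "tool", "window_handle"],
    "single_surface": [True, True, True, False, True, False],
    "database_hit":   [1, 0, 1, 0, 1, 0],
})

rates = (swabs
         .groupby(["surface_type", "single_surface"])["database_hit"]
         .agg(success_rate="mean", n="size")
         .reset_index())
print(rates.sort_values("success_rate", ascending=False))
```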
Abstract:
To the editor: The Visa Qualifying Examination is a two-day test composed of approximately 950 multiple-choice questions concerning the basic and clinical sciences....
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, long ago by the standards of any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. One can therefore expect the findings of these surveys to be biased, especially given the steady increase in non-English web content. Accordingly, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome, and infeasible for the complex queries that are essential to many web searches, especially in the area of e-commerce. Hence, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
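A minimal sketch of one subtask mentioned above, extracting input fields and their labels from a static HTML search form, is shown below. It assumes BeautifulSoup (bs4) and sidesteps the JavaScript-rich and non-HTML forms that the I-Crawler itself is designed to handle.

```python
# Extract form fields and their human-readable labels from static HTML.
from bs4 import BeautifulSoup

html = """
<form action="/search">
  <label for="q">Title keywords</label><input id="q" name="q">
  <label for="yr">Year</label><select id="yr" name="year"></select>
</form>
"""

soup = BeautifulSoup(html, "html.parser")
# Map each label's "for" attribute to its visible text.
labels = {lab["for"]: lab.get_text(strip=True)
          for lab in soup.find_all("label") if lab.has_attr("for")}

for field in soup.find_all(["input", "select", "textarea"]):
    name = field.get("name")
    label = labels.get(field.get("id", ""), "<no label>")
    print(f"field={name!r}  label={label!r}")
```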
Abstract:
Contact stains recovered at break-in crime scenes are frequently characterized by mixtures of DNA from several persons. Broad knowledge of the relative contribution of DNA left behind by different users over time is of paramount importance. Such information might help crime investigators to robustly evaluate the possibility of detecting a specific (or known) individual's DNA profile based on the type and history of an object. To address this issue, a contact stain simulation-based protocol was designed. Fourteen volunteers acting as either the first or the second user of an object were recruited. The first user was required to regularly handle/wear 9 different items over an 8-10-day period, whilst the second user handled/wore them for 5, 30 and 120 min in three independent simulation sessions, producing a total of 231 stains. Subsequently, the relative DNA profile contribution of each individual pair was investigated. Preliminary results showed a progressive increase in the percentage contribution of the second user compared to the first. Interestingly, the second user generally became the major DNA contributor when most objects were handled/worn for 120 min. Furthermore, the observation of unexpected additional alleles will prompt the investigation of indirect DNA transfer events.
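A minimal sketch of a relative-contribution estimate for a two-person mixture follows, assuming summed peak heights of alleles already attributed to each user; this is a deliberate simplification of real mixture interpretation, and the peak heights are hypothetical.

```python
# Relative contribution of two users to a mixed stain, estimated from
# summed peak heights of alleles attributed to each user (simplified).
def percent_contribution(user1_peaks, user2_peaks):
    total1, total2 = sum(user1_peaks), sum(user2_peaks)
    grand = total1 + total2
    return 100 * total1 / grand, 100 * total2 / grand

p1, p2 = percent_contribution([1200, 950, 1100], [400, 380, 290])
print(f"first user: {p1:.0f}%  second user: {p2:.0f}%")
```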
Abstract:
In this thesis I examine Service Oriented Architecture (SOA), considering both its positive and negative qualities for business organizations and IT. In SOA, services are loosely coupled and invoked through standard interfaces to make business processes independent of the underlying technology. As an architecture, SOA brings the key benefit of service reuse, which may mean anything from simple application reuse to taking advantage of entire business processes across enterprises. SOA also promises interoperability, especially through the Web services standards that enable platform independence. Cost efficiency results mainly from savings in IT maintenance and reduced development costs. The most severe limitations of SOA are performance implications and security issues, but its applicability is also limited. Additional disadvantages of a service-oriented approach include problems in data management and questions of complexity, and the lack of agreement about SOA and its twofold nature as both a business and a technology approach leads to problematic interpretation of the available information. In this thesis I identify the benefits and limitations of SOA for the purpose described above and propose that companies need to consider the decision to implement SOA carefully, to determine whether the benefits will outweigh the costs in their individual case.
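A minimal sketch of the loose coupling described above, with illustrative names only: the business process depends solely on a service contract, so implementations can be swapped without touching the caller.

```python
# Loose coupling via a standard interface: callers never see the
# underlying technology, only the abstract service contract.
from abc import ABC, abstractmethod

class CustomerService(ABC):
    @abstractmethod
    def find_customer(self, customer_id: str) -> dict: ...

class LegacyDbCustomerService(CustomerService):
    def find_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "source": "legacy RDBMS"}

class WebServiceCustomerService(CustomerService):
    def find_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "source": "web service endpoint"}

def billing_process(service: CustomerService, customer_id: str) -> None:
    # The business process is independent of which implementation is wired in.
    print(service.find_customer(customer_id))

billing_process(LegacyDbCustomerService(), "C42")
billing_process(WebServiceCustomerService(), "C42")
```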
Abstract:
Humans like some colours and dislike others, but which particular colours and why remains to be understood. Empirical studies on colour preferences generally targeted most preferred colours, but rarely least preferred (disliked) colours. In addition, findings are often based on general colour preferences leaving open the question whether results generalise to specific objects. Here, 88 participants selected the colours they preferred most and least for three context conditions (general, interior walls, t-shirt) using a high-precision colour picker. Participants also indicated whether they associated their colour choice to a valenced object or concept. The chosen colours varied widely between individuals and contexts and so did the reasons for their choices. Consistent patterns also emerged, as most preferred colours in general were more chromatic, while for walls they were lighter and for t-shirts they were darker and less chromatic compared to least preferred colours. This meant that general colour preferences could not explain object specific colour preferences. Measures of the selection process further revealed that, compared to most preferred colours, least preferred colours were chosen more quickly and were less often linked to valenced objects or concepts. The high intra- and inter-individual variability in this and previous reports furthers our understanding that colour preferences are determined by subjective experiences and that most and least preferred colours are not processed equally.
Abstract:
The importance of the regional level in research has risen in the last few decades, and a vast literature in fields such as evolutionary and institutional economics, network theories, innovation and learning systems, as well as sociology, has focused on regional-level questions. Recently, policy makers and regional actors have also begun to pay increasing attention to the knowledge economy and its needs in general, and to the connectivity and support structures of regional clusters in particular. Nowadays knowledge is generally considered the most important source of competitive advantage, but even the most specialised forms of knowledge are becoming a short-lived resource, for example due to the accelerating pace of technological change. This emphasizes the need for foresight activities at the national, regional and organizational levels, and for the integration of foresight and innovation activities. In a regional setting this development poses great challenges, especially in regions that have no university and thus usually very limited resources for research activities. The research problem of this dissertation is likewise related to the need to better incorporate the information produced by a foresight process so that it facilitates, and is used in, regional practice-based innovation processes. This dissertation is a constructive case study, the case being the Lahti region and the network-facilitating innovation policy adopted in that region. The dissertation consists of a summary and five articles, and during the research process a construct, or conceptual model, for solving this real-life problem was developed. It is also being implemented as part of the network-facilitating innovation policy in the Lahti region.