864 results for Information search – models


Relevance:

30.00%

Publisher:

Abstract:

The global business environment is changing. New technologies are transforming the operating environment, and the rules of the economy are changing rapidly. New business models are needed. The objective of the study was to analyze the current state of the information and communications technology (ICT) industry from a strategic and competitive-analysis perspective, and to form a picture of the ICT industry and its major players in Europe and the USA. The study analyzed five large ICT companies and was both qualitative and quantitative in nature. The companies were analyzed using numerical and qualitative material. The study was based on literature, articles, research reports, the companies' websites and annual reports. As a result of the study, both similarities and differences could be identified between the companies' business models and their financial performance.

Relevance:

30.00%

Publisher:

Abstract:

We consider the numerical treatment of the optical flow problem by evaluating the performance of the trust region method versus the line search method. To the best of our knowledge, the trust region method is studied here for the first time for variational optical flow computation. Four different optical flow models are used to test the performance of the proposed algorithm, combining linear and nonlinear data terms with quadratic and TV regularization. We show that trust region often performs better than line search, especially in the presence of non-linearity and non-convexity in the model.
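The comparison can be reproduced in miniature with off-the-shelf optimizers. The hedged sketch below (not the authors' optical flow solver) minimizes a standard non-convex test function with SciPy's trust-region Newton-CG method and with a line-search quasi-Newton method, mirroring the kind of head-to-head evaluation described above.

```python
# Minimal comparison of a trust-region method vs a line-search method on a
# non-convex test problem (Rosenbrock), in the spirit of the optical flow
# study summarized above. Illustrative only.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0, -0.5, 0.8])  # arbitrary starting point

# Line-search quasi-Newton (BFGS uses a Wolfe line search internally).
ls = minimize(rosen, x0, method="BFGS", jac=rosen_der)

# Trust-region Newton-CG (uses gradient and Hessian information).
tr = minimize(rosen, x0, method="trust-ncg", jac=rosen_der, hess=rosen_hess)

for name, res in [("line search (BFGS)", ls), ("trust region (trust-ncg)", tr)]:
    print(f"{name:26s} f* = {res.fun:.3e}  iters = {res.nit}  f-evals = {res.nfev}")
```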

Relevance:

30.00%

Publisher:

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we face many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves.
In the second part of the thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation but also, and maybe even more so, for performing Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
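As a rough illustration of the error-model idea, the sketch below learns, from a small training subset where both proxy and exact responses are available, a regression between their principal-component scores and then corrects all remaining proxy curves. FPCA is approximated here by ordinary PCA on discretized curves, the data are synthetic, and none of this is the thesis code.

```python
# Sketch of a functional error model: learn, from a training subset where both
# proxy and exact flow responses are known, a mapping that corrects the
# remaining proxy curves. PCA stands in for FPCA; data and names are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)                      # time axis of the curves
n_real, n_train = 200, 20                           # realizations / training subset

# Synthetic "exact" responses and cheap, systematically biased "proxy" responses.
scale = rng.uniform(0.5, 2.0, size=(n_real, 1))
exact = 1.0 - np.exp(-scale * t)
proxy = 0.8 * (1.0 - np.exp(-1.2 * scale * t))

train = rng.choice(n_real, size=n_train, replace=False)

# Reduce both families of curves to a few principal-component scores.
pca_p, pca_e = PCA(n_components=3), PCA(n_components=3)
zp = pca_p.fit_transform(proxy)
ze_train = pca_e.fit_transform(exact[train])

# Error model: regression from proxy scores to exact scores (training subset only).
reg = LinearRegression().fit(zp[train], ze_train)

# Correct every proxy curve, i.e. predict the expected exact response.
corrected = pca_e.inverse_transform(reg.predict(zp))

rmse_proxy = np.sqrt(np.mean((proxy - exact) ** 2))
rmse_corr = np.sqrt(np.mean((corrected - exact) ** 2))
print(f"RMSE proxy vs exact:     {rmse_proxy:.4f}")
print(f"RMSE corrected vs exact: {rmse_corr:.4f}")
```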

Relevance:

30.00%

Publisher:

Abstract:

Objectives: The purpose of this study was to determine possible differences in deflection between two needles of the same length and external gauge but with different internal gauges during truncal block of the inferior alveolar nerve. The initial working hypothesis was that greater deflection might be expected with larger internal gauge needles. Study design: Four clinicians subjected 346 patients to inferior alveolar nerve block and infiltrating anesthesia of the buccal nerve trajectory for the surgical or conventional extraction of the lower third molar. A non-autoaspirating syringe system with two types of needle was used: a standard 27-gauge x 35-mm needle with an internal gauge of 0.215 mm or an XL Monoprotect® 27-gauge x 35-mm needle with an internal gauge of 0.265 mm. The following information was systematically recorded for each patient: needle type, gender, anesthetic technique (direct or indirect truncal block), the number of bone contacts during the procedure, the extraction side, the practitioner performing the technique, and blood aspiration (positive or negative). Results: A total of 346 needles were used: 190 standard needles (27-gauge x 35-mm with an internal gauge of 0.215 mm) and 156 XL Monoprotect® needles. Deflection was observed in 49.1% of cases (170 needles), of which 94 were standard needles and 76 were XL Monoprotect®. Needle torsion ranged from 0° to 6°. Conclusions: No significant differences in deflection were recorded with respect to internal gauge, operator, extraction side, anesthetic technique, or the number of bone contacts during the procedure.
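The reported counts are enough to rerun the needle-type comparison with a standard chi-square test of independence (94 of 190 standard needles and 76 of 156 XL Monoprotect® needles deflected); the sketch below is only a plausibility check, not the authors' original statistical analysis.

```python
# Chi-square test of deflection vs needle type, using the counts in the abstract.
from scipy.stats import chi2_contingency

table = [[94, 190 - 94],   # standard 27G x 35 mm (internal gauge 0.215 mm): deflected / not
         [76, 156 - 76]]   # XL Monoprotect 27G x 35 mm (internal gauge 0.265 mm): deflected / not

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
# A large p-value here is consistent with the reported lack of a significant
# difference in deflection between the two needle types.
```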

Relevance:

30.00%

Publisher:

Abstract:

The goal of the study was to find a suitable framework for understanding business models and to examine the business models of the chosen companies in packaging machinery manufacturing. Good practices and tips were sought in business models that have helped companies succeed. The future of the packaging industry is also examined in the face of different kinds of changes and the influence they have on machinery manufacturers' business models. The theory part presents the history of business models and the framework best suited to this study. The chosen case companies are discussed according to this framework and compared with each other to point out the differences. Based on the good practices observed in the companies and on information from other sources, a new business model is constructed, including the issues that should be considered when constructing a new business model. The information sources of this study were interviews, annual reports, company presentations and web pages. The study was an interpretative case study.

Relevance:

30.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers' results. Such search interfaces provide web users with online access to myriad databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions mostly do not hold, precisely because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and not feasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
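To make the "querying web databases" task concrete, the hedged sketch below shows one way an automated agent could fill out a web search form and extract data from the result page. The URL, the query field name 'q' and the result markup are placeholders, and this is not the I-Crawler or the prototype query system described in the thesis.

```python
# Hedged sketch of programmatically querying a database hidden behind a web
# search form. All site-specific names here are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

SEARCH_URL = "https://example.org/library/search"   # hypothetical search interface

def query_web_database(term: str):
    # 1. Fetch the page containing the search form and locate the form element.
    page = requests.get(SEARCH_URL, timeout=10)
    form = BeautifulSoup(page.text, "html.parser").find("form")
    if form is None:
        raise RuntimeError("no search form found at " + SEARCH_URL)
    action = requests.compat.urljoin(SEARCH_URL, form.get("action") or SEARCH_URL)
    method = (form.get("method") or "get").lower()

    # 2. Fill the assumed query field and submit the form.
    payload = {"q": term}   # field name 'q' is an assumption, not a discovered label
    if method == "post":
        result_page = requests.post(action, data=payload, timeout=10)
    else:
        result_page = requests.get(action, params=payload, timeout=10)

    # 3. Extract data from the dynamically generated result page.
    soup = BeautifulSoup(result_page.text, "html.parser")
    return [row.get_text(strip=True) for row in soup.select("div.result")]

if __name__ == "__main__":
    for hit in query_web_database("information search models"):
        print(hit)
```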

Relevance:

30.00%

Publisher:

Abstract:

Drying is a major step in the manufacturing process in the pharmaceutical industry, and the selection of the dryer and its operating conditions is sometimes a bottleneck. In spite of the difficulties, these bottlenecks are handled with utmost care because of good manufacturing practice (GMP) requirements and the industry's image in the global market. The purpose of this work is to study the use of existing knowledge for the selection of a dryer and its operating conditions for drying pharmaceutical materials, with the help of methods such as case-based reasoning and decision trees, in order to reduce the time and expenditure required for research. The work consisted of two major parts: a literature survey on the theories of spray drying, case-based reasoning and decision trees; and a working part comprising data acquisition and testing of the models based on existing and upgraded data. Testing resulted in a combination of the two models, case-based reasoning and decision trees, leading to more specific results compared with conventional methods.
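A minimal sketch of how the two methods can be combined is shown below: nearest-neighbour retrieval over a tiny case base stands in for case-based reasoning, and a shallow decision tree learned from the same cases provides explicit selection rules. The features, dryer classes and cases are invented for illustration and are not the knowledge base used in this work.

```python
# Hedged sketch: case-based retrieval plus a decision tree for dryer selection.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

# Invented case base: [feed moisture %, heat sensitivity (0-1), batch size kg]
cases = np.array([
    [70, 0.9,   5],
    [65, 0.8,  10],
    [30, 0.2, 500],
    [25, 0.3, 800],
    [50, 0.5, 100],
    [45, 0.6, 120],
])
dryers = np.array(["freeze", "freeze", "spray", "spray", "fluid_bed", "fluid_bed"])

new_case = np.array([[60, 0.7, 50]])   # material to be dried (hypothetical)

# CBR step: retrieve the most similar past cases.
nn = NearestNeighbors(n_neighbors=2).fit(cases)
_, idx = nn.kneighbors(new_case)
print("retrieved cases:", idx[0], "->", dryers[idx[0]])

# Decision-tree step: a compact rule model learned from the same case base.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(cases, dryers)
print("tree suggestion:", tree.predict(new_case)[0])
```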

Relevance:

30.00%

Publisher:

Abstract:

In vivo ¹H MR spectroscopy allows the non-invasive characterization of brain metabolites and has been used to study brain metabolic changes in a wide range of neurodegenerative diseases. The prion diseases form a group of fatal neurodegenerative diseases, also described as transmissible spongiform encephalopathies. The mechanism by which prions elicit brain damage remains unclear, and therefore different transgenic mouse models of prion disease have been created. We performed an in vivo longitudinal ¹H MR spectroscopy study at 14.1 T with the aim of measuring the neurochemical profile of Prnp-/- and PrPΔ32-121 mice in the hippocampus and cerebellum. Using high-field MR spectroscopy we were able to analyze in detail the in vivo brain metabolites of Prnp-/- and PrPΔ32-121 mice. An increase in myo-inositol, glutamate and lactate concentrations, together with a decrease in N-acetylaspartate concentration, was observed, providing additional information to the previous measurements.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes Question Waves, an algorithm that can be applied to social search protocols such as Asknext or Sixearch. In this model, queries are propagated through the social network, with faster propagation through more trustable acquaintances. Question Waves uses local information to make decisions and obtain an answer ranking. With Question Waves, the answers that arrive first are the most likely to be relevant, and we computed the correlation of answer relevance with order of arrival to demonstrate this result. We obtained correlations equivalent to those of heuristics that use global knowledge, such as profile similarity among users or the expertise value of an agent. Because Question Waves is compatible with the social search protocol Asknext, it is possible to stop a search when enough relevant answers have been found; additionally, stopping the search early introduces only a minimal risk of not obtaining the best possible answer. Furthermore, Question Waves does not require a re-ranking algorithm because the results arrive already sorted.
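A hedged sketch of the propagation idea is given below: a query spreads over a small social graph with a delay inversely proportional to the trust placed in each acquaintance, so answers from well-trusted paths arrive first and are implicitly ranked by arrival time. The graph, trust values and answering nodes are invented; this illustrates the principle, not the published Question Waves algorithm.

```python
# Trust-weighted query propagation on a small social network: higher trust
# means shorter propagation delay, and answers are ranked by arrival time.
import heapq

# trust[u][v] in (0, 1]; propagation delay along an edge is 1 / trust.
trust = {
    "me":    {"ana": 0.9, "bob": 0.4},
    "ana":   {"carol": 0.8, "dave": 0.3},
    "bob":   {"dave": 0.9},
    "carol": {},
    "dave":  {"eve": 0.7},
    "eve":   {},
}
answerers = {"carol", "dave", "eve"}   # nodes able to answer the query

def propagate(source: str):
    """Return answering nodes in order of query arrival time."""
    arrival = {source: 0.0}
    heap = [(0.0, source)]
    order = []
    while heap:
        t, node = heapq.heappop(heap)
        if t > arrival.get(node, float("inf")):
            continue                       # stale queue entry
        if node in answerers:
            order.append((t, node))
        for nxt, w in trust[node].items():
            nt = t + 1.0 / w               # higher trust -> shorter delay
            if nt < arrival.get(nxt, float("inf")):
                arrival[nxt] = nt
                heapq.heappush(heap, (nt, nxt))
    return order

for t, node in propagate("me"):
    print(f"answer from {node} arrives at t = {t:.2f}")
```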

Relevance:

30.00%

Publisher:

Abstract:

Objective: We used demographic and clinical data to design practical classification models for the prediction of neurocognitive impairment (NCI) in people with HIV infection. Methods: The study population comprised 331 HIV-infected patients with available demographic, clinical, and neurocognitive data collected using a comprehensive battery of neuropsychological tests. Classification and regression trees (CART) were developed to obtain detailed and reliable models to predict NCI. Following a practical clinical approach, NCI was considered the main variable for study outcomes, and analyses were performed separately in treatment-naïve and treatment-experienced patients. Results: The study sample comprised 52 treatment-naïve and 279 treatment-experienced patients. In the first group, the variables identified as the best predictors of NCI were CD4 cell count and age (correct classification [CC]: 79.6%, 3 final nodes). In treatment-experienced patients, the variables most closely related to NCI were years of education, nadir CD4 cell count, central nervous system penetration-effectiveness score, age, employment status, and confounding comorbidities (CC: 82.1%, 7 final nodes). In patients with an undetectable viral load and no comorbidities, we obtained a fairly accurate model in which the main variables were nadir CD4 cell count, current CD4 cell count, time on current treatment, and past highest viral load (CC: 88%, 6 final nodes). Conclusion: Practical classification models to predict NCI in HIV infection can be obtained using demographic and clinical variables. An approach based on CART analyses may facilitate screening for HIV-associated neurocognitive disorders and complement clinical information about risk and protective factors for NCI in HIV-infected patients.
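The modelling workflow can be sketched with a standard CART implementation, as below: a shallow classification tree is grown on hypothetical demographic and clinical variables (age, years of education, nadir CD4, CNS penetration-effectiveness score) to predict NCI. The data are synthetic, so the learned splits and the reported accuracy carry no clinical meaning.

```python
# Hedged sketch of a CART-style model for predicting NCI from demographic and
# clinical variables. Data are synthetic; only the workflow mirrors the study.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "age":           rng.integers(25, 70, n),
    "education_yrs": rng.integers(4, 20, n),
    "nadir_cd4":     rng.integers(10, 600, n),
    "cpe_score":     rng.integers(2, 12, n),   # CNS penetration-effectiveness
})
# Synthetic outcome loosely tied to low nadir CD4, older age, less education.
risk = (df["nadir_cd4"] < 200).astype(int) + (df["age"] > 55).astype(int) \
     + (df["education_yrs"] < 8).astype(int)
nci = (risk + rng.integers(0, 2, n) >= 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(df, nci, test_size=0.3, random_state=0)
cart = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=0)
cart.fit(X_tr, y_tr)

print(f"correct classification on held-out data: {cart.score(X_te, y_te):.1%}")
print(export_text(cart, feature_names=list(df.columns)))
```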

Relevance:

30.00%

Publisher:

Abstract:

Due to the rise of criminal, civil and administrative judicial situations involving people lacking valid identity documents, age estimation of living persons has become an important operational procedure for numerous forensic and medicolegal services worldwide. The chronological age of a given person is generally estimated from the observed degree of maturity of selected physical attributes by means of statistical methods. However, their application in the forensic framework suffers from some conceptual and practical drawbacks, as recently claimed in the specialised literature. The aim of this paper is therefore to offer an alternative solution for overcoming these limits, by reiterating the utility of a probabilistic Bayesian approach for age estimation. This approach allows one to deal in a transparent way with the uncertainty surrounding the age estimation process and to produce all the relevant information in the form of a posterior probability distribution for the chronological age of the person under investigation. Furthermore, this probability distribution can also be used to evaluate, in a coherent way, the possibility that the examined individual is younger or older than a given age threshold of particular legal interest. The main novelty introduced by this work is the development of a probabilistic graphical model, i.e. a Bayesian network, for dealing with the problem at hand. The use of this kind of probabilistic tool can significantly facilitate the application of the proposed methodology: examples are presented based on data related to the ossification status of the medial clavicular epiphysis. The reliability and the advantages of this probabilistic tool are presented and discussed.
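The core Bayesian step can be illustrated numerically, as in the hedged sketch below: a discrete prior over candidate ages is combined with an assumed probability of observing a given ossification stage of the medial clavicular epiphysis at each age, giving a posterior over age and the probability of exceeding an 18-year threshold. All numbers are invented and are not taken from the paper's data.

```python
# Hedged numerical illustration of the Bayesian reasoning described above.
import numpy as np

ages = np.arange(15, 26)                      # candidate chronological ages
prior = np.full(ages.size, 1.0 / ages.size)   # uniform prior over 15-25

# P(observed stage | age): made-up values, increasing with age for a "late" stage.
likelihood = np.clip((ages - 16) / 8.0, 0.02, 0.98)

posterior = prior * likelihood
posterior /= posterior.sum()                  # Bayes' theorem (normalisation)

p_adult = posterior[ages >= 18].sum()
print("posterior mean age:", round(float(np.sum(ages * posterior)), 2))
print("P(age >= 18 | observed stage):", round(float(p_adult), 3))
```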

Relevance:

30.00%

Publisher:

Abstract:

The present study was performed in an attempt to develop an in vitro integrated testing strategy (ITS) to evaluate drug-induced neurotoxicity. A number of endpoints were analyzed using two complementary brain cell culture models and an in vitro blood-brain barrier (BBB) model after single and repeated exposure to selected drugs that covered the major biological, pharmacological and neurotoxicological responses. Furthermore, four drugs (diazepam, cyclosporine A, chlorpromazine and amiodarone) were tested in more depth as representatives of different classes of neurotoxicants, inducing toxicity through different pathways of toxicity. The developed in vitro BBB model allowed detection of toxic effects at the level of the BBB and evaluation of drug transport through the barrier for predicting free brain concentrations of the studied drugs. The measurement of neuronal electrical activity was found to be a sensitive tool for predicting the neuroactivity and neurotoxicity of drugs after acute exposure. The histotypic 3D re-aggregating brain cell cultures, containing all brain cell types, were found to be well suited for omics analyses after both acute and long-term treatment. The obtained data suggest that an in vitro ITS based on information obtained from BBB studies, combined with metabolomics, proteomics and neuronal electrical activity measurements performed in stable in vitro neuronal cell culture systems, has high potential to improve the current in vitro evaluation of drug-induced neurotoxicity.

Relevance:

30.00%

Publisher:

Abstract:

Many educators and educational institutions have yet to integrate web-based practices into their classrooms and curricula. As a result, it can be difficult to prototype and evaluate approaches to transforming classrooms from static endpoints to dynamic, content-creating nodes in the online information ecosystem. But many scholastic journalism programs have already embraced the capabilities of the Internet for virtual collaboration, dissemination, and reader participation. Because of this, scholastic journalism can act as a test-bed for integrating web-based sharing and collaboration practices into classrooms. Student Journalism 2.0 was a research project to integrate open copyright licenses into two scholastic journalism programs, to document outcomes, and to identify recommendations and remaining challenges for similar integrations. Video and audio recordings of two participating high school journalism programs informed the research. In describing the steps of our integration process, we note some important legal, technical, and social challenges. Legal worries such as uncertainty over copyright ownership could lead districts and administrators to disallow open licensing of student work. Publication platforms among journalism classrooms are far from standardized, making any integration of new technologies and practices difficult to achieve at scale. And teachers and students face challenges re-conceptualizing the role their class work can play online.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this dissertation is to analyse older consumers' adoption of information and communication technology (ICT) innovations, assess the effect of aging-related characteristics, and evaluate older consumers' willingness to apply these technologies in health care services. This topic is considered important because the population in Finland (as in other welfare states) is aging and thus offers an opportunity for marketers, but on the other hand threatens society with increasing healthcare costs. Innovation adoption has been researched from several perspectives in both organizational and consumer research. In consumer behaviour, several theories have been developed to predict consumer responses to innovation. The present dissertation carefully reviews previous research and takes a closer look at the theory of planned behaviour, the technology acceptance model and the diffusion of innovations perspective. It is suggested here that these theories can be combined and complemented to predict the adoption of ICT innovations among aging consumers, taking aging-related personal characteristics into account. In fact, very few studies in innovation research have concentrated on aging consumers, and thus there was a clear need for the present research. ICT in the health care context has been studied mainly from the organizational point of view. If the technology is to be applied to communication between the individual end-user and the service provider, however, the end-user cannot be shrugged off. The present dissertation uses empirical evidence from a survey targeted at 55-79-year-old people from one city in South Karelia. The empirical analysis of the research model was mainly based on structural equation modelling, which has been found very useful for estimating causal relationships. The tested models were targeted at predicting the adoption stage of personal computers and mobile phones, and the adoption intention of future health services that apply these devices for communication. The dissertation succeeded in modelling the adoption behaviour of mobile phones and PCs as well as adoption intentions towards future services. Perceived health status and three components behind it (depression, functional ability, and cognitive ability) were found to influence technology anxiety: better health leads to less anxiety. The effect of age was assessed as a control variable in order to evaluate its effect compared with the health characteristics. Age influenced technology perceptions, but to a lesser extent than health. The analyses suggest that the major determinant of current technology adoption is perceived behavioural control, together with technology anxiety, which indirectly inhibits adoption through perceived control. When focusing on future service intentions, the key issue is perceived usefulness, which needs to be highlighted when new services are launched. Besides usefulness, the perception of online service reliability is important and affects intentions indirectly. To conclude, older consumers' adoption behaviour is influenced by health status and age, but also by perceptions of anxiety and behavioural control. On the other hand, launching new types of health services for aging consumers is possible once the service is perceived as reliable and useful.
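The kind of relationships tested in the dissertation can be sketched as follows; since reproducing the full structural equation model is beyond a short example, two simple regressions on synthetic survey data stand in for the path from health to technology anxiety and the path from anxiety and perceived behavioural control to adoption. The construct names mirror the abstract, but the data and coefficients are invented.

```python
# Hedged sketch of the adoption model described above, approximated with two
# ordinary regressions instead of full structural equation modelling.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
health = rng.normal(0, 1, n)                                 # perceived health status
anxiety = -0.5 * health + rng.normal(0, 1, n)                # better health -> less anxiety
control = 0.4 * health - 0.3 * anxiety + rng.normal(0, 1, n) # perceived behavioural control
adopt_p = 1 / (1 + np.exp(-(0.2 + 1.2 * control - 0.8 * anxiety)))
adopted = rng.binomial(1, adopt_p)                           # current ICT adoption (0/1)

# Path 1: health -> technology anxiety (linear regression).
m1 = sm.OLS(anxiety, sm.add_constant(health)).fit()

# Path 2: anxiety and perceived control -> adoption (logistic regression).
X = sm.add_constant(np.column_stack([anxiety, control]))
m2 = sm.Logit(adopted, X).fit(disp=False)

print("health -> anxiety coefficient:", round(m1.params[1], 2))
print("anxiety, control -> adoption coefficients:", np.round(m2.params[1:], 2))
```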