953 results for open data capabilities
Abstract:
Objectives: To assess the difference in direct medical costs between on-demand (OD) treatment with esomeprazole (E) 20 mg and continuous (C) treatment with E 20 mg q.d. from a clinical-practice perspective in patients with gastroesophageal reflux disease (GERD) symptoms. Methods: This open, randomized study (ONE: on-demand Nexium evaluation) compared two long-term management options with E 20 mg in endoscopically uninvestigated patients seeking primary care for GERD symptoms who demonstrated complete relief of symptoms after an initial 4-week treatment with E 40 mg. Data on the consumed quantities of all cost items were collected in the study, while data on prices during the study period were collected separately. The analysis was done from a societal perspective. Results: Forty-nine percent (484 of 991) of patients randomized to the OD regimen and 46% (420 of 913) of patients in the C group had at least one contact with the investigator that would have occurred independently of the protocol. The difference in adjusted mean direct medical costs between the treatment groups was CHF 88.72 (95% confidence interval: CHF 41.34-153.95) in favor of the OD treatment strategy (Wilcoxon rank-sum test: P < 0.0001). Adjusted direct nonmedical costs and productivity loss were similar in both groups. Conclusions: The adjusted direct medical costs of 6-month OD treatment with esomeprazole 20 mg in uninvestigated patients with GERD symptoms were significantly lower than those of continuous treatment with E 20 mg once daily. OD therapy represents a cost-saving alternative to the continuous treatment strategy with E.
Abstract:
Learning object repositories are a basic piece of virtual learning environments used for content management. Nevertheless, learning objects have special characteristics that make traditional solutions for content management ineffective. In particular, browsing and searching for learning objects cannot be based on the typical authoritative meta-data used for describing content, such as author, title or publication date, among others. We propose to build a social layer on top of a learning object repository, providing final users with additional services for describing, rating and curating learning objects from a teaching perspective. All these interactions among users, services and resources can be captured and further analyzed, so both browsing and searching can be personalized according to user profile and the educational context, helping users to find the most valuable resources for their learning process. In this paper we propose to use reputation schemes and collaborative filtering techniques for improving the user interface of a DSpace-based learning object repository.
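The collaborative filtering mentioned in this abstract can be illustrated with a minimal user-based sketch. The ratings, user names and learning-object identifiers below are hypothetical; a real repository would derive them from logged interactions.

```python
# Minimal sketch of collaborative filtering for ranking learning objects.
# All data here is hypothetical, for illustration only.
from math import sqrt

# rows: users, columns: learning objects (missing = not rated)
ratings = {
    "ana":   {"lo1": 5, "lo2": 3, "lo3": 4},
    "ben":   {"lo1": 4, "lo2": 2, "lo3": 5},
    "carla": {"lo1": 1, "lo3": 2},
}

def cosine(a, b):
    """Cosine similarity between two sparse rating vectors (dicts)."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[k] * b[k] for k in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def predict(user, item):
    """Predict a rating as a similarity-weighted average over other users."""
    target = ratings[user]
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(target, r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None

print(round(predict("carla", "lo2"), 2))  # prints 2.47
```

In a repository setting, the predicted score would be blended with the reputation-weighted ratings the abstract describes before ranking search results.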
Abstract:
The purpose of this study is to propose a new quantitative approach for assessing the quality of open-access university institutional repositories. The results of this new approach are tested on the Spanish university repositories. The assessment method is based on a binary codification of a proposed set of features that objectively describe the repositories. The aims of this method are to assess quality and to provide an almost automatic system for updating the data on these characteristics. First, a database was created with the 38 Spanish institutional repositories. The variables of analysis are presented and explained, whether they come from the bibliography or are a set of new variables. Among the characteristics analyzed are the features of the software, the services of the repository, the features of the information system, Internet visibility and the licenses of use. Results from Spanish universities are provided as a practical example of the assessment and to give a picture of the state of development of the open access movement in Spain.
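The binary codification described above can be sketched as follows: each feature is coded 1 (present) or 0 (absent), and a repository's score is the share of features present. The feature names and values below are hypothetical, not taken from the Spanish study.

```python
# Minimal sketch of binary feature coding for repository quality assessment.
# Feature names and repository data are hypothetical.
FEATURES = ["oai_pmh", "persistent_ids", "usage_stats", "cc_licenses", "search_api"]

repositories = {
    "repo_a": {"oai_pmh": 1, "persistent_ids": 1, "usage_stats": 0, "cc_licenses": 1, "search_api": 0},
    "repo_b": {"oai_pmh": 1, "persistent_ids": 0, "usage_stats": 0, "cc_licenses": 0, "search_api": 1},
}

def score(repo):
    """Fraction of assessed features the repository implements."""
    return sum(repo.get(f, 0) for f in FEATURES) / len(FEATURES)

# Rank repositories by score, best first.
for name, feats in sorted(repositories.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(feats):.2f}")
```

Because each feature is a simple yes/no observation, the coding can be refreshed semi-automatically, which is the updating property the abstract emphasizes.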
Abstract:
There are a number of morphological analysers for Polish. Most of these, however, are non-free resources. What is more, different analysers employ different tagsets and tokenisation strategies. This situation calls for a simple and universal framework to join different sources of morphological information, including the existing resources as well as user-provided dictionaries. We present such a configurable framework, which makes it possible to write simple configuration files that define tokenisation strategies and the behaviour of morphological analysers, including simple tagset conversion.
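The configuration idea above can be sketched as a small config that defines a tokenisation strategy and a tagset-conversion table. The config format, tag names and mapping here are hypothetical illustrations, not the framework's actual syntax.

```python
# Hypothetical sketch of a config-driven tokeniser and tagset converter.
import re

config = {
    "tokenizer": r"\w+|[^\w\s]",  # words, or single punctuation marks
    "tag_map": {"subst": "NOUN", "adj": "ADJ", "fin": "VERB"},
}

def tokenize(text, cfg):
    """Split text according to the configured tokenisation regex."""
    return re.findall(cfg["tokenizer"], text, flags=re.UNICODE)

def convert_tag(tag, cfg):
    """Map an analyser-specific tag to the target tagset; keep unknown tags."""
    return cfg["tag_map"].get(tag, tag)

print(tokenize("Ala ma kota.", config))    # ['Ala', 'ma', 'kota', '.']
print(convert_tag("subst", config))        # NOUN
```

Keeping both the tokenisation rule and the tag mapping in data rather than code is what lets one framework wrap analysers with incompatible conventions.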
Abstract:
This paper presents an Italian-to-Catalan RBMT system automatically built by combining the linguistic data of the existing pairs Spanish-Catalan and Spanish-Italian. A lightweight manual postprocessing is carried out in order to fix inconsistencies in the automatically derived dictionaries and to add very frequent words that are missing according to a corpus analysis. The system is evaluated on the KDE4 corpus and outperforms Google Translate by approximately ten absolute points in terms of both TER and GTM.
Abstract:
The objective of this work was to evaluate the influence of intergenotypic competition in open-pollinated families of Eucalyptus and its effects on early selection efficiency. Two experiments were carried out, in which the timber volume was evaluated at three ages, in a randomized complete block design. Data from the three years of evaluation (experiment 1, at 2, 4, and 7 years; and experiment 2, at 2, 5, and 7 years) were analyzed using mixed models. The following were estimated: variance components, genetic parameters, selection gains, effective number, early selection efficiency, selection gain per unit time, and coincidence of selection with and without the use of competition covariates. Competition effect was nonsignificant for ages under three years, and adjustment using competition covariates was unnecessary. Early selection for families is effective; families that have a late growth spurt are more vulnerable to competition, which markedly impairs ranking at the end of the cycle. Early selection is efficient according to all adopted criteria, and the age of around three years is the most recommended, given the high efficiency and accuracy rate in the indication of trees and families. The addition of competition covariates at the end of the cycle improves early selection efficiency for almost all studied criteria.
Abstract:
Transaction processing is generally regarded as one of the basic requirements of reliable data processing. A transaction is a sequence of operations whose execution can be treated as a single logical action. Four basic rules, abbreviated ACID, govern the execution of transactions. In distributed systems the need for transaction processing grows further, because the success of operations cannot be ensured by local mechanisms alone. Distributed transaction processing has been the subject of several standardisation attempts, but despite these efforts no generally accepted open standards exist. The closest to such a standard is probably the X/Open DTP family of standards, and in particular the XA standard it includes. This work additionally examines how the vendor-independent database architecture of the Intellitel ONE system should be developed, with the goal of enabling applications that require transaction processing to run on top of it.
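XA-style distributed transaction processing rests on two-phase commit: the coordinator commits only if every participating resource manager votes yes in the prepare phase. A minimal sketch, with simple in-memory objects standing in for real resource managers:

```python
# Minimal two-phase commit sketch. Participants are toy in-memory objects,
# not real XA resource managers.
class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit = name, can_commit
        self.state = "active"

    def prepare(self):
        """Phase 1 vote: promise to commit, or refuse."""
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "aborted"

def two_phase_commit(participants):
    # Phase 1: collect votes from every participant.
    if all(p.prepare() for p in participants):
        # Phase 2a: unanimous yes -> commit everywhere.
        for p in participants:
            p.commit()
        return "committed"
    # Phase 2b: any no-vote -> roll back everywhere.
    for p in participants:
        p.rollback()
    return "aborted"

print(two_phase_commit([Participant("db1"), Participant("db2")]))         # committed
print(two_phase_commit([Participant("db1"), Participant("db2", False)]))  # aborted
```

The real XA interface adds logging and recovery so a prepared participant can survive a coordinator crash; this sketch shows only the voting logic.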
Abstract:
The diffusion of mobile telephony began in 1971 in Finland, when the first car phones, called ARP, were taken into use. Technologies changed from ARP to NMT and later to GSM. The main application of the technology, however, was voice transfer. The birth of the Internet created an open public data network and easy access to other types of computer-based services over networks. Telephones had been used as modems, but the development of cellular technologies enabled automatic access from mobile phones to the Internet. Other wireless technologies, for instance wireless LANs, were also introduced. Telephony had developed from analog to digital in fixed networks, which allowed easy integration of fixed and mobile networks. This development opened completely new functionality to computers and mobile phones. It also initiated the merger of the information technology (IT) and telecommunication (TC) industries. Despite the new competitive opportunities this created for firms, applications based on the new functionality were rare. Furthermore, technology development combined with innovation can be disruptive to industries. This research focuses on the new technology's impact on competition in the ICT industry through understanding the strategic needs and alternative futures of the industry's customers. The speed of change in the ICT industry is high, and it was therefore valuable to integrate the dynamic capability view of the firm into this research. Dynamic capabilities are an application of the resource-based view (RBV) of the firm. As stated in the literature, strategic positioning complements the RBV. This theoretical framework leads the research to focus on three areas: customer strategic innovation and business model development, external future analysis, and process development combining these two. The theoretical contribution of the research lies in the development of a methodology integrating theories of the RBV, dynamic capabilities and strategic positioning.
The research approach has been constructive, due to the actual managerial problems that initiated the study. The requirement for iterative and innovative progress in the research supported the chosen approach. The study applies known methods in product development, for instance the innovation process in the Group Decision Support Systems (GDSS) laboratory and Quality Function Deployment (QFD), and combines them with known strategy analysis tools such as industry analysis and the scenario method. As the main result, the thesis presents the strategic innovation process, in which new business concepts are used to describe alternative resource configurations, and scenarios to describe alternative competitive environments; this can be a new way for firms to achieve competitive advantage in high-velocity markets. In addition to the strategic innovation process, the study has also produced approximately 250 new innovations for the participating firms, reduced technology uncertainty, supported strategic infrastructural decisions in the firms, and produced a knowledge bank including data from 43 ICT and 19 paper industry firms between the years 1999-2004. The methods presented in this research are also applicable to other industries.
A priori parameterisation of the CERES soil-crop models and tests against several European data sets
Abstract:
Mechanistic soil-crop models have become indispensable tools to investigate the effect of management practices on the productivity or environmental impacts of arable crops. Ideally these models may claim to be universally applicable because they simulate the major processes governing the fate of inputs such as fertiliser nitrogen or pesticides. However, because they deal with complex systems and uncertain phenomena, site-specific calibration is usually a prerequisite to ensure their predictions are realistic. This statement implies that some experimental knowledge on the system to be simulated should be available prior to any modelling attempt, and raises a tremendous limitation to practical applications of models. Because the demand for more general simulation results is high, modellers have nevertheless taken the bold step of extrapolating a model tested within a limited sample of real conditions to a much larger domain. While methodological questions are often disregarded in this extrapolation process, they are specifically addressed in this paper, in particular the issue of models' a priori parameterisation. We thus implemented and tested a standard procedure to parameterise the soil components of a modified version of the CERES models. The procedure converts routinely available soil properties into functional characteristics by means of pedo-transfer functions. The resulting predictions of soil water and nitrogen dynamics, as well as crop biomass, nitrogen content and leaf area index were compared to observations from trials conducted in five locations across Europe (southern Italy, northern Spain, northern France and northern Germany). In three cases, the model's performance was judged acceptable when compared to experimental errors on the measurements, based on a test of the model's root mean squared error (RMSE). Significant deviations between observations and model outputs were however noted in all sites, and could be ascribed to various model routines.
In decreasing importance, these were: water balance, the turnover of soil organic matter, and crop N uptake. A better match to field observations could therefore be achieved by visually adjusting related parameters, such as field-capacity water content or the size of soil microbial biomass. As a result, model predictions fell within the measurement errors in all sites for most variables, and the model’s RMSE was within the range of published values for similar tests. We conclude that the proposed a priori method yields acceptable simulations with only a 50% probability, a figure which may be greatly increased through a posteriori calibration. Modellers should thus exercise caution when extrapolating their models to a large sample of pedo-climatic conditions for which they have only limited information.
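The RMSE-based acceptability test described above can be sketched in a few lines: a simulation is judged acceptable when its RMSE against the observations does not exceed the experimental error on the measurements. The numbers below are illustrative, not values from the CERES trials.

```python
# Minimal sketch of the RMSE acceptability test. Data is hypothetical.
from math import sqrt

def rmse(observed, simulated):
    """Root mean squared error between paired observations and simulations."""
    return sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / len(observed))

observed  = [2.1, 3.4, 5.0, 6.2]   # e.g. crop biomass, t/ha (hypothetical)
simulated = [2.3, 3.1, 5.4, 6.0]
measurement_error = 0.5            # hypothetical experimental error, t/ha

err = rmse(observed, simulated)
print(f"RMSE = {err:.3f}, acceptable = {err <= measurement_error}")
```

Comparing RMSE to measurement error, rather than to zero, is what makes the criterion fair: no model should be expected to fit data more closely than the data's own uncertainty.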
Abstract:
Background: During the late 1990s the chance of surviving breast cancer increased. Changes in survival functions reflect a mixture of effects: both the introduction of adjuvant treatments and early screening with mammography played a role in the decline in mortality. Evaluating the contribution of these interventions using mathematical models requires survival functions before and after their introduction. Furthermore, the required survival functions may differ by age group and are related to disease stage at diagnosis. Sometimes detailed information is not available, as was the case for the region of Catalonia (Spain); one may then derive the functions using information from other geographical areas. This work presents the methodology used to estimate age- and stage-specific Catalan breast cancer survival functions from scarce Catalan survival data by adapting the age- and stage-specific US functions. Methods: Cubic splines were used to smooth the data and obtain continuous hazard rate functions. Afterwards, we fitted a Poisson model, with time as a covariate, to derive hazard ratios. The hazard ratios were then applied to US survival functions detailed by age and stage to obtain the Catalan estimates. Results: We first estimated the hazard ratios for Catalonia versus the USA before and after the introduction of screening. The hazard ratios were then multiplied by the age- and stage-specific breast cancer hazard rates from the USA to obtain the Catalan hazard rates. We also compared breast cancer survival in Catalonia and the USA in two time periods, before cancer control interventions (USA 1975–79, Catalonia 1980–89) and after (USA and Catalonia 1990–2001). Survival in Catalonia in the 1980–89 period was worse than in the USA during 1975–79, but the differences disappeared in 1990–2001. Conclusion: Our results suggest that access to better treatments and quality of care contributed to large improvements in survival in Catalonia.
In addition, we obtained detailed breast cancer survival functions that will be used for modeling the effect of screening and adjuvant treatments in Catalonia.
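The hazard-ratio transfer this abstract describes can be sketched numerically: Catalan hazard rates are obtained by multiplying US age- and stage-specific rates by an estimated Catalonia-vs-USA hazard ratio. All rates and the ratio below are hypothetical numbers for illustration, not the study's estimates.

```python
# Minimal sketch of transferring hazard rates via a hazard ratio.
# All numbers are hypothetical.
from math import exp

us_hazard = {
    ("50-59", "localized"): 0.010,   # annual hazard (hypothetical)
    ("50-59", "regional"):  0.035,
}
hazard_ratio = 1.20  # Catalonia vs USA, e.g. estimated from a Poisson model

# Apply the ratio to every age/stage cell.
catalan_hazard = {k: h * hazard_ratio for k, h in us_hazard.items()}

def survival(h, t):
    """Survival at t years under a constant annual hazard h: S(t) = exp(-h*t)."""
    return exp(-h * t)

for key, h in catalan_hazard.items():
    print(key, round(survival(h, 5), 3))
```

The constant-hazard survival formula is a simplification for the sketch; the study works with continuous hazard functions smoothed by cubic splines.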
Abstract:
A newspaper content management system has to deal with a very heterogeneous information space, as experience at the Diari Segre newspaper has shown us. The greatest problem is to harmonise the different ways the users involved (journalists, archivists, etc.) structure the newspaper information space, i.e. news, topics, headlines, etc. Our approach is based on an ontology and differentiated universes of discourse (UoD). Users interact with the system and, from this interaction, integration rules are derived. These rules are based on Description Logic ontological relations for subsumption and equivalence. They relate the different UoD and produce a shared conceptualisation of the newspaper information domain.
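The subsumption relation the abstract relies on can be illustrated with a toy concept hierarchy: one concept subsumes another if the second can reach the first by following parent links. The concept names below are hypothetical, not from the Diari Segre ontology.

```python
# Toy subsumption check over a concept hierarchy. Concept names are hypothetical.
parents = {
    "headline":   "news_item",
    "news_item":  "document",
    "topic_page": "document",
}

def subsumes(general, specific):
    """True if `general` subsumes `specific` via the parent chain."""
    while specific is not None:
        if specific == general:
            return True
        specific = parents.get(specific)
    return False

print(subsumes("document", "headline"))    # True
print(subsumes("news_item", "topic_page")) # False
```

A Description Logic reasoner computes this relation over full concept definitions rather than a fixed parent table; the sketch shows only the transitive-closure intuition.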
Abstract:
The main objective of this research paper was to synthesize, integrate and analyze the theoretical foundation of the resource-based view of the firm on sustainable competitive advantage. Accordingly, this research was a literature study employing the methodology of interpretative study of concepts and unobtrusive measures. The core and majority of the research data was gathered from the major online journal databases. Only peer-reviewed articles from highly esteemed journals on the subject of competitive advantage were used. The theoretical core of the research paper was centred on resources, capabilities, and the sustainability dilemma of competitive advantage. Furthermore, other strategic management concepts relating to the resource-based view of the firm were used with reference to the research objectives. The resource-based view of the firm continues to be a controversial but important area of strategic management research on sustainable competitive advantage. Consequently, the theoretical foundation and the empirical testing of the framework need further work. However, it is evident that internal organizational factors in the form of resources and capabilities are vital for the formation of sustainable competitive advantage. Resources and capabilities are not, however, valuable on their own - competitive advantage requires seamless interplay and complementarity between bundles of resources and capabilities.
Abstract:
A feasibility study on deploying open-source software-defined storage in enterprise environments, comparing Gluster, Ceph, OpenAFS, TahoeFS and XtreemFS.
Abstract:
We present experiments in which the laterally confined flow of a surfactant film driven by controlled surface tension gradients causes the subtended liquid layer to self-organize into an inner upstream microduct surrounded by the downstream flow. The anomalous interfacial flow profiles and the concomitant backflow are a result of the feedback between two-dimensional and three-dimensional microfluidics realized during flow in open microchannels. Bulk and surface particle image velocimetry data combined with an interfacial hydrodynamics model explain the dependence of the observed phenomena on channel geometry.
Abstract:
The article reviews the main topics in the preservation and reuse of research data (benefits, life cycle, projects, standards) and identifies the lack of a worldwide registry of data banks, data repositories and data libraries. It describes the creation of a web tool that gathers this type of deposit and classifies them by disciplinary area: ODiSEA International Registry on Research Data. We offer results on the number and thematic typology of such deposits worldwide. This contribution facilitates the discovery of new data sets whose recombination from a multidisciplinary perspective will foster innovation and the return on investment in science.