994 results for Software eutils-search
Abstract:
In order to improve the reliability of xenodiagnosis, attention has been directed towards the population dynamics of the parasite, with particular interest in the following factors: 1. Parasite density, which by itself is not a research objective but, by giving an accurate portrayal of parasite development and multiplication, has been incorporated into the screening of bugs for xenodiagnosis. 2. On the assumption that food availability might increase parasite density, bugs from xenodiagnosis have been refed at biweekly intervals on chicken blood. 3. Infectivity rates and positives harbouring large parasite yields were based on gut infections, in which the parasite population, comprising all developmental forms, was more abundant and easier to detect than in fecal infections, thus minimizing the probability of recording false negatives. 4. Since parasite density, low in the first 15 days of infection, increases rapidly over the following 30 days, an interval of 45 days has been adopted for the routine examination of bugs from xenodiagnosis. By following the enumerated measures, all aimed at reducing false negatives, we are getting closer to a reliable xenodiagnostic procedure. Upgrading the efficacy of xenodiagnosis also depends on the xenodiagnostic agent. Of the 9 vector species investigated, Panstrongylus megistus deserves top priority as a xenodiagnostic agent. Its extraordinary capacity to support fast development and vigorous multiplication of the few parasites ingested from a host with chronic Chagas' disease is revealed by the strikingly close infectivity rates of 91.2% vs. 96.4% among bugs engorged on the same host in the chronic and acute phases of the disease, respectively (Table V), the latter phase involving an estimated 12.3 × 10³ parasites in the circulation at the time of xenodiagnosis, as reported previously by the authors (1982).
Abstract:
Recent findings on the immunodiagnosis of schistosomiasis mansoni have shown that purified Schistosoma mansoni antigens do not provide maximum positivity. The authors therefore suggest the use of semi-purified antigens for diagnostic purposes. So far, no serological marker has been found for cured patients (cure being indicated by negative stool examination). However, a tendency for IgG antibody titres to decrease was observed when egg antigen was used.
Abstract:
Since 1999 the Consorcio de Bibliotecas Universitarias de Cataluña (CBUC), together with the Centro de Supercomputación de Cataluña (CESCA), has pursued a new line of work to promote the research carried out in Catalonia and, at the same time, to contribute to the worldwide movement towards depositing academic and research output openly on the network. This worldwide movement, known as Open Access, was launched to create alternatives to the paradigm of paying for access to information that has very often been produced with public funding and resources. This new line of work is institutional repositories. In this paper we briefly present the current state of the cooperative repositories that have been implemented, their content (standards used, copyright, preservation, etc.) and their container (software and technology used, protocols, etc.).
Abstract:
Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation currently represents the gold-standard TDM approach but requires computational assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled by the tools varies widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentrations (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be evaluated with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including for non-experienced users. Computer-assisted TDM is gaining growing interest and should improve further, especially in terms of information-system interfacing, user friendliness, data storage capability and report generation.
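As an illustration of the a posteriori (MAP) Bayesian adjustment that ten of the surveyed tools perform, here is a minimal sketch in Python, assuming a one-compartment IV bolus model at steady state. The population values, error model, observations and target trough are all hypothetical, and no specific program's method is reproduced.

```python
# Minimal sketch of a posteriori (MAP) Bayesian dose individualization.
# One-compartment IV bolus model at steady state; all numbers hypothetical.
import numpy as np
from scipy.optimize import minimize

# Hypothetical population priors (log-normal): typical value, variance of log
POP = {"CL": (5.0, 0.3**2), "V": (50.0, 0.25**2)}  # L/h, L
SIGMA = 0.5               # residual (assay) SD, mg/L
DOSE, TAU = 500.0, 12.0   # mg every 12 h

def conc(theta, t):
    """Steady-state concentration for an IV bolus, one-compartment model."""
    cl, v = theta
    k = cl / v
    return (DOSE / v) * np.exp(-k * t) / (1.0 - np.exp(-k * TAU))

def neg_log_posterior(log_theta, t_obs, c_obs):
    theta = np.exp(log_theta)
    # Gaussian residual error on the observed concentrations
    nlp = np.sum((c_obs - conc(theta, t_obs))**2) / (2 * SIGMA**2)
    # Log-normal population priors on CL and V
    for lth, (tv, om2) in zip(log_theta, POP.values()):
        nlp += (lth - np.log(tv))**2 / (2 * om2)
    return nlp

# Two measured levels for one patient (hypothetical)
t_obs = np.array([1.0, 11.5])   # h after dose, at steady state
c_obs = np.array([9.0, 2.2])    # mg/L

start = np.log([POP["CL"][0], POP["V"][0]])
fit = minimize(neg_log_posterior, start, args=(t_obs, c_obs))
cl_i, v_i = np.exp(fit.x)

# Linear PK: scale the dose to reach a target trough of 3 mg/L
trough = conc((cl_i, v_i), TAU)
print(f"Individual CL={cl_i:.2f} L/h, V={v_i:.1f} L, "
      f"suggested dose={DOSE * 3.0 / trough:.0f} mg")
```

Dedicated TDM tools wrap this same loop in drug model libraries, covariate handling and report generation, which is where the survey's grading criteria come in.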
Abstract:
This paper reports on: (a) new primary-source evidence on; and (b) statistical and econometric analysis of, high-technology clusters in Scotland. It focuses on the following sectors: software, life sciences, microelectronics, optoelectronics, and digital media. Evidence from a postal and e-mailed questionnaire is presented and discussed under the headings of performance, resources, collaboration & cooperation, embeddedness, and innovation. The sampled firms are characterised as small (viz. micro-firms and SMEs), knowledge intensive (largely graduate staff), research intensive (mean R&D spend of GBP 842k), and internationalised (mainly selling to markets beyond Europe). Preliminary statistical evidence is presented on Gibrat’s Law (independence of growth and size) and the Schumpeterian Hypothesis (scale economies in R&D). Estimates suggest a short-run equilibrium size of just 100 employees but a long-run equilibrium size of 1,000 employees. Further, to achieve the Schumpeterian effect (of marked scale economies in R&D), estimates suggest that firms have to grow to far larger sizes, beyond 3,000 employees. We argue that the principal way of achieving the latter scale may need to be takeovers and mergers, rather than internally driven growth.
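For readers unfamiliar with the Gibrat's Law test mentioned above, a minimal sketch follows: regress log-growth on initial log-size and test whether the slope is zero (growth independent of size). The data below are simulated for illustration; nothing here uses the paper's sample.

```python
# Sketch of a standard Gibrat's Law test on simulated firm-size data.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
n = 300
log_size_0 = rng.normal(3.0, 1.0, n)     # log employment at t0
# Simulated mild mean reversion: a nonzero slope rejects Gibrat's Law
log_size_1 = 0.5 + 0.9 * log_size_0 + rng.normal(0.0, 0.4, n)
growth = log_size_1 - log_size_0

res = linregress(log_size_0, growth)
print(f"slope={res.slope:.3f}, p={res.pvalue:.2g}")  # ~ -0.10, significant
```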
Abstract:
A project to adapt the GNU Chess program to the 'Condor' grid computing system. Alongside this, a study of search algorithms and their application in distributed environments is carried out. A series of tests on samples from a chess game against GNU Chess itself helps to highlight the advantages and drawbacks of each of the proposed algorithms.
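For context, here is a minimal sketch of alpha-beta pruning, the classic family of game-tree search algorithms such a study would compare. The `children`/`evaluate` interface and the toy tree are hypothetical; this is not code from the project or from GNU Chess.

```python
# Generic alpha-beta (minimax with pruning) sketch over an abstract game
# tree. `children(pos)` yields successor positions; `evaluate(pos)` scores
# leaf positions. Both are caller-supplied and hypothetical here.
import math

def alpha_beta(pos, depth, alpha, beta, maximizing, children, evaluate):
    moves = list(children(pos))
    if depth == 0 or not moves:
        return evaluate(pos)
    if maximizing:
        best = -math.inf
        for child in moves:
            best = max(best, alpha_beta(child, depth - 1, alpha, beta,
                                        False, children, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:   # opponent would avoid this branch: prune
                break
        return best
    best = math.inf
    for child in moves:
        best = min(best, alpha_beta(child, depth - 1, alpha, beta,
                                    True, children, evaluate))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

# Toy demo: a fixed two-ply tree encoded as nested lists (leaves are scores)
tree = [[3, 5], [2, 9]]
kids = lambda p: p if isinstance(p, list) else []
score = lambda p: p if not isinstance(p, list) else 0
print(alpha_beta(tree, 2, -math.inf, math.inf, True, kids, score))  # -> 3
```

In a Condor-style setup the root's subtrees can be scored as independent jobs, but pruning across workers then requires shared alpha/beta bounds, which is exactly the tension a distributed-search study must weigh.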
Abstract:
The present work has two parts. The first is a compilation on topics related to e-mail and its use: its history; its constituent elements; the services and programs it offers; the use of this tool; its importance within e-marketing; its effectiveness as a marketing tool; the attributes assigned to it; its main applications; the legislation that regulates it; and other information that can be very useful when preparing an e-mail mailing. The second part of this work contains a quantitative investigation of some elements or variables that can influence the final effectiveness of mass e-mailings carried out by a company for commercial purposes.
Abstract:
This paper develops methods for Stochastic Search Variable Selection (currently popular with regression and Vector Autoregressive models) for Vector Error Correction models where there are many possible restrictions on the cointegration space. We show how this allows the researcher to begin with a single unrestricted model and either do model selection or model averaging in an automatic and computationally efficient manner. We apply our methods to a large UK macroeconomic model.
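A minimal sketch of the SSVS idea in its simplest setting, a linear regression with a spike-and-slab prior; the paper applies it to restrictions on the cointegration space of a VECM, whereas the data, prior scales and fixed error variance below are purely illustrative.

```python
# Minimal SSVS (stochastic search variable selection) Gibbs sampler for
# y = X b + e, with a spike-and-slab prior on each coefficient. Data,
# prior scales (tau0, tau1) and known sigma2 are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
b_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0])
y = X @ b_true + rng.normal(size=n)

tau0, tau1 = 0.01, 10.0   # spike (near-zero) and slab prior SDs
sigma2 = 1.0              # error variance treated as known for brevity
gamma = np.ones(p, dtype=int)
incl = np.zeros(p)
draws = 2000

for it in range(draws):
    # Draw beta | gamma: Gaussian, prior SD tau1 where gamma=1, tau0 where 0
    D_inv = np.diag(1.0 / np.where(gamma == 1, tau1**2, tau0**2))
    A = X.T @ X / sigma2 + D_inv
    cov = np.linalg.inv(A)
    mean = cov @ X.T @ y / sigma2
    beta = rng.multivariate_normal(mean, cov)
    # Draw each gamma_j | beta_j: compare slab vs spike densities (prior 0.5)
    for j in range(p):
        slab = np.exp(-beta[j]**2 / (2 * tau1**2)) / tau1
        spike = np.exp(-beta[j]**2 / (2 * tau0**2)) / tau0
        gamma[j] = rng.random() < slab / (slab + spike)
    if it >= draws // 2:   # keep the second half as posterior draws
        incl += gamma

print("posterior inclusion probabilities:", incl / (draws - draws // 2))
```

Averaging results over the sampled configurations gives model averaging; picking the most-visited configuration gives model selection, the two automatic uses the abstract highlights.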
Abstract:
This paper develops stochastic search variable selection (SSVS) for zero-inflated count models which are commonly used in health economics. This allows for either model averaging or model selection in situations with many potential regressors. The proposed techniques are applied to a data set from Germany considering the demand for health care. A package for the free statistical software environment R is provided.
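To make the model class concrete, here is a minimal sketch of the zero-inflated Poisson likelihood underlying such count models; the parameter values and counts are illustrative only. In the paper's setting, covariates would enter lambda and pi through link functions, with SSVS placing spike-and-slab priors on their coefficients.

```python
# Zero-inflated Poisson: with probability pi an observation is a structural
# zero; otherwise it is Poisson(lam). Values below are illustrative.
import numpy as np
from scipy.stats import poisson

def zip_loglik(y, pi, lam):
    """Log-likelihood of a zero-inflated Poisson sample."""
    y = np.asarray(y)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))    # mass at y == 0
    ll_pos = np.log(1 - pi) + poisson.logpmf(y, lam)  # ordinary counts
    return np.sum(np.where(y == 0, ll_zero, ll_pos))

# Hypothetical doctor-visit counts with excess zeros
y = np.array([0, 0, 0, 1, 0, 3, 0, 2, 0, 5])
print(zip_loglik(y, pi=0.4, lam=2.0))
```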
Abstract:
This paper shows how one of the developers of QWERTY continued to use the trade secret that underlay its development to seek further efficiency improvements after its introduction. It provides further evidence that this was the principle used to design QWERTY in the first place, adding weight to arguments that QWERTY itself was a consequence of creative design and an integral part of a highly efficient system rather than an accident of history. This in turn raises questions over QWERTY's forced servitude as the 'paradigm case' of an inferior standard in the path-dependence literature. The paper also shows how complementarities between forms of intellectual property protection played integral roles in the development of QWERTY and the search for improvements on it, and also helped effectively conceal the source of the efficiency advantages that QWERTY helped deliver.
Abstract:
We consider a frictional two-sided matching market in which one side uses public cheap talk announcements so as to attract the other side. We show that if the first-price auction is adopted as the trading protocol, then cheap talk can be perfectly informative, and the resulting market outcome is efficient, constrained only by search frictions. We also show that the performance of an alternative trading protocol in the cheap-talk environment depends on the level of price dispersion generated by the protocol: If a trading protocol compresses (spreads) the distribution of prices relative to the first-price auction, then an efficient fully revealing equilibrium always (never) exists. Our results identify the settings in which cheap talk can serve as an efficient competitive instrument, in the sense that the central insights from the literature on competing auctions and competitive search continue to hold unaltered even without ex ante price commitment.
Abstract:
In a market in which sellers compete by posting mechanisms, we study how the properties of the meeting technology affect the mechanism that sellers select. In general, sellers have an incentive to use mechanisms that are socially efficient. In our environment, sellers achieve this by posting an auction with a reserve price equal to their own valuation, along with a transfer that is paid by (or to) all buyers with whom the seller meets. However, we define a novel condition on meeting technologies, which we call “invariance,” and show that the transfer is equal to zero if and only if the meeting technology satisfies this condition.
Abstract:
We develop a life-cycle model of the labor market in which different worker-firm matches differ in quality, and the assignment of the right workers to the right firms is time-consuming because of search and learning frictions. The rate at which workers move between unemployment, employment and across different firms is endogenous because search is directed and, hence, workers can choose whether to seek low-wage jobs that are easy to find or high-wage jobs that are hard to find. We calibrate our theory using data on labor market transitions aggregated across workers of different ages. We validate our theory by showing that it predicts the pattern of labor market transitions for workers of different ages quite well. Finally, we use our theory to decompose the age profiles of transition rates, wages and productivity into the effects of age variation in work-life expectancy, human capital and match quality.