961 results for "number of patent applications"
Abstract:
To facilitate large-scale genotype analysis, an efficient PCR-based multiplex approach has been developed. To amplify the target sequences at a large number of genetic loci simultaneously, locus-specific primers containing 5' universal tails are used. Attaching the universal tails to the target sequences in the initial PCR steps allows all specific primers to be replaced with a pair of primers identical to the universal tails, converting the multiplex amplification into a "uniplex" one. Simultaneous amplification of 26 genetic loci with this approach is described. The multiplex amplification can be coupled with genotype determination. By incorporating into the target sequences a single-base mismatch between a primer and the template, a polymorphic site can be converted into a suitable restriction fragment length polymorphism when necessary. In this way, the allelic PCR products for the polymorphic loci can be discriminated by gel electrophoresis after restriction enzyme digestion. In this study, 32 loci were typed in such a multiplex way.
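For illustration only, the short Python sketch below mimics the allele-discrimination step described above: a primer-introduced mismatch creates an EcoRI recognition site (GAATTC) in the product of one allele but not the other, so the two alleles yield different restriction-fragment patterns on a gel. The sequences, the enzyme and the fragment lengths are hypothetical and are not taken from the study.

```python
# Hypothetical illustration of mismatch-PCR-RFLP allele discrimination: the forward
# primer introduces a base change that, together with one allele of the SNP, creates
# an EcoRI site (GAATTC) absent from the other allele's product.
ECORI_SITE = "GAATTC"

def digestion_fragments(amplicon, site=ECORI_SITE):
    """Return fragment lengths after cutting at every occurrence of the recognition site."""
    fragments, start = [], 0
    pos = amplicon.find(site)
    while pos != -1:
        cut = pos + 1          # EcoRI cuts between G and AATTC (G^AATTC)
        fragments.append(cut - start)
        start = cut
        pos = amplicon.find(site, pos + 1)
    fragments.append(len(amplicon) - start)
    return fragments

# Hypothetical 60-bp amplicons differing only at the SNP position (A vs. C),
# with the primer-introduced mismatch already incorporated upstream of the SNP.
allele_a = "ACGTACGTACGTACGTACGTGAATTCACGTACGTACGTACGTACGTACGTACGTACGTAC"
allele_c = "ACGTACGTACGTACGTACGTGACTTCACGTACGTACGTACGTACGTACGTACGTACGTAC"

for name, seq in [("allele A", allele_a), ("allele C", allele_c)]:
    print(name, digestion_fragments(seq))   # distinct fragment patterns identify the allele
```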
Abstract:
In this paper we examine the time T to reach a critical number K0 of infections during an outbreak in an epidemic model with infective and susceptible immigrants. The underlying process X, first introduced by Ridler-Rowe (1967), is related to recurrent diseases and appears to be analytically intractable. We present an approximating model inspired by the use of extreme values, and we derive formulae for the Laplace-Stieltjes transform of T and its moments, which are evaluated using an iterative procedure. Numerical examples illustrate the effects of the contact and removal rates on the expected values of T and the threshold K0 when the initial time instant corresponds to an invasion time. We also study the exact reproduction number Rexact,0 and the population transmission number Rp, which are random versions of the basic reproduction number R0.
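For context, the following standard relation (a textbook identity, not a formula taken from the paper) links the Laplace-Stieltjes transform of the hitting time T to its moments, assuming T is a non-negative random variable with finite moments:

```latex
\mathcal{L}_T(s) = \mathbb{E}\left[e^{-sT}\right], \quad s \ge 0,
\qquad
\mathbb{E}\left[T^n\right] = (-1)^n \left.\frac{d^n}{ds^n}\,\mathcal{L}_T(s)\right|_{s=0},
\quad n = 1, 2, \dots
```

An iterative numerical procedure such as the one mentioned in the abstract would evaluate the transform and recover the expected value and higher moments of T from these derivatives.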
Abstract:
This dissertation introduces an approach to generating tests that exercise fail-safe behavior in web applications, and applies it to a commercial web application. We build models for both behavioral and mitigation requirements. We create mitigation tests from an existing functional black-box test suite by determining the failure types and points of failure in the suite and weaving in the required mitigation, based on weaving rules, to generate a test suite that checks proper mitigation of failures. A genetic algorithm (GA) is used to determine the points of failure and the types of failure that need to be tested. Mitigation test paths are woven into the behavioral test at the point of failure according to failure-specific weaving rules. A simulator was developed to evaluate the choice of parameters for the GA. We show how to tune the fitness function and perform tuning experiments to determine what values to use for the exploration and prospecting weights. We found that higher mitigation-defect densities make prospecting and mining more successful, while lower densities require more exploration. We then compare the efficiency and effectiveness of the approach. First, the GA is compared to random selection: the GA performs better, and the approach remains robust as the search space grows. Second, we compare the GA against four coverage criteria: the test requirements generated by the GA are more efficient than three of the four criteria for large search spaces and equally effective, whereas for small search spaces the GA is less effective than those three criteria. The fourth coverage criterion is too weak and fails to find all defects in almost all cases. We also present a large case study of a mortgage system at one of our industrial partners and show how we formalize the approach. We evaluate the use of a GA to create test requirements, including the choice of initial population, the multiplicity of runs, and the cost of evaluating fitness. Finally, we build a selective regression testing approach based on the types of changes (add, delete, or modify) that can occur in the behavioral model, the fault model, the mitigation models, the weaving rules, and the state-event matrix, providing a systematic method by showing the formalization steps for each type of change to the various models.
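To make the search step concrete, here is a minimal Python sketch of a genetic algorithm that selects (test, point of failure, failure type) triples; the chromosome encoding, the fitness weights for exploration and prospecting, and the defect oracle are all illustrative assumptions rather than the dissertation's actual implementation.

```python
import random

random.seed(1)

NUM_TESTS, NUM_STEPS, NUM_FAILURE_TYPES = 20, 15, 4

# Hypothetical defect oracle: a few (test, step, failure type) triples hide mitigation defects.
DEFECTS = {(3, 7, 1), (11, 2, 0), (17, 14, 3)}

def random_individual():
    # Chromosome: which behavioral test to fail, at which step, and with which failure type.
    return (random.randrange(NUM_TESTS),
            random.randrange(NUM_STEPS),
            random.randrange(NUM_FAILURE_TYPES))

def fitness(ind, explored, w_explore=0.4, w_prospect=0.6):
    # Exploration rewards triples not tried before; prospecting rewards closeness
    # to regions that already revealed defects (here: the known defect triples).
    novelty = 0.0 if ind in explored else 1.0
    nearness = max(1.0 / (1 + abs(ind[0] - d[0]) + abs(ind[1] - d[1])) for d in DEFECTS)
    return w_explore * novelty + w_prospect * nearness

def crossover(a, b):
    return tuple(a[i] if random.random() < 0.5 else b[i] for i in range(3))

def mutate(ind, rate=0.2):
    limits = (NUM_TESTS, NUM_STEPS, NUM_FAILURE_TYPES)
    return tuple(random.randrange(limits[i]) if random.random() < rate else g
                 for i, g in enumerate(ind))

population = [random_individual() for _ in range(30)]
explored = set()
for generation in range(50):
    ranked = sorted(population, key=lambda ind: fitness(ind, explored), reverse=True)
    explored.update(population)
    parents = ranked[:10]  # truncation selection
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(20)]

print("defect-revealing failure points found:", DEFECTS & explored)
```

Tuning w_explore and w_prospect mirrors the trade-off discussed above: with few hidden mitigation defects, a larger exploration weight keeps the population from clustering around already-examined triples.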
Abstract:
We estimated the number of colors perceived by color-normal and color-deficient observers when looking at the theoretical limits of object-color stimuli. These limits, the optimal color stimuli, were computed for a color-normal observer and CIE standard illuminant D65, and the resulting colors were expressed in the CIELAB and DIN99d color spaces. The corresponding color volumes for abnormal color vision were computed using models that simulate, for normal trichromatic observers, the appearance seen by dichromats and anomalous trichromats. The number of colors perceived in each case was then computed from the color volumes enclosed by the optimal colors, also known as the MacAdam limits. It was estimated that dichromats perceive less than 1% of the colors perceived by normal trichromats and that anomalous trichromats perceive 50%–60% for anomalies in the medium-wavelength-sensitive cones and 60%–70% for anomalies in the long-wavelength-sensitive cones. Complementary estimates obtained similarly for the spectral locus of monochromatic stimuli suggest less impairment for color-deficient observers, a fact explained by the two-dimensional nature of the locus.
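As a rough illustration of the volume-ratio computation, the hypothetical Python sketch below compares the CIELAB color-solid volumes of two observers using convex hulls. The boundary samples are random placeholders; the actual computation would use the optimal-color (MacAdam-limit) surfaces derived from the observer models under illuminant D65.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

# Placeholder boundary samples in CIELAB (L*, a*, b*). In the real computation these
# would be points on the optimal-color (MacAdam-limit) surface for each observer.
normal_boundary = rng.normal(size=(500, 3)) * [20.0, 60.0, 60.0] + [50.0, 0.0, 0.0]
deficient_boundary = rng.normal(size=(500, 3)) * [20.0, 6.0, 60.0] + [50.0, 0.0, 0.0]

# The number of discernible colors is taken as proportional to the enclosed color volume.
v_normal = ConvexHull(normal_boundary).volume
v_deficient = ConvexHull(deficient_boundary).volume

print(f"estimated fraction of colors perceived: {v_deficient / v_normal:.1%}")
```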
Abstract:
Purpose: Citations received by the papers published in a journal increase its bibliometric impact. The objective of this paper was to assess the influence of publication language, article type, number of authors, and year of publication on the citations received by papers published in Gaceta Sanitaria, a Spanish-language journal of public health. Methods: The information sources were the journal website and the Web of Knowledge of the Institute for Scientific Information. The period analyzed was 2007 to 2010. We included original articles, brief original articles, and reviews published within that period. We manually extracted information on the variables analyzed and differentiated between total citations and self-citations. We constructed logistic regression models to analyze the probability of a Gaceta Sanitaria paper being cited or not, taking into account the aforementioned independent variables. We also analyzed the probability of receiving citations from non-Spanish authors. Results: Two hundred forty papers fulfilled the inclusion criteria. They received a total of 287 citations, which fell to 202 after excluding self-citations. The only variable influencing the probability of being cited was the publication year. After excluding never-cited papers, time since publication and review articles had the highest probabilities of being cited. Papers in English and review articles had a higher probability of citation by non-Spanish authors. Conclusions: Publication language has no influence on the citations received by a national, non-English journal. Reviews in English have the highest probability of citation from abroad. Editors should decide how to use this information when setting policies to raise the bibliometric impact factor of their journals.
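A minimal sketch of the kind of logistic regression described in the Methods (the data and column names below are hypothetical placeholders, not the authors' dataset), modelling whether a paper receives any citation as a function of language, article type, number of authors and publication year:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical dataset: one row per paper published between 2007 and 2010.
papers = pd.DataFrame({
    "cited":        [1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
    "language":     ["es", "es", "en", "es", "en", "es", "es", "en", "es", "es"],
    "article_type": ["original", "brief", "review", "original", "original",
                     "review", "brief", "original", "original", "review"],
    "n_authors":    [3, 2, 5, 4, 1, 6, 2, 3, 4, 5],
    "year":         [2007, 2008, 2008, 2009, 2009, 2010, 2007, 2010, 2009, 2008],
})

X = pd.get_dummies(papers[["language", "article_type", "n_authors", "year"]], drop_first=True)
y = papers["cited"]

# Regularised logistic regression for the probability of being cited at least once.
model = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name:25s} {coef:+.3f}")
```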
Empirical study on the maintainability of Web applications: Model-driven Engineering vs Code-centric
Abstract:
Model-driven Engineering (MDE) approaches are often claimed to improve the maintainability of the resulting applications. However, there is little empirical evidence backing these claimed benefits and limitations relative to code-centric approaches. The purpose of this paper is to compare the performance and satisfaction of junior software maintainers while executing maintainability tasks on Web applications built with two different development approaches: OOH4RIA, a model-driven approach, and a code-centric approach based on Visual Studio .NET and the Agile Unified Process. We conducted a quasi-experiment with 27 graduate students from the University of Alicante. They were randomly divided into two groups, and each group was assigned a different Web application on which they performed a set of maintainability tasks. The results show that maintaining Web applications with OOH4RIA clearly improves the performance of subjects. It also tips the satisfaction balance in favor of OOH4RIA, although not significantly. Model-driven development methods seem to improve both the developers' objective performance and their subjective opinions on ease of use of the method. Nevertheless, further experimentation is needed to generalize the results to other populations, methods, languages and tools, domains, and application sizes.
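As an illustration of how such a between-groups comparison of maintenance performance is commonly tested (the completion times below are invented placeholders, not the study's measurements), a non-parametric Mann-Whitney U test in Python:

```python
from scipy.stats import mannwhitneyu

# Hypothetical maintenance-task completion times (minutes) for the two groups.
ooh4ria_group = [32, 41, 28, 35, 30, 38, 27, 33, 36, 29, 31, 34, 40]
code_centric_group = [45, 52, 48, 39, 55, 50, 44, 47, 60, 42, 49, 53, 46, 51]

# One-sided test: are times with the model-driven approach stochastically smaller?
stat, p_value = mannwhitneyu(ooh4ria_group, code_centric_group, alternative="less")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```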
Abstract:
Array measurements have become a valuable tool for non-invasive site response characterization. The array design, i.e. its size, geometry and number of stations, has a great influence on the quality of the results. Of these parameters, the number of available stations is usually the main limitation in field experiments because of the economic and logistical constraints involved. Sometimes one or more stations of the layout carefully designed before the fieldwork campaign do not work properly, modifying the prearranged geometry; at other times it is not possible to set up the desired layout at all because of a lack of stations. Therefore, for a planned array layout, the number of operative stations and their arrangement become a crucial point in the acquisition stage and, subsequently, in the dispersion curve estimation. In this paper we carry out an experimental study to determine the minimum number of stations that still provides reliable dispersion curves for three prearranged array configurations (triangular, circular with a central station, and polygonal geometries). For the optimization study, we jointly analyze the theoretical array responses and the experimental dispersion curves obtained with the f-k method. For the f-k method, we compare the dispersion curves obtained for the original, prearranged arrays with those obtained for the modified arrays, i.e. the curves obtained when a certain number of stations n is removed, each time, from the original layout of X geophones. The comparison is evaluated by means of a misfit function, which helps us determine how strongly the studied geometries are constrained by station removal and which station, or combination of stations, most degrades the array capability when unavailable. This information may be crucial for improving future array designs, indicating when the number of deployed stations can be reduced without losing the reliability of the results.
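A minimal sketch of a misfit measure between the dispersion curve of the full array and that of a reduced array (the RMS relative difference used here is an assumption for illustration; the paper's exact misfit definition may differ):

```python
import numpy as np

def dispersion_misfit(c_reference, c_reduced):
    """RMS relative difference between two phase-velocity curves sampled
    at the same frequencies (e.g. full array vs. array with stations removed)."""
    c_reference = np.asarray(c_reference, dtype=float)
    c_reduced = np.asarray(c_reduced, dtype=float)
    return np.sqrt(np.mean(((c_reduced - c_reference) / c_reference) ** 2))

# Hypothetical Rayleigh-wave phase velocities (m/s) at common frequencies (Hz).
frequencies = np.array([2.0, 3.0, 4.0, 6.0, 8.0, 10.0])
c_full = np.array([900.0, 760.0, 640.0, 480.0, 390.0, 340.0])       # full triangular array
c_minus_two = np.array([910.0, 775.0, 660.0, 500.0, 405.0, 350.0])  # two stations removed

print(f"misfit over {len(frequencies)} frequency samples = "
      f"{dispersion_misfit(c_full, c_minus_two):.3f}")
```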
Abstract:
This article continues the investigation of stationarity and regularity properties of infinite collections of sets in a Banach space started in Kruger and López (J. Optim. Theory Appl. 154(2), 2012), and is mainly focused on the application of the stationarity criteria to infinitely constrained optimization problems. We consider several settings of optimization problems which involve (explicitly or implicitly) infinite collections of sets and deduce for them necessary conditions characterizing stationarity in terms of dual space elements—normals and/or subdifferentials.
Abstract:
This introduction provides an overview of the state of the art in Applications of Natural Language to Information Systems. Specifically, we analyze why such technologies are needed to address the new challenges of modern information systems, in which exploiting the Web as a main data source for business systems becomes a key requirement. We also discuss why Human Language Technologies themselves have shifted their focus onto new areas of interest closely linked to the development of technology for processing and understanding Web 2.0, technologies that are expected to become the interfaces of future information systems. Finally, we review current topics of interest to this research community and present the selection of manuscripts chosen by the program committee of the NLDB 2011 conference as representative cornerstone research works, highlighting in particular their contribution to the advancement of such technologies.
Abstract:
This study aimed to determine the level of practical computer experience in a sample of Spanish nursing students. Each student was given a Spanish-language questionnaire, adapted from one used previously with medical students at the Medical School of North Carolina University (USA) and also at the Education Unit of Hospital General Universitario del Mar (Spain). The 10-item self-report questionnaire asked about practical experience with computers. A total of 126 students made up the sample; the majority were female (80.2%, n=101). The results showed that just over half (57.1%, n=72) of the students had used a computer game three or more times, and that only about a third (37.3%, n=47) had experience using a word-processing package. Other applications and IT-based facilities (e.g. statistical packages, e-mail, databases, CD-ROM searches, programming languages and computer-assisted learning) had never been used by the majority of students. The student nurses' practical experience was less than that reported for medical students in previous studies.