963 results for COUNT DATA MODELS
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Refinement in software engineering allows a specification to be developed in stages, with design decisions taken at earlier stages constraining the design at later stages. Refinement in complex data models is difficult due to the lack of a way of defining constraints that can be progressively maintained over increasingly detailed refinements. Category theory provides a way of stating wide-scale constraints. These constraints lead to a set of design guidelines that maintain the wide-scale constraints under increasing detail. Previous methods of refinement are essentially local, and the proposed method interferes very little with these local methods. The result is particularly applicable to semantic web applications, where ontologies provide systems of more or less abstract constraints on systems, which must be implemented, and therefore refined, by participating systems. With the approach of this paper, the concept of committing to an ontology carries much more force. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Chambers and Quiggin (2000) use state-contingent representations of risky production technologies to establish important theoretical results concerning producer behavior under uncertainty. Unfortunately, perceived problems in the estimation of state-contingent models have limited the usefulness of the approach in policy formulation. We show that fixed and random effects state-contingent production frontiers can be conveniently estimated in a finite mixtures framework. An empirical example is provided. Compared to conventional estimation approaches, we find that estimating production frontiers in a state-contingent framework produces significantly different estimates of elasticities, firm technical efficiencies, and other quantities of economic interest.
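As a rough illustration of the finite-mixtures idea, the sketch below fits a two-component mixture of linear production frontiers by EM, treating the unobserved state of nature as a latent class. It is a generic mixture-of-regressions sketch on simulated data, not the authors' fixed- or random-effects estimator; the variable names and the data-generating process are invented for illustration.

```python
# Generic EM for a two-component mixture of regressions: each component
# plays the role of one state of nature. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: log input x, log output y generated from two latent states
n = 500
x = rng.normal(size=n)
state = rng.integers(0, 2, size=n)            # unobserved state of nature
beta = np.array([[1.0, 0.6], [0.2, 1.1]])     # state-specific (intercept, slope)
y = beta[state, 0] + beta[state, 1] * x + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), x])
pi = np.array([0.5, 0.5])                     # mixing weights
b = np.array([[0.0, 0.5], [0.5, 1.5]])        # initial coefficients
sigma = np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability that each observation came from each state
    dens = np.stack([
        pi[k] * np.exp(-0.5 * ((y - X @ b[k]) / sigma[k]) ** 2) / sigma[k]
        for k in range(2)
    ])
    w = dens / dens.sum(axis=0)
    # M-step: weighted least squares per component
    for k in range(2):
        W = w[k]
        WX = X * W[:, None]
        b[k] = np.linalg.solve(X.T @ WX, WX.T @ y)
        resid = y - X @ b[k]
        sigma[k] = np.sqrt((W * resid ** 2).sum() / W.sum())
        pi[k] = W.mean()

print("estimated state-specific coefficients:\n", b)
```

Each observation's posterior weight over states is what a state-contingent analysis would then feed into elasticity and efficiency calculations.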
Abstract:
Background: The 2001 Australian census revealed that adults aged 65 years and over constituted 12.6% of the population, up from 12.1% in 1996. This figure is projected to rise to 21%, or 5.1 million Australians, by 2031. In 1998, 6% (134 000) of adults in Australia aged 65 years and over were residing in nursing homes or hostels, and this number is also expected to rise. As skin ages, there is decreased turnover and replacement of epidermal skin cells, a thinning subcutaneous fat layer and reduced production of protective oils. These changes can affect the normal functions of the skin, such as acting as a barrier to irritants and pathogens and regulating temperature and water. Generally, placement in a long-term care facility indicates an inability of the older person to perform all of the activities of daily living, such as skin care. Therefore, skin care management protocols should be available to reduce the likelihood of skin irritation and breakdown and ultimately promote the comfort of the older person.
Objectives: The objective of this review was to determine the best available evidence for the effectiveness and safety of topical skin care regimens for older adults residing in long-term aged care facilities. The primary outcome was the incidence of adverse skin conditions, with patient satisfaction considered as a secondary outcome.
Search strategy: A literature search was performed using the following databases: PubMed (NLM) (1966–4/2003), Embase (1966–4/2003), CINAHL (1966–4/2003), Current Contents (1993–4/2003), Cochrane Library (1966–2/2003), Web of Science (1995–12/2002), Science Citation Index Expanded and ProceedingsFirst (1993–12/2002). Health Technology Assessment websites were also searched. No language restrictions were applied.
Selection criteria: Systematic reviews of randomised controlled trials, and randomised and non-randomised controlled trials evaluating any non-medical intervention or program that aimed to maintain or improve the integrity of skin in older adults, were considered for inclusion. Participants were 65 years of age or over and residing in an aged care facility, hospital or long-term care in the community. Studies were excluded if they evaluated pressure-relieving techniques for the prevention of skin breakdown.
Data collection and analysis: Two independent reviewers assessed study eligibility for inclusion. Study design and quality were tabulated, and relative risks, odds ratios, mean differences and associated 95% confidence intervals were calculated from individual comparative studies containing count data.
Results: The evidence for the effectiveness of topical skin care interventions was variable and dependent on the skin condition outcome being assessed. The strongest evidence for maintenance of skin condition in incontinent patients found that disposable bodyworn incontinence protection reduced the odds of deterioration of skin condition compared with non-disposable bodyworns. The best evidence for non-pressure-relieving topical skin care interventions on pressure sore formation found the no-rinse cleanser Clinisan to be more effective than soap and water at maintaining healthy skin (no ulcers) in elderly incontinent patients in long-term care. The quality of studies examining the effectiveness of topical skin care interventions on the incidence of skin tears was very poor, and their findings inconclusive. For prevention of dermatitis, Sudocrem reduced the redness of skin compared with zinc cream when applied regularly after each pad change, but not the number of lesions. For dry skin, the Bag Bath/Travel Bath no-rinse skin care cleanser was more effective at preventing overall skin dryness, and most specifically flaking and scaling, than the traditional soap and water washing method in residents of a long-term care facility. Information on the safety of topical skin care interventions is lacking; because of this lack of evidence, no recommendation can be made on the safety of any intervention included in this review.
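The odds ratios and confidence intervals mentioned under data collection and analysis can be illustrated with a short sketch. The 2x2 counts below are hypothetical, not taken from any study in the review; the interval uses the standard log-odds (Woolf) approximation.

```python
# Odds ratio and 95% CI from a 2x2 table of count data (hypothetical counts).
import math

a, b = 12, 38   # intervention group: events / non-events (illustrative)
c, d = 25, 25   # control group: events / non-events (illustrative)

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), Woolf method
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```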
Abstract:
Background: The identification and characterization of genes that influence the risk of common, complex multifactorial disease primarily through interactions with other genes and environmental factors remains a statistical and computational challenge in genetic epidemiology. We have previously introduced a genetic programming optimized neural network (GPNN) as a method for optimizing the architecture of a neural network to improve the identification of gene combinations associated with disease risk. The goal of this study was to evaluate the power of GPNN for identifying high-order gene-gene interactions. We were also interested in applying GPNN to a real data analysis in Parkinson's disease. Results: We show that GPNN has high power to detect even relatively small genetic effects (2–3% heritability) in simulated data models involving two- and three-locus interactions. The limits of detection were reached under conditions with very small heritability (…)
Abstract:
Semantic data models provide a map of the components of an information system. The characteristics of these models affect their usefulness for various tasks (e.g., information retrieval). The quality of information retrieval has obvious and important consequences, both economic and otherwise. Traditionally, database designers have produced parsimonious logical data models. In spite of their increased size, ontologically clearer conceptual models have been shown to facilitate better performance for both problem solving and information retrieval tasks in experimental settings. The experiments producing evidence of enhanced performance for ontologically clearer models have, however, used application domains of modest size. Data models in organizational settings are likely to be substantially larger than those used in these experiments. This research used an experiment to investigate whether the benefits of improved information retrieval performance associated with ontologically clearer models are robust as the size of the application domain increases. The experiment used an application domain approximately twice the size of those tested in prior experiments. The results indicate that, relative to users of the parsimonious implementation, end users of the ontologically clearer implementation made significantly more semantic errors, took significantly more time to compose their queries, and were significantly less confident in the accuracy of their queries.
Abstract:
This report presents and evaluates a novel idea for scalable lossy colour image coding with Matching Pursuit (MP) performed in a transform domain. The benefits of performing MP in the transform domain are analysed in detail. The main contribution of this work is extending MP with wavelets to colour coding and proposing a coding method. We exploit correlations between image subbands after wavelet transformation in RGB colour space. Then, a new and simple quantisation and coding scheme for the colour MP decomposition, based on Run Length Encoding (RLE) and inspired by the idea of coding indexes in relational databases, is applied. As a final coding step, arithmetic coding is used, assuming uniform distributions of MP atom parameters. The target application is compression at low and medium bit-rates. Coding performance is compared with JPEG 2000, showing the potential to outperform the latter if the arithmetic coder is given data models more sophisticated than the uniform one. Results are presented for grayscale and colour coding of 12 standard test images.
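The run-length coding step lends itself to a small illustration. This is an assumed, generic RLE over a sparse index sequence, not the paper's exact quantisation and coding scheme; the point is that long runs of identical symbols between significant MP atoms compress to short (symbol, run length) pairs.

```python
# Generic run-length encoding/decoding over a sparse index map (illustrative).
from itertools import groupby

def rle_encode(symbols):
    """Encode a sequence as (symbol, run_length) pairs."""
    return [(s, len(list(g))) for s, g in groupby(symbols)]

def rle_decode(pairs):
    """Expand (symbol, run_length) pairs back into the original sequence."""
    return [s for s, n in pairs for _ in range(n)]

index_map = [0, 0, 0, 5, 0, 0, 0, 0, 2, 0, 0, 7]   # illustrative atom indices
encoded = rle_encode(index_map)
assert rle_decode(encoded) == index_map
print(encoded)   # [(0, 3), (5, 1), (0, 4), (2, 1), (0, 2), (7, 1)]
```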
Abstract:
Database systems have a user interface, one component of which is normally a query language based on a particular data model. Typically, data models provide primitives to define, manipulate and query databases, and often these primitives are designed to form self-contained query languages. This thesis describes a prototype implementation of a system which allows users to specify queries against the database in a query language whose primitives are not those provided by the model on which the database system is actually based, but those provided by a different data model. The implementation chosen is the Functional Query Language Front End (FQLFE), which uses the Daplex functional data model and query language. Using FQLFE, users can specify the underlying database (based on the relational model) in terms of Daplex, and queries against this specified view can then be made in Daplex. FQLFE transforms these queries into the query language (Quel) of the underlying target database system (Ingres). The automation of part of the Daplex function definition phase is also described and its implementation discussed.
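To make the front end's role concrete, the sketch below shows a hypothetical translation of one simple Daplex-style query into Quel. The schema, function names and the single translation rule are invented for illustration; FQLFE's actual mapping is far more general.

```python
# Hypothetical Daplex-to-Quel translation for one query shape (illustrative).
def daplex_to_quel(entity, attribute, pred_attr, pred_value):
    """Translate a simple 'for each E such that P print f(E)' query."""
    daplex = (f"for each {entity} such that "
              f"{pred_attr}({entity}) = \"{pred_value}\" "
              f"print {attribute}({entity})")
    quel = (f"range of e is {entity}\n"
            f"retrieve (e.{attribute}) where e.{pred_attr} = \"{pred_value}\"")
    return daplex, quel

d, q = daplex_to_quel("employee", "name", "dept", "sales")
print("Daplex:", d)
print("Quel:\n" + q)
```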
Abstract:
Functional programming has a lot to offer to developers of global Internet-centric applications, but is often applicable only to a small part of the system or requires major architectural changes. The data model used for functional computation is often simply considered a consequence of the chosen programming style, although an inappropriate choice of model can make integration with imperative parts much harder. In this paper we do the opposite: we start from a data model based on JSON and then derive the functional approach from it. We outline the identified principles and present Jsonya/fn, a low-level functional language that is defined in and operates on the selected data model. We use several Jsonya/fn implementations and the architecture of a recently developed application to show that our approach can improve interoperability and achieve additional reuse of representations and operations at relatively low cost. ACM Computing Classification System (1998): D.3.2, D.3.4.
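A minimal sketch of the starting point as described: take a JSON value as the shared data model and express computation as pure functions over it, so functional and imperative parts of a system exchange the same representation. Jsonya/fn itself is not shown; this is plain Python over JSON-compatible values, with an invented helper name.

```python
# Pure, non-destructive update over a JSON-like value (illustrative helper).
import json

def update_in(value, path, fn):
    """Return a copy of a JSON-like value with fn applied at path."""
    if not path:
        return fn(value)
    if isinstance(value, dict):
        head, *rest = path
        return {**value, head: update_in(value[head], rest, fn)}
    if isinstance(value, list):
        head, *rest = path
        return [update_in(v, rest, fn) if i == head else v
                for i, v in enumerate(value)]
    raise TypeError("path descends into a scalar")

doc = json.loads('{"user": {"name": "ana", "visits": 3}}')
print(update_in(doc, ["user", "visits"], lambda v: v + 1))
# {'user': {'name': 'ana', 'visits': 4}} -- the original doc is unchanged
```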
Abstract:
This dissertation focused on the longitudinal analysis of business start-ups using three waves of data from the Kauffman Firm Survey.
The first essay used the data from years 2004-2008 and examined the simultaneous relationship between a firm's capital structure, its human resource policies, and their impact on the level of innovation. Firm leverage was calculated as debt divided by total financial resources. An index of employee well-being was constructed from a set of nine dichotomous questions asked in the survey. A negative binomial fixed effects model was used to analyze the effect of employee well-being and leverage on the count of patents and copyrights, which served as a proxy for innovation. The paper demonstrated that employee well-being positively affects a firm's innovation, while a higher leverage ratio has a negative impact on innovation. No significant relation was found between leverage and employee well-being.
The second essay used the data from years 2004-2009 and asked whether a higher entrepreneurial speed of learning is desirable, and whether there is a linkage between the speed of learning and the growth rate of the firm. The change in the speed of learning was measured using a pooled OLS estimator in repeated cross-sections. There was evidence of a declining speed of learning over time, and it was concluded that a higher speed of learning is not necessarily a good thing, because the speed of learning is contingent on the entrepreneur's initial knowledge and the precision of the signals he receives from the market. Also, there was no reason to expect the speed of learning to be related to the growth of the firm in one direction over another.
The third essay used the data from years 2004-2010 and determined the timing of diversification activities by business start-ups. It captured when a start-up diversified for the first time, and explored the association between an early diversification strategy adopted by a firm and its survival rate. A semi-parametric Cox proportional hazard model was used to examine the survival pattern. The results demonstrated that firms diversifying at an early stage in their lives show a higher survival rate; however, this effect fades over time.
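The first essay's core regression can be sketched on simulated data (not the Kauffman survey): a negative binomial model of a count outcome on leverage and an employee well-being index. The coefficients and data-generating process are illustrative, and the dissertation's fixed-effects variant would additionally condition on firm effects.

```python
# Negative binomial regression of a patent count on leverage and well-being,
# on simulated data. Sign pattern mirrors the abstract's finding by design.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
leverage = rng.uniform(0, 1, n)          # debt / total financial resources
well_being = rng.integers(0, 10, n)      # index built from nine yes/no items
lam = np.exp(0.2 + 0.15 * well_being - 1.0 * leverage)
patents = rng.negative_binomial(n=2, p=2 / (2 + lam))   # overdispersed counts

X = sm.add_constant(np.column_stack([well_being, leverage]))
model = sm.NegativeBinomial(patents, X).fit(disp=0)
print(model.params)   # expect a positive well-being and negative leverage effect
```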
Abstract:
A compilation of chemical analyses of Pacific Ocean nodules using an X-ray fluorescence technique. The equipment used was a General Electric XRD-5 with a tungsten tube. Lithium fluoride was used as the diffraction element in assaying for all elements above calcium in the periodic table, and EDDT was used in conjunction with a helium path for all elements with an atomic number less than that of calcium. Flow counters were used in conjunction with a pulse height analyzer to eliminate X-ray lines of different but integral orders when gathering count data. The author found the stability of the equipment to be excellent. The equipment was calibrated using standard ores made from pure oxide forms of the elements in the nodules, carefully mixed in proportion to the amounts of these elements generally found in manganese nodules. Chemically analyzed standards of the nodules themselves were also used. As a final check, a known amount of the element in question was added to selected samples of the nodules, and careful counts were taken on these samples before and after the addition. The method involved the determination and subsequent use of absorption and activation factors for the lines of the various elements, all of which were carefully determined using the standard ores. Analysis of the chemically analyzed nodule samples by these methods yielded an accuracy of at least three significant figures.
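The spiking check described above is the method of standard additions, and the arithmetic behind it is short. Assuming count rate is proportional to concentration over the range in question, the original concentration can be recovered from the counts before and after the spike; all numbers below are illustrative.

```python
# Standard-additions back-calculation from count rates (illustrative numbers).
count_before = 1200.0      # net counts/s on the unspiked sample
count_after = 1800.0       # net counts/s after adding the spike
added_concentration = 5.0  # known amount added, in weight percent

# Assuming counts are proportional to concentration:
# count_before / c = (count_after - count_before) / added_concentration
original = count_before * added_concentration / (count_after - count_before)
print(f"estimated original concentration: {original:.1f} wt%")   # 10.0 wt%
```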
Abstract:
Poverty (low income) dynamics are explored using tax filer data covering the period 1992 to 1996. The distributions of short- and long-term episodes are identified, and reveal substantial differences by sex and family type. Entry and exit models explore the relationships between poverty transitions and sex, family status and other personal and situational attributes. Duration effects on exiting and re-entering poverty are found to be important, and models including past poverty experiences point to strong "occurrence dependence" for poverty entry and incidence. Fixed-effects panel data models confirm the above, and reveal asymmetries in the impacts of household transitions on poverty.
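As a sketch of the fixed-effects idea on simulated data (the variables and effect size are invented, not from the tax filer data), the within transformation demeans each person's series so that time-invariant individual effects drop out before estimating the impact of a household transition:

```python
# Fixed-effects (within) estimator on a simulated balanced panel.
import numpy as np

rng = np.random.default_rng(2)
n_person, n_year = 200, 5
alpha = rng.normal(size=n_person)                 # person fixed effects
divorce = rng.integers(0, 2, (n_person, n_year))  # household transition dummy
income = 2.0 + alpha[:, None] - 0.8 * divorce + rng.normal(size=(n_person, n_year))

# Demean each person's series to sweep out alpha, then run pooled OLS
y = (income - income.mean(axis=1, keepdims=True)).ravel()
x = (divorce - divorce.mean(axis=1, keepdims=True)).ravel()
beta_fe = (x @ y) / (x @ x)
print(f"fixed-effects estimate of the transition effect: {beta_fe:.2f}")  # ~ -0.8
```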
Abstract:
Archaeozoological mortality profiles have been used to infer site-specific subsistence strategies. There is, however, no common agreement on the best way to present these profiles and the confidence intervals around age class proportions. To deal with these issues, we propose the use of the Dirichlet distribution and present a new approach to performing age-at-death multivariate graphical comparisons. We demonstrate the efficiency of this approach using domestic sheep/goat dental remains from 10 Cardial sites (Early Neolithic) located in southern France and the Iberian Peninsula. We show that the Dirichlet distribution in age-at-death analysis can be used: (i) to generate Bayesian credible intervals around each age class of a mortality profile, even when not all age classes are observed; and (ii) to create 95% kernel density contours around each age-at-death frequency distribution when multiple sites are compared using correspondence analysis. The statistical procedure we present is applicable to the analysis of any categorical count data and is particularly well-suited to archaeological data (e.g. potsherds, arrowheads) where sample sizes are typically small.
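Point (i) can be sketched directly: with a uniform Dirichlet prior (an assumption here; the paper's prior may differ), the posterior over age-class proportions given the counts is again Dirichlet, and credible intervals follow from sampling. Note that the zero-count class still receives a well-defined interval, which is the property the abstract highlights.

```python
# Bayesian credible intervals for age-class proportions from count data,
# via a Dirichlet posterior. The counts are illustrative, not Cardial data.
import numpy as np

counts = np.array([3, 7, 12, 5, 0, 2])     # deaths per age class (note the zero)
alpha = counts + 1                          # Dirichlet(1,...,1) prior -> posterior

rng = np.random.default_rng(3)
draws = rng.dirichlet(alpha, size=10_000)   # posterior samples of proportions

lower, upper = np.percentile(draws, [2.5, 97.5], axis=0)
for k, (lo_, hi_) in enumerate(zip(lower, upper)):
    print(f"age class {k}: 95% credible interval ({lo_:.2f}, {hi_:.2f})")
```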
Abstract:
Detailed knowledge of genetic diversity among germplasm is important for hybrid maize (Zea mays L.) breeding. The objective of the study was to determine genetic diversity in widely grown hybrids in Southern Africa, and to compare the effectiveness of phenotypic analysis models for determining genetic distances between hybrids. Fifty hybrids were evaluated at one site with two replicates, in a randomized complete block design. Phenotypic and genotypic data were analyzed using SAS and PowerMarker, respectively. There was significant (p < 0.01) variation and diversity among hybrid brands but little diversity within brand clusters. Polymorphic Information Content (PIC) ranged from 0.07 to 0.38 with an average of 0.34, and genetic distance ranged from 0.08 to 0.50 with an average of 0.43. SAH23 and SAH21 (0.48) and SAH33 and SAH3 (0.47) were the most distantly related hybrids. Both single nucleotide polymorphism (SNP) markers and phenotypic data models were effective for discriminating genotypes according to genetic distance. SNP markers revealed nine clusters of hybrids. The 12-trait phenotypic analysis model revealed eight clusters at 85%, while the five-trait model revealed six clusters. Path analysis revealed significant direct and indirect effects of secondary traits on yield. Plant height and ear height were negatively correlated with grain yield, meaning that shorter hybrids gave higher yield. Ear weight, days to anthesis, and number of ears had the highest positive direct effects on yield. These traits can provide a good selection index for high-yielding maize hybrids. The results confirmed that diversity among hybrids is small within brands, and that phenotypic trait models are effective for discriminating hybrids.
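The PIC values reported above follow the standard formula (commonly attributed to Botstein et al., 1980), computed from allele frequencies; the frequencies below are illustrative. For a biallelic SNP the statistic tops out at 0.375, consistent with the 0.07-0.38 range in the abstract.

```python
# Polymorphic Information Content from allele frequencies (illustrative values).
def pic(freqs):
    """PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    het = 1.0 - sum(p * p for p in freqs)
    correction = sum(2 * freqs[i] ** 2 * freqs[j] ** 2
                     for i in range(len(freqs))
                     for j in range(i + 1, len(freqs)))
    return het - correction

print(round(pic([0.5, 0.5]), 3))   # 0.375, the biallelic maximum
print(round(pic([0.9, 0.1]), 3))   # 0.164, a less informative marker
```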