74 results for Computer- and videogames
Abstract:
The aims of this dissertation were 1) to investigate associations of adolescents' weight status with leisure activities and with computer and cell phone use, and 2) to investigate environmental and genetic influences on body mass index (BMI) during adolescence. Finnish twins born in 1983–1987 responded to postal questionnaires at the ages of 11–12 (5184 participants), 14 (4643 participants), and 17 years (4168 participants). Information was obtained on weight and height and on leisure activities, including television viewing, video viewing, computer games, listening to music, board games, musical instrument playing, reading, arts, crafts, socializing, clubs, sports, and outdoor activities, as well as computer and cell phone use. Activity patterns were studied using latent class analysis. The relationship between leisure activities and weight status was investigated using logistic and linear regression. Genetic and environmental effects on BMI were studied using twin modeling. Of the individual leisure activities, sports were associated with decreased overweight risk among boys in both cross-sectional and longitudinal analyses, but among girls only cross-sectionally. Many sedentary leisure activities, such as video viewing (boys/girls), arts (boys), listening to music (boys), crafts (girls), and board games (girls), were positively associated with being overweight. Computer use was associated with a higher prevalence of overweight in cross-sectional analyses. However, musical instrument playing, commonly considered a sedentary activity, was associated with a decreased overweight risk among boys. Four patterns of leisure activities were found: ‘Active and sociable’, ‘Active but less sociable’, ‘Passive but sociable’, and ‘Passive and solitary’. The prevalence of overweight was generally highest among the ‘Passive and solitary’ adolescents. Overall, leisure activity patterns did not predict overweight risk later in adolescence.
An exception was the 14-year-old ‘Passive and solitary’ girls, who had the greatest risk of becoming overweight by 17 years of age. The heritability of BMI was high (0.58–0.83). Common environmental factors shared by family members affected BMI at 11–12 and 14 years, but their effect had disappeared by 17 years of age. Additive genetic factors explained 90–96% of BMI stability across adolescence. Genetic correlations across adolescence were high, which suggests similar genetic effects on BMI throughout adolescence, while unique environmental effects on BMI appeared to vary. These findings suggest that family-based interventions hold promise for obesity prevention in early and middle adolescence, but that later in adolescence obesity prevention should focus on individuals. A useful target could be adolescents' leisure time, and our findings highlight the importance of versatility in leisure activities.
Abstract:
A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and the genome sequences of individuals, where the differences can be expressed as lists of basic edit operations. Flexible and efficient data analysis on such a typically huge collection is feasible using suffix trees. However, a suffix tree occupies O(N log N) bits, which quickly rules out in-memory analyses. Recent advances in full-text self-indexing reduce the space of the suffix tree to O(N log σ) bits, where σ is the alphabet size. In practice, the space reduction is more than 10-fold, for example on the suffix tree of the Human Genome. However, this reduction factor remains constant as more sequences are added to the collection. We develop a new family of self-indexes suited to the repetitive sequence collection setting. Their expected space requirement depends only on the length n of the base sequence and the number s of variations in its repeated copies. That is, the space reduction factor is no longer constant but depends on N / n. We believe the structures developed in this work will provide a fundamental basis for the storage and retrieval of individual genomes as they become available thanks to rapid progress in sequencing technologies.
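The space argument rests on a simple observation that a short sketch can make concrete: a repetitive collection is fully determined by one base sequence plus a small list of variations per copy, so its information content scales with n + s rather than N. The sketch below is purely illustrative (it is not the self-index developed in the work; the representation and names are assumptions for the example):

```python
# Illustrative only: represent a repetitive collection as one base
# sequence plus a list of substitutions (position, new character) per
# copy, so storage grows with n + s rather than the total length N.

def apply_edits(base, edits):
    """Reconstruct one copy of the base sequence from its edit list."""
    seq = list(base)
    for pos, ch in edits:
        seq[pos] = ch
    return "".join(seq)

base = "ACGTACGTACGT"           # base sequence of length n
copies_as_edits = [             # each copy described only by its variations
    [],                         # an identical copy
    [(3, "G")],                 # one substitution
    [(0, "T"), (7, "A")],       # two substitutions
]

collection = [apply_edits(base, e) for e in copies_as_edits]
```

A real self-index would additionally support pattern searches over the collection without materializing the copies; here the point is only that the compressed description is tiny compared with the expanded collection.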
Abstract:
The growth of the information economy has been stellar in the last decade. General-purpose technologies such as the computer and the Internet have promoted productivity growth in a large number of industries. The effect on the telecommunications, media and technology industries has been particularly strong. These industries include mobile telecommunications, printing and publishing, broadcasting, software, hardware and Internet services. There have been large structural changes, which have led to new questions about business strategies, regulation and policy. This thesis focuses on four such questions and answers them by extending the theoretical literature on platforms. The questions (with short answers) are: (i) Do we need to regulate how Internet service providers discriminate between content providers? (Yes.) (ii) What are the welfare effects of allowing consumers to pay to remove advertisements from advertisement-supported products? (Ambiguous, but those watching ads are worse off.) (iii) Why are some markets characterized by open platforms, extendable by third parties, and some by closed platforms, which are not extendable? (It is a trade-off between intensified competition for consumers and benefits from third parties.) (iv) Do private platform providers allow third parties to access their platform when it is socially desirable? (No.)
Abstract:
We present a distributed algorithm that finds a maximal edge packing in O(Δ + log* W) synchronous communication rounds in a weighted graph, independent of the number of nodes in the network; here Δ is the maximum degree of the graph and W is the maximum weight. As a direct application, we have a distributed 2-approximation algorithm for minimum-weight vertex cover, with the same running time. We also show how to find an f-approximation of minimum-weight set cover in O(f²k² + fk log* W) rounds; here k is the maximum size of a subset in the set cover instance, f is the maximum frequency of an element, and W is the maximum weight of a subset. The algorithms are deterministic, and they can be applied in anonymous networks.
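The link between edge packings and vertex cover follows the classic matching-based argument, which a centralized, unweighted sketch can illustrate (this is not the distributed weighted algorithm of the abstract, only the underlying approximation idea):

```python
# Centralized, unweighted sketch of the classic 2-approximation idea:
# the endpoints of any maximal matching form a vertex cover whose size
# is at most twice the minimum, since any cover must contain at least
# one endpoint of every matched edge. The abstract's distributed,
# weighted algorithm generalizes this via maximal edge packings.

def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:                    # greedy maximal matching
        if u not in cover and v not in cover:
            cover.add(u)                  # take both endpoints of the
            cover.add(v)                  # matched edge into the cover
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
cover = vertex_cover_2approx(edges)
```

Every edge then has at least one endpoint in `cover`, and the matching lower bound gives the factor-2 guarantee.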
Abstract:
The question of what a business-to-business (B2B) collaboration setup and enactment application system should look like remains open. An important element of such collaboration is the inter-organizational disclosure of business-process details in a way that allows the collaborating parties to protect their business secrets. For that purpose, eSourcing [37] has been developed as a general business-process collaboration concept in the framework of the EU research project CrossWork. The eSourcing characteristics guide the design and evaluation of an eSourcing Reference Architecture (eSRA) that serves as a starting point for software developers of B2B-collaboration systems. In this paper we present the results of a scenario-based evaluation method conducted with the earlier-specified eSourcing Architecture (eSA), which yields risks, sensitivity points, and tradeoff points that must be attended to if eSA is implemented. Additionally, the evaluation method detects shortcomings of eSA in terms of the integrated components that are required for electronic B2B collaboration. The evaluation results are used for the specification of eSRA, which comprises, on three refinement levels, all extensions for incorporating the results of the scenario-based evaluation.
Abstract:
In a max-min LP, the objective is to maximise ω subject to Ax ≤ 1, Cx ≥ ω1, and x ≥ 0. In a min-max LP, the objective is to minimise ρ subject to Ax ≤ ρ1, Cx ≥ 1, and x ≥ 0. The matrices A and C are nonnegative and sparse: each row a_i of A has at most Δ_I positive elements, and each row c_k of C has at most Δ_K positive elements. We study the approximability of max-min LPs and min-max LPs in a distributed setting; in particular, we focus on local algorithms (constant-time distributed algorithms). We show that for any Δ_I ≥ 2, Δ_K ≥ 2, and ε > 0 there exists a local algorithm that achieves the approximation ratio Δ_I(1 − 1/Δ_K) + ε. We also show that this result is the best possible: no local algorithm can achieve the approximation ratio Δ_I(1 − 1/Δ_K) for any Δ_I ≥ 2 and Δ_K ≥ 2.
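A max-min LP can be rewritten as an ordinary LP by treating ω as an extra variable, which a small centralized example makes concrete (this is only a toy instance solved with SciPy, assumed available here, not the local algorithm studied in the work):

```python
# Toy max-min LP: maximize ω subject to Ax ≤ 1, Cx ≥ ω1, x ≥ 0,
# solved centrally by adding ω as a variable. Not the local algorithm
# from the abstract; SciPy is an assumption of this example.
from scipy.optimize import linprog

A = [[1.0, 1.0]]                  # one packing row: x1 + x2 ≤ 1
C = [[1.0, 0.0], [0.0, 1.0]]      # covering rows: x1 ≥ ω, x2 ≥ ω

# Variables z = (x1, x2, ω); maximizing ω means minimizing -ω.
c = [0.0, 0.0, -1.0]
A_ub = [row + [0.0] for row in A]                   # Ax ≤ 1
A_ub += [[-v for v in row] + [1.0] for row in C]    # ω - Cx ≤ 0
b_ub = [1.0] * len(A) + [0.0] * len(C)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
omega = res.x[2]   # optimum ω = 0.5 at x = (0.5, 0.5) for this instance
```

For this instance the packing row forces x1 + x2 ≤ 1 while both covering rows push ω up, so the optimum balances at x = (0.5, 0.5).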
Abstract:
HFST – Helsinki Finite-State Technology (hfst.sf.net) is a framework for compiling and applying linguistic descriptions with finite-state methods. HFST currently connects some of the most important finite-state tools for creating morphologies and spellers into one open-source platform and supports extending and improving the descriptions with weights to accommodate the modeling of statistical information. HFST offers a path from language descriptions to efficient language applications in key environments and operating systems. HFST also provides an opportunity to exchange transducers between different software providers in order to get the best out of each finite-state library.
Abstract:
There are numerous formats for writing spell-checkers for open-source systems, and there are many descriptions of languages written in these formats. Similarly, for word hyphenation by computer there are TeX rules for many languages. In this paper we demonstrate a method for converting these spell-checking lexicons and hyphenation rule sets into finite-state automata, and present a new finite-state-based system for writer's tools used in current open-source software such as Firefox, OpenOffice.org and enchant, via the spell-checking library voikko.
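The finite-state view of spell checking reduces to compiling a lexicon into an automaton and asking whether a word is accepted. The following is a minimal toy acceptor (a trie-shaped deterministic automaton, not HFST or voikko; all names are illustrative):

```python
# Toy sketch of finite-state spell checking: compile a word list into a
# trie-shaped deterministic automaton and accept exactly the listed
# words. Real systems (e.g. HFST) minimize the automaton and compose it
# with error models; this only illustrates the acceptor idea.

def compile_lexicon(words):
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["#"] = True        # end-of-word marker acts as a final state
    return root

def accepts(automaton, word):
    node = automaton
    for ch in word:
        if ch not in node:
            return False        # no transition: not in the lexicon
        node = node[ch]
    return "#" in node          # accepted only in a final state

lexicon = compile_lexicon(["cat", "cats", "car"])
```

Hyphenation rules can be compiled into transducers in the same spirit, rewriting words to words-with-hyphen-points instead of merely accepting them.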