917 results for structuration of lexical data bases
Abstract:
INTRODUCTION: With the ease provided by current computational programs, medical and scientific journals use bar graphs to describe continuous data. METHODS: This manuscript discusses the inadequacy of bar graphs for presenting continuous data. RESULTS: Simulated data show that box plots and dot plots are more suitable tools for describing continuous data. CONCLUSIONS: These plots are preferred for representing continuous variables since they effectively describe the range, shape, and variability of observations and clearly identify outliers. By contrast, bar graphs address only measures of central tendency. Bar graphs should be used only to describe qualitative data.
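To illustrate the point, the following minimal sketch (an assumed example, not taken from the paper; the simulated dataset and plotting choices are the editor's) contrasts a bar graph of group means with a box plot of the same continuous data, using NumPy and matplotlib:

```python
# Minimal sketch (assumed example): the same two simulated samples
# shown as a bar graph of means vs. a box plot of full distributions.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
a = rng.normal(50, 10, 200)                   # roughly symmetric sample
b = np.concatenate([rng.normal(45, 5, 180),
                    rng.normal(90, 5, 20)])   # skewed sample with outliers

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))

# Bar graph: only the means survive; spread, shape, outliers are hidden.
ax1.bar(["A", "B"], [a.mean(), b.mean()])
ax1.set_title("Bar graph (means only)")

# Box plot: median, quartiles, range, and outliers are all visible.
ax2.boxplot([a, b])
ax2.set_xticks([1, 2])
ax2.set_xticklabels(["A", "B"])
ax2.set_title("Box plot (full distribution)")

plt.tight_layout()
plt.show()
```

Despite nearly identical means, the two groups differ sharply in shape and outliers, which only the box plot reveals.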
Abstract:
Doctoral Program in Mathematics and Applications.
Abstract:
Distributed data aggregation is an important task, allowing the decentralized determination of meaningful global properties, which can then be used to direct the execution of other applications. The resulting values come from the distributed computation of functions like count, sum, and average. Application examples include determining the network size, total storage capacity, average load, majorities, and many others. In the last decade, many different approaches have been proposed, with different trade-offs in terms of accuracy, reliability, and message and time complexity. Due to the considerable amount and variety of aggregation algorithms, it can be difficult and time consuming to determine which techniques are most appropriate in specific settings, justifying the existence of a survey to aid in this task. This work reviews the state of the art on distributed data aggregation algorithms, providing three main contributions. First, it formally defines the concept of aggregation, characterizing the different types of aggregation functions. Second, it succinctly describes the main aggregation techniques, organizing them in a taxonomy. Finally, it provides some guidelines toward the selection and use of the most relevant techniques, summarizing their principal characteristics.
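As one concrete instance of the class of techniques surveyed (an assumed illustration, not taken from the survey itself), the sketch below simulates gossip-style push-sum averaging: every node holds a (sum, weight) pair, repeatedly keeps half and pushes half to a random node, and each node's sum/weight ratio converges to the global average:

```python
# Minimal sketch (assumed illustration): push-sum gossip averaging.
# Each node holds (s, w); the local estimate s/w converges to the mean,
# because the totals of s and w are conserved across all exchanges.
import random

def push_sum(values, rounds=50, seed=1):
    random.seed(seed)
    n = len(values)
    s = list(values)          # running sums
    w = [1.0] * n             # running weights
    for _ in range(rounds):
        for i in range(n):
            # Keep half locally, push the other half to a random node.
            j = random.randrange(n)
            s_half, w_half = s[i] / 2, w[i] / 2
            s[i], w[i] = s_half, w_half
            s[j] += s_half
            w[j] += w_half
    return [si / wi for si, wi in zip(s, w)]

values = [10.0, 20.0, 30.0, 40.0]
print(push_sum(values))  # every entry approaches the average, 25.0
```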
Abstract:
Genome-scale metabolic models are valuable tools in the metabolic engineering process, based on the ability of these models to integrate diverse sources of data to produce global predictions of organism behavior. At the most basic level, these models require only a genome sequence to construct, and once built, they may be used to predict essential genes, culture conditions, pathway utilization, and the modifications required to enhance a desired organism behavior. In this chapter, we address two key challenges associated with the reconstruction of metabolic models: (a) leveraging existing knowledge of microbiology, biochemistry, and available omics data to produce the best possible model; and (b) applying available tools and data to automate the reconstruction process. We consider these challenges as we progress through the model reconstruction process, beginning with genome assembly, and culminating in the integration of constraints to capture the impact of transcriptional regulation. We divide the reconstruction process into ten distinct steps: (1) genome assembly from sequenced reads; (2) automated structural and functional annotation; (3) phylogenetic tree-based curation of genome annotations; (4) assembly and standardization of biochemistry database; (5) genome-scale metabolic reconstruction; (6) generation of core metabolic model; (7) generation of biomass composition reaction; (8) completion of draft metabolic model; (9) curation of metabolic model; and (10) integration of regulatory constraints. Each of these ten steps is documented in detail.
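Once reconstructed, such models are typically interrogated by flux balance analysis (FBA), a linear program over the stoichiometric matrix. The sketch below (an assumed toy example, not from the chapter) shows the core computation on a three-reaction network using SciPy: maximize the objective flux subject to steady-state mass balance S·v = 0 and flux bounds.

```python
# Minimal sketch (assumed toy example): flux balance analysis as an LP.
# Maximize objective flux v3 subject to S @ v = 0 and bounds on v.
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 imports A; R2 converts A -> B; R3 exports B (objective).
# Rows = metabolites (A, B); columns = reactions (R1, R2, R3).
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

c = np.array([0.0, 0.0, -1.0])            # linprog minimizes, so negate v3
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake of A capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)     # optimal flux distribution, here [10. 10. 10.]
print(-res.fun)  # maximal objective flux: 10.0
```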
Abstract:
Driven by concerns about rising energy costs, security of supply, and climate change, a new wave of Sustainable Energy Technologies (SETs) has been embraced by the Irish consumer. Systems such as solar collectors, heat pumps, and biomass boilers have become common due to government-backed financial incentives and revisions of the building regulations. However, there is a deficit of knowledge and understanding of how these technologies operate and perform under Ireland's maritime climate. This AQ-WBL project was designed to address both needs by developing a Data Acquisition (DAQ) system to monitor the performance of such technologies and a web-based learning environment to disseminate performance characteristics and supplementary information about these systems. A DAQ system consisting of 108 sensors was developed as part of Galway-Mayo Institute of Technology's (GMIT's) Centre for the Integration of Sustainable Energy Technologies (CiSET) in an effort to benchmark the performance of solar thermal collectors and Ground Source Heat Pumps (GSHPs) under the Irish maritime climate, to research new methods of integrating these systems within the built environment, and to raise awareness of SETs. It has operated reliably for over 2 years and has acquired over 25 million data points. Awareness of these SETs is raised by disseminating the performance data through an online learning environment. The learning environment was created to give different user groups a basic understanding of SETs, supported by performance data, through a novel 5-step learning process; two examples were developed for the solar thermal collectors and the weather station, which can be viewed at http://www.kdp1.aquaculture.ie/index.aspx. This online learning environment has been demonstrated to, and well received by, different groups of GMIT's undergraduate students, and plans have been made to develop it further to support education, awareness, research, and regional development.
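A polling data-acquisition loop of the kind described might look like the following sketch (entirely hypothetical: `read_sensor`, the channel list, and the polling interval stand in for the project's actual hardware interface), which samples each channel at a fixed interval and appends timestamped readings to a CSV log:

```python
# Hypothetical sketch of a polling DAQ loop; read_sensor() stands in
# for whatever hardware driver the real CiSET system uses.
import csv
import random
import time
from datetime import datetime, timezone

CHANNELS = [f"sensor_{i:03d}" for i in range(108)]  # 108 channels, per the text
POLL_INTERVAL_S = 60.0

def read_sensor(channel: str) -> float:
    """Placeholder for the real driver call; returns a dummy reading."""
    return random.uniform(0.0, 100.0)

def acquire(log_path: str, cycles: int) -> None:
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(cycles):
            stamp = datetime.now(timezone.utc).isoformat()
            for ch in CHANNELS:
                writer.writerow([stamp, ch, read_sensor(ch)])
            f.flush()                    # keep data safe across power loss
            time.sleep(POLL_INTERVAL_S)

acquire("daq_log.csv", cycles=1)
```

At 108 channels per cycle, roughly 25 million rows correspond to about two years of minute-scale polling, consistent with the figures quoted above.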
Abstract:
Visualistics, computer science, picture syntax, picture semantics, picture pragmatics, interactive pictures
Abstract:
Magdeburg, Univ., Faculty of Computer Science, Diss., 2010
Abstract:
Magdeburg, Univ., Faculty of Computer Science, Diss., 2014
Abstract:
Magdeburg, Univ., Faculty of Computer Science, Diss., 2014
Abstract:
This paper provides empirical evidence that continuous time models with one factor of volatility, in some conditions, are able to fit the main characteristics of financial data. It also reports the importance of the feedback factor in capturing the strong volatility clustering of data, caused by a possible change in the pattern of volatility in the last part of the sample. We use the Efficient Method of Moments (EMM) by Gallant and Tauchen (1996) to estimate logarithmic models with one and two stochastic volatility factors (with and without feedback) and to select among them.
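For reference, a minimal sketch of the kind of model being estimated (standard notation assumed; the paper's exact specification may differ): a continuous-time logarithmic stochastic volatility model with one volatility factor, together with the EMM criterion of Gallant and Tauchen (1996), which minimizes a quadratic form in the expected scores of a fitted auxiliary model:

```latex
% One-factor logarithmic stochastic volatility model (sketch):
\begin{align}
  \mathrm{d}\ln P_t &= \mu\,\mathrm{d}t + \exp(v_t)\,\mathrm{d}W_{1,t},\\
  \mathrm{d}v_t     &= \kappa(\theta - v_t)\,\mathrm{d}t
                       + \sigma_v\,\mathrm{d}W_{2,t}.
\end{align}
% EMM: choose the structural parameters \rho to match the scores of an
% auxiliary (score-generator) model with density f(y \mid x; \hat\beta):
\begin{equation}
  \hat\rho = \arg\min_{\rho}\;
  m_n(\rho,\hat\beta)^{\top}\,\tilde{\mathcal{I}}^{-1}\,m_n(\rho,\hat\beta),
  \qquad
  m_n(\rho,\hat\beta)
    = \mathbb{E}_{\rho}\!\left[
        \frac{\partial \ln f(y \mid x;\hat\beta)}{\partial \beta}
      \right].
\end{equation}
```

The feedback variants discussed in the abstract add a dependence of the volatility dynamics on past returns, which is what allows the model to capture the stronger volatility clustering late in the sample.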
Abstract:
New methods of analysis of patent statistics allow assessing country profiles of technological specialization for the period 1990-2006. We witness a modest decrease in levels of specialization, which we show to be negatively influenced by country size and degree of internationalization of inventive activities.
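A standard way to quantify such specialization profiles (stated here only as an illustration; the paper's own index may differ) is the revealed technological advantage (RTA) index, which compares a country's share of patenting in a technology field with that field's share of all patenting:

```latex
% Revealed technological advantage of country i in technology field j,
% where P_{ij} is the count of patents of country i in field j:
\begin{equation}
  \mathrm{RTA}_{ij} =
  \frac{P_{ij} \,/\, \sum_{j} P_{ij}}
       {\sum_{i} P_{ij} \,/\, \sum_{i}\sum_{j} P_{ij}},
\end{equation}
% RTA_{ij} > 1 indicates that country i is specialized in field j;
% the dispersion of RTA across fields measures a country's degree
% of specialization.
```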
Abstract:
In this paper we analyze the persistence of aggregate real exchange rates (RERs) for a group of EU-15 countries by using sectoral data. The tight relation between aggregate and sectoral persistence recently investigated by Mayoral (2008) allows us to decompose aggregate RER persistence into the persistence of its different subcomponents. We show that the distribution of sectoral persistence is highly heterogeneous and very skewed to the right, and that a limited number of sectors are responsible for the high levels of persistence observed at the aggregate level. We use quantile regression to investigate whether the traditional theories proposed to account for the slow reversion to parity (lack of arbitrage due to nontradabilities, or imperfect competition and price stickiness) are able to explain the behavior of the upper quantiles of sectoral persistence. We conclude that pricing to market in the intermediate goods sector together with price stickiness have more explanatory power than variables related to the tradability of the goods or their inputs.
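To fix ideas, a minimal sketch of the decomposition in question (notation assumed; see Mayoral (2008) for the exact result): the aggregate RER is a weighted average of sectoral RERs, and under fractional integration the aggregate inherits the memory of its most persistent subcomponents:

```latex
% Aggregate real exchange rate as a weighted sum of sectoral rates:
\begin{equation}
  q_t = \sum_{i=1}^{N} w_i\, q_{i,t}, \qquad \sum_{i=1}^{N} w_i = 1.
\end{equation}
% If each q_{i,t} \sim I(d_i), the aggregate is governed by the
% largest memory parameter:
\begin{equation}
  q_t \sim I(d), \qquad d = \max_{1 \le i \le N} d_i ,
\end{equation}
% so a few highly persistent sectors can dominate aggregate persistence,
% as the skewed sectoral distribution described above suggests.
```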
Abstract:
BACKGROUND: We analysed 5-year treatment with agalsidase alfa enzyme replacement therapy in patients with Fabry's disease who were enrolled in the Fabry Outcome Survey observational database (FOS). METHODS: Baseline and 5-year data were available for up to 181 adults (126 men) in FOS. Serial data for cardiac mass and function, renal function, pain, and quality of life were assessed. Safety and sensitivity analyses were done in patients with baseline and at least one relevant follow-up measurement during the 5 years (n=555 and n=475, respectively). FINDINGS: In patients with baseline cardiac hypertrophy, treatment resulted in a sustained reduction in left ventricular mass (LVM) index after 5 years (from 71.4 [SD 22.5] g/m^2.7 to 64.1 [18.7] g/m^2.7, p=0.0111) and a significant increase in midwall fractional shortening (MFS) from 14.3% (2.3) to 16.0% (3.8) after 3 years (p=0.02). In patients without baseline hypertrophy, LVM index and MFS remained stable. Mean yearly fall in estimated glomerular filtration rate versus baseline after 5 years of enzyme replacement therapy was -3.17 mL/min per 1.73 m^2 for men and -0.89 mL/min per 1.73 m^2 for women. Average pain, measured by Brief Pain Inventory score, improved significantly, from 3.7 (2.3) at baseline to 2.5 (2.4) after 5 years (p=0.0023). Quality of life, measured by deviation scores from normal EuroQol values, improved significantly, from -0.24 (0.3) at baseline to -0.17 (0.3) after 5 years (p=0.0483). Findings were confirmed by sensitivity analysis. No unexpected safety concerns were identified. INTERPRETATION: By comparison with historical natural history data for patients with Fabry's disease who were not treated with enzyme replacement therapy, long-term treatment with agalsidase alfa leads to substantial and sustained clinical benefits. FUNDING: Shire Human Genetic Therapies AB.