957 results for Data-representation
Abstract:
The self-similar branching arrangement of the airways makes the respiratory system an ideal candidate for the application of fractional calculus theory. Fractal geometry is typically characterized by a recurrent structure. This study investigates the identification of a model for the respiratory tree by means of its electrical equivalent, based on intrinsic morphology. Measurements of respiratory impedance, in its complex representation, were obtained from seven volunteers for frequencies below 5 Hz. Parametric modeling was then applied to the complex-valued data points. Since the inertance is negligible in the low-frequency range, each airway branch is modeled using a gamma-cell resistance and capacitance, the latter being a fractional-order constant phase element (CPE), which is identified from measurements. In addition, the complex impedance is also approximated by a model consisting of a lumped series resistance and a lumped fractional-order capacitance. The results reveal that both models characterize the data well, with averaged CPE values that are supraunitary for the ladder network and subunitary for the lumped model.
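For reference, the lumped model mentioned above (a series resistance with a fractional-order capacitance, i.e., a constant phase element) has the standard impedance form Z(jw) = R + 1/(C (jw)^beta). The Python sketch below, with hypothetical parameter values, merely evaluates this model class over the sub-5 Hz range; it is an illustration of the model, not the authors' identification procedure.

    import numpy as np

    def lumped_cpe_impedance(freq_hz, R, C, beta):
        """Series resistance plus fractional-order capacitance (CPE):
        Z(jw) = R + 1 / (C * (jw)**beta); beta = 1 recovers an ideal
        capacitor."""
        jw = 1j * 2 * np.pi * np.asarray(freq_hz, dtype=float)
        return R + 1.0 / (C * jw**beta)

    # Hypothetical parameter values, for illustration only (not fitted):
    freqs = np.linspace(0.1, 5.0, 50)    # low-frequency range, below 5 Hz
    Z = lumped_cpe_impedance(freqs, R=2.0, C=0.05, beta=0.8)
    print(Z.real[:3], Z.imag[:3])        # resistance and reactance samples

Fitting R, C, and beta to the measured complex impedance then reduces to a standard nonlinear least-squares problem over the real and imaginary parts.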
Abstract:
Introduction: Hybrid imaging creates a multimodality environment that requires a greater understanding of the imaging technologies used, of the limitations of those technologies, and of how best to interpret the results, alongside dose optimization, the introduction of new techniques, and consideration of current practice versus best practice. Incidental findings in the low-dose CT images obtained as part of the hybrid imaging process are an increasing phenomenon as CT technology advances, and they raise ethical and medico-legal dilemmas; understanding the limitations of these procedures is therefore important when reporting images and recommending follow-up. A free-response observer performance study was used to evaluate lesion detection in low-dose CT images obtained during attenuation-correction acquisitions for myocardial perfusion imaging on two hybrid imaging systems.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
We show that a self-generated set of combinatorial games, S, may not be hereditarily closed, but that strong self-generation and hereditary closure are equivalent in the universe of short games. In [13], the question "Is there a set which will give a non-distributive but modular lattice?" appears. A useful necessary condition for the existence of a finite non-distributive modular L(S) is proved. We show the existence of an S such that L(S) is modular and not distributive, exhibiting the first known example. Moreover, we prove a Representation Theorem with Games that allows the generation of all finite lattices in the game context. Finally, a computational tool for drawing lattices of games is presented. (C) 2014 Elsevier B.V. All rights reserved.
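For context, the standard lattice-theoretic facts behind the quoted question are worth recalling (textbook material, not the paper's construction):

    \text{distributive: } x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z) \quad \text{for all } x, y, z
    \text{modular: } x \le z \;\Longrightarrow\; x \vee (y \wedge z) = (x \vee y) \wedge z

Every distributive lattice is modular, and by a classical theorem a modular lattice is distributive if and only if it contains no sublattice isomorphic to the diamond M3 (a least element, three pairwise incomparable atoms, and a greatest element); M3 is therefore the minimal lattice of the kind the question asks for.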
Abstract:
ABSTRACT OBJECTIVE To develop an assessment tool to evaluate the efficiency of federal university general hospitals. METHODS Data envelopment analysis, a linear programming technique, creates a best-practice frontier by comparing observed production against the resources used. The model is output-oriented and considers variable returns to scale. Network data envelopment analysis considers link variables that belong to more than one dimension (in the model: medical residents, adjusted admissions, and research projects). Dynamic network data envelopment analysis uses carry-over variables (in the model, the financing budget) to analyze frontier shifts in subsequent years. Data were gathered from the information system of the Brazilian Ministry of Education (MEC) for 2010-2013. RESULTS The mean scores for health care, teaching, and research over the period were 58.0%, 86.0%, and 61.0%, respectively. In 2012, the best-performing year, for all units to reach the frontier it would be necessary to have a mean increase of 65.0% in outpatient visits, 34.0% in admissions, 12.0% in undergraduate students, 13.0% in multi-professional residents, 48.0% in graduate students, and 7.0% in research projects, as well as a decrease of 9.0% in medical residents. In the same year, an increase of 0.9% in the financing budget would be necessary to improve the care output frontier. In the dynamic evaluation, there was progress in teaching efficiency, oscillation in medical care, and no variation in research. CONCLUSIONS The proposed model generates public health planning and programming parameters by estimating efficiency scores and making projections to reach the best-practice frontier.
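For readers unfamiliar with the technique, an output-oriented DEA model with variable returns to scale (the standard BCC formulation) solves one linear program per unit: maximize the factor phi by which the unit could radially expand its outputs while a convex combination of all units stays within its observed input levels. The Python sketch below uses made-up data and the plain static, single-stage model, not the dynamic network formulation of the study.

    import numpy as np
    from scipy.optimize import linprog

    def dea_output_vrs(X, Y, o):
        """Output-oriented, variable-returns-to-scale DEA score for unit o.
        X: (n_units, n_inputs), Y: (n_units, n_outputs). Returns phi >= 1,
        the factor by which unit o could proportionally expand its outputs;
        phi == 1 means the unit lies on the best-practice frontier."""
        n = X.shape[0]
        c = np.r_[-1.0, np.zeros(n)]                  # maximize phi
        A_in = np.c_[np.zeros(X.shape[1]), X.T]       # sum_j lam_j x_ij <= x_io
        A_out = np.c_[Y[o], -Y.T]                     # phi*y_ro <= sum_j lam_j y_rj
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)  # VRS: sum_j lam_j = 1
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[X[o], np.zeros(Y.shape[1])],
                      A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (n + 1))
        return res.x[0]

    # Hypothetical hospitals: input = budget; outputs = admissions, visits.
    X = np.array([[100.0], [120.0], [90.0]])
    Y = np.array([[500.0, 2000.0], [520.0, 2600.0], [400.0, 1500.0]])
    print([round(dea_output_vrs(X, Y, o), 3) for o in range(3)])

The reciprocal 1/phi can be reported as an efficiency score between 0 and 1, which is how mean scores such as the 58.0% above are usually expressed.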
Abstract:
This paper addresses the calculation of derivatives of fractional order for non-smooth data. The noise is avoided by adopting an optimization formulation using genetic algorithms (GA). Given the flexibility of evolutionary schemes, a hierarchical GA composed of two GAs in series, each with a distinct fitness function, is established.
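The abstract does not spell out how the fractional derivative itself is evaluated inside the GA loop. A common numerical starting point (an assumption here, not necessarily the paper's choice) is the Gruenwald-Letnikov approximation D^alpha f(t_j) ~ h^(-alpha) * sum_{k=0..j} c_k f(t_{j-k}), with c_0 = 1 and c_k = c_{k-1} (k - 1 - alpha) / k:

    def gl_fractional_derivative(f, alpha, h):
        """Gruenwald-Letnikov fractional derivative of order alpha on a
        uniform grid with step h; f holds the samples f(t_0), f(t_1), ..."""
        n = len(f)
        c = [1.0]                            # c_k = (-1)^k * binom(alpha, k)
        for k in range(1, n):
            c.append(c[-1] * (k - 1 - alpha) / k)
        return [sum(c[k] * f[j - k] for k in range(j + 1)) / h**alpha
                for j in range(n)]

    # Sanity check on smooth data: the order-1 derivative of f(t) = t is 1.
    h = 0.01
    print(gl_fractional_derivative([h * j for j in range(200)], 1.0, h)[-1])

On noisy, non-smooth data this direct evaluation amplifies the noise, which is precisely the difficulty the GA-based formulation above is designed to avoid.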
Abstract:
The morpho-structural evolution of oceanic islands results from competition between volcano growth and partial destruction by mass-wasting processes. We present here a multi-disciplinary study of the successive stages of development of Faial (Azores) during the last 1 Myr. Using a high-resolution digital elevation model (DEM) and new K/Ar, tectonic, and magnetic data, we reconstruct the rapidly evolving topography at successive stages, in response to complex interactions between volcanic construction and mass wasting, including the development of a graben. We show that: (1) the sub-aerial evolution of the island first involved the rapid growth of a large elongated volcano at ca. 0.85 Ma, followed by its partial destruction over half a million years; (2) beginning about 360 ka, a new small edifice grew on the NE of the island and was subsequently cut by the normal faults responsible for the initiation of the graben; (3) after an apparent pause of ca. 250 kyr, the large Central Volcano (CV) developed on the western side of the island at ca. 120 ka, accumulating a thick pile of lava flows in less than 20 kyr, which were partly channelized within the graben; (4) the period between 120 ka and 40 ka is marked by widespread deformation at the island scale, including westward propagation of faulting and associated erosion of the graben walls, which produced sedimentary deposits; the subsequent growth of the CV at 40 ka was then confined within the graben, with lava flowing onto the sediments up to the eastern shore; (5) the evolution of the island during the Holocene involved basaltic volcanic activity along the main southern faults and pyroclastic eruptions associated with the formation of a caldera volcano-tectonic depression. We conclude that the whole evolution of Faial Island has been characterized by successive short volcanic pulses, probably controlled by brief episodes of regional deformation. Each pulse has been separated by considerable periods of volcanic inactivity, during which the Faial graben gradually developed. We propose that the volume loss associated with sudden magma extraction from a shallow reservoir in the different episodes triggered incremental downward graben movement, as observed historically at the end of the Capelinhos eruptive crisis (1957-58), when an immediate vertical collapse of up to 2 m occurred along the western segments of the graben.
Abstract:
Conference: CONTROLO'2012 - 16-18 July 2012 - Funchal
Abstract:
In the field of appearance-based robot localization, the mainstream approach uses a quantized representation of local image features. An alternative strategy is the exploitation of raw feature descriptors, thus avoiding approximations due to quantization. In this work, the quantized and non-quantized representations are compared with respect to their discriminativity, in the context of the robot global localization problem. Having demonstrated the advantages of the non-quantized representation, the paper proposes mechanisms to reduce the computational burden this approach would carry when applied in its simplest form. This reduction is achieved through a hierarchical strategy that gradually discards candidate locations, and by exploiting two simplifying assumptions about the training data. The potential of the non-quantized representation is further exploited through the relation between entropy and discriminativity: the non-quantized representation facilitates the assessment of the distinctiveness of features through the entropy measure. Building on this, the robustness of the localization system is enhanced by modulating the importance of features according to their entropy. Experimental results support the effectiveness of this approach, as well as the validity of the proposed methods for reducing computation.
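As a concrete, purely illustrative reading of the entropy-based weighting idea: estimate, for each feature, the distribution of candidate locations it matches, compute the Shannon entropy of that distribution, and down-weight features whose matches are spread near-uniformly over locations. The helper below is a hypothetical sketch, not the paper's exact scheme.

    import math

    def feature_weight(match_counts):
        """match_counts[i]: how often the feature matched location i.
        Distinctive features (low entropy) get weights near 1; ambiguous,
        near-uniform features (high entropy) get weights near 0."""
        total = sum(match_counts)
        probs = [c / total for c in match_counts if c > 0]
        entropy = -sum(p * math.log(p) for p in probs)
        max_entropy = math.log(len(match_counts))  # uniform over locations
        return 1.0 - entropy / max_entropy if max_entropy > 0 else 1.0

    print(feature_weight([9, 1, 0, 0]))  # distinctive -> ~0.77
    print(feature_weight([3, 3, 2, 2]))  # ambiguous  -> ~0.01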
Abstract:
Many learning problems require handling high-dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality and need to address it in order to be effective. Examples of such data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many are irrelevant (or even detrimental) to the learning task. There is thus a clear need for adequate techniques for feature representation, reduction, and selection, to improve both the classification accuracy and the memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium- and high-dimensional datasets. Experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related approaches.
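As a purely illustrative sketch of such a two-stage pipeline (the paper's actual discretization and selection criteria are more elaborate): discretize each feature without using labels, then rank features by how informatively their values spread over the discrete levels. The equal-width rule, the bin count, and the entropy-based ranking below are all assumptions made for the example.

    import numpy as np

    def equal_width_discretize(X, n_bins=8):
        """Unsupervised discretization: split each feature's range into
        n_bins equal-width intervals and map values to bin indices."""
        mins, maxs = X.min(axis=0), X.max(axis=0)
        widths = np.where(maxs > mins, (maxs - mins) / n_bins, 1.0)
        return np.minimum(((X - mins) / widths).astype(int), n_bins - 1)

    def rank_by_histogram_entropy(Xd, n_bins=8):
        """Crude unsupervised relevance proxy: features whose values spread
        over many bins (high histogram entropy) rank first; near-constant
        features rank last."""
        scores = []
        for j in range(Xd.shape[1]):
            p = np.bincount(Xd[:, j], minlength=n_bins) / Xd.shape[0]
            p = p[p > 0]
            scores.append(-(p * np.log(p)).sum())
        return np.argsort(scores)[::-1]

    rng = np.random.default_rng(0)
    spread = rng.uniform(size=(200, 1))                  # well-spread feature
    spiky = np.where(rng.uniform(size=(200, 2)) < 0.95,  # nearly constant
                     0.0, rng.uniform(size=(200, 2)))
    Xd = equal_width_discretize(np.hstack([spread, spiky]))
    print(rank_by_histogram_entropy(Xd))  # feature 0 should rank first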
Abstract:
Data analytic applications are characterized by large data sets that are subject to a series of processing phases. Some of these phases are executed sequentially, but others can be executed concurrently or in parallel on clusters, grids, or clouds. The MapReduce programming model has been applied to process large data sets in cluster and cloud environments. Developing an application with MapReduce requires installing, configuring, and accessing specific frameworks such as Apache Hadoop or Elastic MapReduce in the Amazon Cloud. It would be desirable to have more flexibility in adjusting such configurations to the application characteristics. Furthermore, composing the multiple phases of a data analytic application requires specifying all the phases and their orchestration. The original MapReduce model and environment lack flexible support for such configuration and composition. Recognizing that scientific workflows have been successfully applied to modeling complex applications, this paper describes our experiments on implementing MapReduce as subworkflows in the AWARD framework (Autonomic Workflow Activities Reconfigurable and Dynamic). A text mining data analytic application is modeled as a complex workflow with multiple phases, where individual workflow nodes support MapReduce computations. As in typical MapReduce environments, the end user only needs to define the application algorithms for input data processing and for the map and reduce functions. The paper presents experimental results from using the AWARD framework to execute MapReduce workflows deployed over multiple Amazon EC2 (Elastic Compute Cloud) instances.
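For concreteness, the canonical user-supplied pair for a text-mining workload is word count. The driver below is a minimal local stand-in for the workflow engine, written only to show which parts the end user defines; it is not AWARD's actual API.

    from collections import defaultdict

    def map_fn(line):
        """Map: emit a (word, 1) pair for every word in one input line."""
        for word in line.lower().split():
            yield word, 1

    def reduce_fn(word, counts):
        """Reduce: sum the partial counts collected for one word."""
        return word, sum(counts)

    def run_mapreduce(lines):
        """Minimal local driver: map, shuffle (group by key), reduce."""
        groups = defaultdict(list)
        for line in lines:
            for key, value in map_fn(line):
                groups[key].append(value)
        return dict(reduce_fn(k, v) for k, v in groups.items())

    print(run_mapreduce(["the cat sat", "the cat ran"]))
    # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}

In a workflow setting, each of the map, shuffle, and reduce steps would run as a node (or subworkflow) that can be configured and reconfigured independently.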
Abstract:
Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, and are able to act as pre-processors for computationally intensive methods, focusing their attention on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, with lower generalization error than state-of-the-art techniques, while being dramatically simpler and faster.
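A generic low-complexity filter of the kind described might rank features by absolute correlation with the label and then greedily keep a feature only if it is not too correlated with any feature already kept. The sketch below, including the threshold value, is an illustration of the relevance/redundancy idea under those assumptions, not the paper's specific criteria.

    import numpy as np

    def relevance_redundancy_filter(X, y, k, redundancy_threshold=0.9):
        """Supervised filter: relevance = |corr(feature, label)|;
        redundancy = |corr(feature, already-selected feature)|.
        No search over feature subsets, hence low complexity."""
        d = X.shape[1]
        relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(d)])
        selected = []
        for j in np.argsort(-relevance):       # most relevant first
            if all(abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                   < redundancy_threshold for s in selected):
                selected.append(j)
            if len(selected) == k:
                break
        return selected

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 20))
    X[:, 5] = X[:, 3] + 0.01 * rng.normal(size=300)  # near-copy of feature 3
    y = X[:, 3] + X[:, 7] + 0.1 * rng.normal(size=300)
    print(relevance_redundancy_filter(X, y, k=3))    # keeps 3 or 5, not both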
Abstract:
ABSTRACT OBJECTIVE To estimate the prevalence of arterial hypertension and obesity and the population attributable fraction of hypertension that is due to obesity in Brazilian adolescents. METHODS Data from participants in the Brazilian Study of Cardiovascular Risks in Adolescents (ERICA), the first national school-based, cross-sectional study performed in Brazil, were evaluated. The sample was divided into 32 geographical strata and clusters of schools and classes, with regional and national representation. Obesity was classified using the body mass index according to age and sex. Arterial hypertension was defined as an average systolic or diastolic blood pressure greater than or equal to the 95th percentile of the reference curve. Prevalences and 95% confidence intervals (95%CI) of arterial hypertension and obesity, both nationally and in the macro-regions of Brazil, were estimated by sex and age group, as were the fractions of hypertension attributable to obesity in the population. RESULTS We evaluated 73,399 students, 55.4% female, with an average age of 14.7 years (SD = 1.6). The prevalence of hypertension was 9.6% (95%CI 9.0-10.3); it was lowest in the North, 8.4% (95%CI 7.7-9.2), and Northeast regions, 8.4% (95%CI 7.6-9.2), and highest in the South, 12.5% (95%CI 11.0-14.2). The prevalence of obesity was 8.4% (95%CI 7.9-8.9), lower in the North region and higher in the South region. The prevalences of arterial hypertension and obesity were higher in males. Obese adolescents presented a higher prevalence of hypertension, 28.4% (95%CI 25.5-31.2), than overweight adolescents, 15.4% (95%CI 13.8-17.0), or eutrophic adolescents, 6.3% (95%CI 5.6-7.0). The fraction of hypertension attributable to obesity was 17.8%. CONCLUSIONS ERICA was the first nationally representative Brazilian study providing prevalence estimates of hypertension in adolescents. Regional and sex differences were observed. The study indicates that controlling obesity would lower the prevalence of hypertension among Brazilian adolescents by about one fifth.
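The abstract does not state the estimator behind the 17.8% attributable fraction. For reference only, the textbook population attributable fraction (Levin's formula) is

    \mathrm{PAF} = \frac{p_e(\mathrm{RR} - 1)}{1 + p_e(\mathrm{RR} - 1)}

where p_e is the prevalence of the exposure (here, obesity) and RR is the relative risk of the outcome (hypertension) in the exposed versus the unexposed. The study's own estimate additionally accounts for the complex survey design and the separate overweight stratum, so plugging the crude prevalences quoted above into this formula will not reproduce the 17.8% exactly.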
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Most traditional software and database development approaches tend to be serial, not evolutionary, and certainly not agile, especially in their data-oriented aspects. Most of the more commonly used methodologies are strict, meaning they are composed of several stages, each with very specific associated tasks. A clear example is the Rational Unified Process (RUP), divided into Business Modeling, Requirements, Analysis & Design, Implementation, Testing, and Deployment. But what happens when the need for a well-designed and structured plan meets the reality of a small starting company that aims to build an entire user-experience solution? Here, resource control and time productivity are vital, requirements are in constant change, and so is the product itself. To succeed in this environment, a highly collaborative and evolutionary development approach is mandatory, and constantly changing requirements imply an iterative development process. The project focuses on Data Warehouse development and business modeling, which is usually a tricky area: business knowledge belongs to the enterprise, and how it works, what its goals are, and what is relevant for analysis are internal business processes. Throughout this document it is explained why Agile Modeling was chosen as the development approach, and how an iterative and evolutionary methodology allowed for reasonable planning and documentation while permitting development flexibility, from idea to product. More importantly, it shows how this was applied to the development of a Retail-Focused Data Warehouse: a productized Data Warehouse built on the knowledge of not one but several clients' needs, one that aims not just to store the usual business areas but to create an innovative set of business metrics by joining them with store-environment analysis, converting Business Intelligence into Actionable Business Intelligence.