939 results for Applied informatics
Abstract:
Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition based on the Bag of Features (BoF) model. An extensive technical investigation was conducted to identify and optimize the best-performing components of the BoF architecture and to estimate the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset of nearly 5,000 food images was created and organized into 11 classes. The optimized system computes dense local features using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10,000 visual words using hierarchical k-means clustering, and finally classifies the food images with a linear support vector machine classifier. The system achieved a classification accuracy of about 78%, demonstrating the feasibility of the proposed approach on a very challenging image dataset.
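As a rough illustration of the quantization step at the heart of a BoF pipeline (not the authors' implementation, which uses dense SIFT descriptors, a 10,000-word hierarchical k-means dictionary, and a linear SVM), the sketch below assigns local descriptors to their nearest visual word and builds a normalized word histogram; the toy 2-D descriptors and function names are illustrative:

```python
import math
from collections import Counter

def nearest_word(descriptor, vocabulary):
    # Assign one local descriptor to its closest visual word (Euclidean).
    return min(range(len(vocabulary)),
               key=lambda i: math.dist(descriptor, vocabulary[i]))

def bof_histogram(descriptors, vocabulary):
    # Represent an image as a normalized histogram of visual-word counts:
    # the fixed-length feature vector a classifier such as a linear SVM
    # would be trained on.
    counts = Counter(nearest_word(d, vocabulary) for d in descriptors)
    return [counts[i] / len(descriptors) for i in range(len(vocabulary))]
```

With real images, `descriptors` would be dense SIFT vectors and `vocabulary` the cluster centres learned by hierarchical k-means.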
Abstract:
Seventy percent of the population in Myanmar lives in rural areas. Although health workers are adequately trained, they are overburdened by understaffing and insufficient supplies. The literature confirms that information and communication technologies can extend the reach of healthcare. In this paper, we present an SMS-based social network that aims to help health workers interact with other medical professionals through topic-based message delivery. Topics describe users' interests and the content of messages, and a message is delivered by matching its content with user interests. Users describe topics as ICD-10 codes, a comprehensive medical taxonomy. In this ICD-10-coded SMS scheme, a set of prearranged codes provides a common language for users to send structured information that fits inside an SMS.
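A minimal sketch of the topic-based delivery idea (hypothetical data structures, not the actual system): each user subscribes to a set of ICD-10 codes, and a message is delivered to every user whose interests intersect the codes attached to the message:

```python
def deliver(message_codes, subscriptions):
    # subscriptions: user -> set of ICD-10 codes the user is interested in.
    # A message tagged with ICD-10 codes reaches every user whose interest
    # set overlaps the message's code set.
    msg = set(message_codes)
    return sorted(user for user, interests in subscriptions.items()
                  if msg & interests)
```

A production system would also handle the ICD-10 hierarchy (e.g. chapter-level prefixes), which this exact-match sketch omits.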
Abstract:
The widespread deployment of wireless mobile communications enables almost permanent use of portable devices, which places high demands on their batteries. Indeed, battery lifetime is becoming one of the most critical factors in end-user satisfaction with wireless communications. In this work, the optimized power save algorithm for continuous media applications (OPAMA) is proposed, aiming to enhance energy efficiency on end-user devices. By combining application-specific requirements with data aggregation techniques, OPAMA improves on the performance of the standard IEEE 802.11 legacy Power Save Mode (PSM). The algorithm uses feedback on the end-user's expected quality to establish a proper trade-off between energy consumption and application performance. OPAMA was assessed in the OMNeT++ simulator, using real traces of variable-bitrate video streaming applications, and in a real testbed employing a novel methodology intended to accurately evaluate the video Quality of Experience (QoE) perceived by end-users. The results revealed OPAMA's capability to enhance energy efficiency without degrading the end-user's observed QoE, achieving savings of up to 44% compared with the IEEE 802.11 legacy PSM.
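To illustrate the kind of trade-off OPAMA targets (a hypothetical sketch, not the published algorithm), the function below counts radio wake-ups when frames are buffered and sent in aggregated bursts, waking only when enough bytes have accumulated or the oldest buffered frame hits an application-quality delay deadline:

```python
def wakeups(arrival_ms, sizes_bytes, burst_bytes, max_delay_ms):
    # Count radio wake-ups under a simple aggregation policy: wake when
    # burst_bytes have accumulated (energy-efficient bursting) or when the
    # oldest frame has waited max_delay_ms (the quality deadline).
    # Simplification: the deadline is only checked on frame arrivals.
    wake, buffered, oldest = 0, 0, None
    for t, size in zip(arrival_ms, sizes_bytes):
        if oldest is None:
            oldest = t
        buffered += size
        if buffered >= burst_bytes or t - oldest >= max_delay_ms:
            wake += 1
            buffered, oldest = 0, None
    return wake
```

Per-frame delivery would wake the radio once per frame; aggregation cuts that count at the cost of bounded extra delay, which is the trade-off the QoE feedback is meant to tune.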
Abstract:
Content Distribution Networks (CDNs) are mandatory components of modern web architectures, with plenty of vendors offering their services. Despite the field's maturity, new paradigms and architecture models are still being developed in this area. Cloud computing, on the other hand, is a more recent concept that has expanded extremely quickly, with new services being regularly added to cloud management software suites such as OpenStack. The main contribution of this paper is the architecture and development of an open-source CDN that can be provisioned in an on-demand, pay-as-you-go model, thereby enabling the CDN-as-a-Service (CDNaaS) paradigm. We describe our experience with the integration of the CDNaaS framework in a cloud environment, as a service for enterprise users. We emphasize the flexibility and elasticity of such a model, with each CDN instance being delivered on demand and associated with personalized caching policies as well as an optimized choice of Points of Presence based on the exact requirements of an enterprise customer. Our development is based on the framework developed in the Mobile Cloud Networking EU FP7 project, which offers its enterprise users a common framework to instantiate and control services. CDNaaS is one of the core support components in this project and is tasked with delivering different types of multimedia content to several thousand geographically distributed users. It integrates seamlessly into the MCN service life-cycle and as such enjoys all the benefits of a common design environment, allowing for improved interoperability with the rest of the services within the MCN ecosystem.
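For illustration only, the choice of Points of Presence for a customer could be framed as a set-cover problem; the greedy sketch below is a hypothetical example, not the MCN framework's actual placement logic:

```python
def choose_pops(required_regions, pop_coverage):
    # Greedy set cover: repeatedly pick the Point of Presence that covers
    # the most still-uncovered regions required by the customer.
    # pop_coverage: PoP name -> set of regions it can serve.
    remaining = set(required_regions)
    chosen = []
    while remaining:
        best = max(pop_coverage,
                   key=lambda p: len(pop_coverage[p] & remaining))
        if not pop_coverage[best] & remaining:
            break  # no PoP covers any of the remaining regions
        chosen.append(best)
        remaining -= pop_coverage[best]
    return chosen
```

Greedy set cover is a standard heuristic; a real placement engine would also weigh latency, cost, and capacity, which this sketch ignores.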
Abstract:
There is interest in the potential of companion animal surveillance to provide data to improve pet health and to give early warning of environmental hazards to people. We implemented a companion animal surveillance system in Calgary, Alberta and the surrounding communities. Informatics technologies automatically extracted electronic medical records from participating veterinary practices and identified cases of enteric syndrome in the warehoused records. The data were analysed using time-series analyses and a retrospective space-time permutation scan statistic. We identified a seasonal pattern in reports of enteric syndromes in companion animals and four statistically significant clusters of enteric syndrome cases. The cases within each cluster were examined, and information was collected about the animals involved (species, age, sex), their vaccination history, possible exposure or risk-behaviour history, disease severity, and the aetiological diagnosis. We then assessed whether the cases within each cluster were unusual and whether they represented an animal or public health threat. There was often insufficient information recorded in the medical record to characterize the clusters by aetiology or exposure. Space-time analysis of companion animal enteric syndrome cases found evidence of clustering. Collection of more epidemiologically relevant data would enhance the utility of practice-based companion animal surveillance.
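As a deliberately simple stand-in for the study's analyses (the actual work used time-series methods and a space-time permutation scan statistic), the sketch below flags weeks whose case counts exceed a mean-plus-z-standard-deviations threshold:

```python
import statistics

def flag_weeks(weekly_counts, z=2.0):
    # Flag indices of weeks whose case count exceeds the series mean plus
    # z sample standard deviations: a crude aberration-detection rule.
    mean = statistics.fmean(weekly_counts)
    sd = statistics.stdev(weekly_counts)
    return [i for i, c in enumerate(weekly_counts) if c > mean + z * sd]
```

Unlike a scan statistic, this rule ignores spatial information and seasonality; it only illustrates the idea of flagging unusual counts in a surveillance series.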
Abstract:
Dicto is a declarative language for specifying architectural rules using a single uniform notation. Once defined, rules can automatically be validated using adapted off-the-shelf tools.
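Dicto's own notation is not reproduced here; as a hedged illustration of the general idea of declaratively specified, automatically checked architectural rules, the sketch below validates hypothetical "cannot-depend-on" rules against a dependency map:

```python
def violations(dependencies, forbidden):
    # dependencies: component -> set of components it depends on.
    # forbidden:    (source, target) pairs a rule says must not occur.
    # Returns the rule pairs that the dependency map actually violates.
    return [(s, t) for s, t in forbidden if t in dependencies.get(s, set())]
```

A tool like Dicto would additionally parse the rules from a uniform textual notation and dispatch checking to adapted off-the-shelf analysers; this sketch covers only the validation step.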
Abstract:
Quality data are not only relevant for successful data warehousing and business intelligence applications; they are also a precondition for the efficient and effective use of Enterprise Resource Planning (ERP) systems. ERP professionals in all kinds of businesses are concerned with data quality issues, as a survey conducted by the Institute of Information Systems at the University of Bern has shown. Using results from this survey, this paper demonstrates why data quality problems can occur in modern ERP systems and suggests how ERP researchers and practitioners can handle data quality issues in an ERP software environment.
Abstract:
miRNAs regulate gene expression through post-transcriptional mechanisms and thereby potentially regulate multiple aspects of physiology and development. Whole-transcriptome analysis has been conducted on the citron kinase (Cit-k) mutant rat, a mutant that shows decreased brain growth and development. The resulting RNA differences between mutants and wild-type controls can be used to identify genetic pathways that may be regulated differently in normal versus abnormal neurogenesis. The goal of this thesis was to verify, with quantitative reverse transcriptase polymerase chain reaction (qRT-PCR), changes in miRNA expression between Cit-k mutants and wild types. In addition to confirming miRNA expression changes, the bioinformatics software TargetScan 5.1 was used to identify potential mRNA targets of the differentially expressed miRNAs. The miRNAs confirmed to change were rno-miR-466c, mmu-miR-493, mmu-miR-297a, hsa-miR-765, and hsa-miR-1270. The TargetScan analysis revealed 347 potential targets with known roles in development. A subset of these potential targets includes genes involved in the Wnt signaling pathway, which is known to be an important regulator of stem cell development.
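qRT-PCR expression changes are commonly quantified with the 2^-ΔΔCt (Livak) method; assuming that convention (the abstract does not specify the exact calculation used), a minimal sketch:

```python
def fold_change(ct_target_mut, ct_ref_mut, ct_target_wt, ct_ref_wt):
    # Relative expression by the 2^-ddCt (Livak) method, from the Ct values
    # of the target miRNA and a reference gene in mutant and wild-type
    # samples. Lower Ct means more template, hence the negative exponent.
    ddct = (ct_target_mut - ct_ref_mut) - (ct_target_wt - ct_ref_wt)
    return 2 ** -ddct
```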
Abstract:
Following up genetic linkage studies to identify the underlying susceptibility gene(s) for complex disease traits is an arduous yet biologically and clinically important task. Complex traits, such as hypertension, are considered polygenic with many genes influencing risk, each with small effects. Chromosome 2 has been consistently identified as a genomic region with genetic linkage evidence suggesting that one or more loci contribute to blood pressure levels and hypertension status. Using combined positional candidate gene methods, the Family Blood Pressure Program has concentrated efforts in investigating this region of chromosome 2 in an effort to identify underlying candidate hypertension susceptibility gene(s). Initial informatics efforts identified the boundaries of the region and the known genes within it. A total of 82 polymorphic sites in eight positional candidate genes were genotyped in a large hypothesis-generating sample consisting of 1640 African Americans, 1339 whites, and 1616 Mexican Americans. To adjust for multiple comparisons, resampling-based false discovery adjustment was applied, extending traditional resampling methods to sibship samples. Following this adjustment for multiple comparisons, SLC4A5, a sodium bicarbonate transporter, was identified as a primary candidate gene for hypertension. Polymorphisms in SLC4A5 were subsequently genotyped and analyzed for validation in two populations of African Americans (N = 461; N = 778) and two of whites (N = 550; N = 967). Again, SNPs within SLC4A5 were significantly associated with blood pressure levels and hypertension status. 
While these results do not identify a single causal DNA sequence variation significantly associated with blood pressure levels and hypertension status across all samples, they further implicate SLC4A5 as a candidate hypertension susceptibility gene, validating previous evidence for one or more genes on chromosome 2 that influence hypertension-related phenotypes in the population at large. The methodology and results reported provide a case study of one approach for following up the results of genetic linkage analyses to identify genes influencing complex traits.
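Resampling-based multiple-testing adjustments build on permutation p-values; as a simplified illustration of that primitive (not the sibship-extended procedure used in the study), a one-sided permutation test for a difference in group means:

```python
import random

def permutation_pvalue(x, y, n_perm=999, seed=0):
    # One-sided permutation test for mean(x) > mean(y): shuffle the pooled
    # observations and count resampled mean differences at least as large
    # as the one observed.
    rng = random.Random(seed)
    observed = sum(x) / len(x) - sum(y) / len(y)
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if sum(px) / len(px) - sum(py) / len(py) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Resampling-based false discovery adjustment applies this kind of resampling jointly across all tested polymorphisms, and the study further constrained the resampling to respect sibship structure.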
Abstract:
Clinical text understanding (CTU) is of interest to health informatics because critical clinical information, frequently represented as unconstrained text in electronic health records, is extensively used by human experts to guide clinical practice and decision making and to document the delivery of care, yet is largely unusable by information systems for queries and computations. Recent initiatives advocating translational research call for technologies that can integrate structured clinical data with unstructured data, provide a unified interface to all data, and contextualize clinical information for reuse in the multidisciplinary and collaborative environment envisioned by the CTSA program. This implies that technologies for processing and interpreting clinical text should be evaluated not only in terms of their validity and reliability in their intended environment, but also in light of their interoperability and their ability to support information integration and contextualization in a distributed and dynamic environment. This vision adds a new layer of information-representation requirements that must be accounted for when planning the implementation or acquisition of clinical text processing tools and technologies for multidisciplinary research. On the other hand, electronic health records frequently contain unconstrained clinical text with high variability in the use of terms and documentation practices, and without commitment to the grammatical or syntactic structure of the language (e.g., triage notes, physician and nurse notes, chief complaints). This hinders the performance of natural language processing technologies, which typically rely heavily on the syntax and grammatical structure of the text.
This document introduces our method for transforming the unconstrained clinical text found in electronic health information systems into a formal (computationally understandable) representation that is suitable for querying, integration, contextualization, and reuse, and that is resilient to the grammatical and syntactic irregularities of clinical text. We present our design rationale, our method, and the results of an evaluation on chief complaints and triage notes from 8 different emergency departments in Houston, Texas. Finally, we discuss the significance of our contribution in enabling the use of clinical text in a practical bio-surveillance setting.
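As a deliberately simplified illustration of syntax-independent processing of telegraphic triage text (not the formal-representation method this document describes), the sketch below maps a free-text chief complaint to hypothetical syndrome categories by bag-of-tokens lookup, ignoring grammar entirely:

```python
import re

SYNDROME_KEYWORDS = {  # hypothetical, illustrative keyword sets
    "gastrointestinal": {"vomiting", "diarrhea", "nausea"},
    "respiratory": {"cough", "sob", "wheezing"},
}

def classify_complaint(text):
    # Tokenize without assuming any grammatical or punctuation conventions,
    # then match tokens against each syndrome's keyword set.
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return sorted(s for s, kw in SYNDROME_KEYWORDS.items() if tokens & kw)
```

Because nothing here depends on sentence structure, the approach degrades gracefully on fragments like "pt c/o nausea x2 days", which is the property the document argues clinical text processing needs.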
Abstract:
This article presents the progress of a Master's thesis in Applied Information Technology in Education at the Facultad de Informática of the UNLP, on the topic "Digital accessibility for users with visual impairments and its relationship to virtual learning spaces". It addresses digital accessibility from the selected theoretical framework and outlines the axes of analysis within the context of using technologies as tools that support cognition. The thesis proposal and its first results are presented. A first comparison is made, discussing the advantages and disadvantages of digital spaces when accessed through screen readers, which makes it possible to establish lines of future work.
Abstract:
Characterization of the diets of upper-trophic predators is a key ingredient in management, including the development of ecosystem-based fishery management plans, conservation efforts for top predators, and ecological and economic modeling of predator-prey interactions. The California Current Predator Diet Database (CCPDD) synthesizes data from published records of predator food habits over the past century. The database includes diet information for more than 100 upper-trophic-level predator species, based on over 200 published citations from the California Current region of the Pacific Ocean, ranging from Baja, Mexico to Vancouver Island, Canada. We include diet data for all predators that consume forage species: seabirds, cetaceans, pinnipeds, bony and cartilaginous fishes, and a predatory invertebrate; the data represent seven discrete geographic regions within the CCS (Canada, WA, OR, CA-n, CA-c, CA-s, Mexico). The database is organized around predator-prey links, each representing an occurrence of a predator eating a prey item or group of prey items. Here we present synthesized data on the occurrence of 32 forage species (see Table 2 in the affiliated paper) in the diets of pelagic predators (currently submitted to Ecological Informatics). Future versions of the shared data will include diet information for all prey items consumed, not just the forage species of interest.
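A minimal sketch of querying predator-prey link records of the kind the CCPDD is organized around (the field names and example species here are illustrative, not the database's actual schema):

```python
def predators_of(links, prey_species):
    # links: one record per observed predator-prey occurrence, e.g.
    # {"predator": ..., "prey": ..., "region": ...} (illustrative fields).
    # Returns the distinct predators recorded eating the given prey.
    return sorted({rec["predator"] for rec in links
                   if rec["prey"] == prey_species})
```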
Abstract:
The present data set was used as a training set for a habitat suitability model. It contains occurrences (presence-only) of living Lophelia pertusa reefs on the Irish continental margin, assembled from databases, cruise reports, and publications. A total of 4423 records were inspected and quality-assessed to ensure that they (1) represented confirmed living L. pertusa reefs (excluding 2900 records of dead coral and isolated coral colonies); (2) were derived from sampling equipment that allows accurate (<200 m) geo-referencing (excluding 620 records derived mainly from trawling and dredging activities); and (3) were not duplicated. A total of 245 occurrences were retained for the analysis. Coral observations are highly clustered in regions targeted by research expeditions, which might lead to falsely inflated model evaluation measures (Veloz, 2009). Therefore, we coarsened the distribution data by deleting all but one record within grid cells of 0.02° resolution (Davies & Guinotte 2011). The remaining 53 points were subjected to a spatial cross-validation process: a random presence point was chosen, grouped with its 12 closest neighbouring presence points based on Euclidean distance, and withheld from model training. This process was repeated for all records, resulting in 53 replicates of spatially non-overlapping sets of test (n=13) and training (n=40) data. The final 53 occurrence records were used for model training.
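The two spatial preprocessing steps described above (keeping one record per 0.02° grid cell, then withholding each point together with its 12 nearest neighbours as a test set) can be sketched as follows; this is a simplified illustration, not the original tooling:

```python
import math

def thin_by_grid(points, cell=0.02):
    # Keep only the first record falling in each cell-degree grid cell,
    # reducing the sampling bias of spatially clustered observations.
    kept, seen = [], set()
    for lon, lat in points:
        key = (math.floor(lon / cell), math.floor(lat / cell))
        if key not in seen:
            seen.add(key)
            kept.append((lon, lat))
    return kept

def spatial_cv_folds(points, k=12):
    # For each point, withhold it plus its k nearest neighbours (Euclidean)
    # as the test set; the remaining points form that replicate's training set.
    folds = []
    for p in points:
        ranked = sorted(range(len(points)),
                        key=lambda j: math.dist(points[j], p))
        test = set(ranked[:k + 1])  # the point itself plus k neighbours
        train = [j for j in range(len(points)) if j not in test]
        folds.append((sorted(test), train))
    return folds
```

With 53 thinned points and k=12, each replicate has 13 test and 40 training records, matching the scheme described. (Note the sketch uses planar distance on lon/lat; over the small extents involved in thinning this is a common simplification.)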