885 results for Incremental Information-content
Abstract:
The fisheries for mackerel scad, Decapterus macarellus, are particularly important in Cape Verde, constituting almost 40% of total catches at the peak of the fishery in 1997 and 1998 (about 3,700 tonnes). Catches have been stable at a much lower level of about 2,100 tonnes in recent years. Given the importance of mackerel scad in terms of catch weight and local food security, there is an urgent need for an updated assessment. Stock assessment was carried out using a Bayesian approach to biomass dynamic modelling. To tackle the problem of a non-informative CPUE series, the intrinsic rate of increase, r, was estimated separately, and the ratio B0/K, initial biomass relative to carrying capacity, was assumed based on available information. The results indicated that the current level of fishing is sustainable. The probability of collapse is low, particularly in the short term, and biomass is likely to increase further above BMSY, indicating a healthy stock level. It appears relatively safe to increase catches even up to 4,000 tonnes. However, the marginal posterior of r was almost identical to the prior, indicating relatively low information content in the CPUE series; the same was true for B0/K. There have been substantial increases in fishing efficiency that have not been adequately captured by the measure used for effort (days or trips), implying that the results may be overly optimistic and should be considered preliminary.
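For context, biomass dynamic (surplus-production) models of the kind used above describe the stock with a single production equation; a minimal sketch in the Schaefer form, which is assumed here since the abstract does not state the exact formulation:

\[
B_{t+1} = B_t + r\,B_t\left(1 - \frac{B_t}{K}\right) - C_t,
\qquad B_{\mathrm{MSY}} = \frac{K}{2},
\qquad \mathrm{MSY} = \frac{rK}{4},
\]

where B_t is the biomass and C_t the catch in year t, r the intrinsic rate of increase and K the carrying capacity. Under this form the stock is most productive at half of carrying capacity, which is why biomass above BMSY signals a healthy stock level.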
Abstract:
Arachis pintoi and A. repens are legumes with high forage value that are used to feed ruminants in mixed-pasture (consortium) systems. Not only do they increase the persistence and quality of pastures, they are also used as ornamentals and for green cover. The objective of this study was to analyze microsatellite markers in order to assess the genetic diversity of 65 forage peanut germplasm accessions in section Caulorrhizae of the genus Arachis from the Jequitinhonha, São Francisco and Paranã River valleys of Brazil. Fifty-seven accessions of A. pintoi and eight of A. repens were analyzed using 17 microsatellites, and the observed heterozygosity (HO), expected heterozygosity (HE), number of alleles per locus, discriminatory power, and polymorphism information content were estimated. Ten loci (58.8%) were polymorphic, and 125 alleles were found in total. HE ranged from 0.30 to 0.94, and HO from 0.03 to 0.88. Bayesian analysis differentiated the accessions into three gene pools. Neither the unweighted pair group method with arithmetic mean nor a neighbor-joining analysis clustered samples by species, origin, or collection area. These results reveal a very weak genetic structure without defined clusters and a high degree of similarity between the two species.
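As an aside, the diversity statistics named above follow directly from allele frequencies; a minimal sketch of the two standard formulas, where the function names and example frequencies are illustrative rather than taken from the study:

# Expected heterozygosity (HE) and polymorphism information content (PIC)
# computed from allele frequencies at one microsatellite locus.

def expected_heterozygosity(freqs):
    # HE = 1 - sum(p_i^2), assuming Hardy-Weinberg proportions
    return 1.0 - sum(p * p for p in freqs)

def pic(freqs):
    # PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2 (Botstein et al., 1980)
    square_sum = sum(p * p for p in freqs)
    cross = sum(2 * freqs[i] ** 2 * freqs[j] ** 2
                for i in range(len(freqs))
                for j in range(i + 1, len(freqs)))
    return 1.0 - square_sum - cross

if __name__ == "__main__":
    freqs = [0.5, 0.3, 0.2]          # hypothetical allele frequencies
    print(f"HE  = {expected_heterozygosity(freqs):.3f}")  # 0.620
    print(f"PIC = {pic(freqs):.3f}")                      # 0.548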
Abstract:
The world of Computational Biology and Bioinformatics today integrates many different areas of expertise, including computer science and electronic engineering. A major aim in Data Science is the development and tuning of computational approaches to interpret the complexity of Biology. Molecular biologists and medical doctors rely heavily on interdisciplinary experts who understand the biological background and can apply algorithms to find optimal solutions to their problems. With this problem-solving orientation, I was involved in two basic research fields: Cancer Genomics and Enzyme Proteomics. What I developed and implemented can therefore be considered a general effort to support data analysis in both Cancer Genomics and Enzyme Proteomics, focusing on enzymes, which catalyse all the biochemical reactions in cells. Specifically, in Cancer Genomics I contributed to the characterization of the intratumoral immune microenvironment in gastrointestinal stromal tumours (GISTs), correlating immune cell population levels with tumour subtypes. I was also involved in setting up strategies for the evaluation and standardization of different approaches to fusion transcript detection in sarcomas that can be applied in routine diagnostics, as part of a coordinated effort of the Sarcoma working group of "Alleanza Contro il Cancro". In Enzyme Proteomics, I generated a derived database collecting all human proteins and enzymes known to be associated with genetic diseases. I curated the data search in freely available databases such as PDB, UniProt, Humsavar and ClinVar, and I was responsible for searching, updating and handling the information content, and for computing statistics. I also developed a web server, BENZ, which allows researchers to annotate an enzyme sequence with the corresponding Enzyme Commission number, the key feature fully describing the catalysed reaction. In addition, I contributed substantially to the characterization of enzyme-genetic disease associations, towards a better classification of metabolic genetic diseases.
Abstract:
In this thesis, we investigate the role of applied physics in epidemiological surveillance through the application of mathematical models, network science and machine learning. The spread of a communicable disease depends on many biological, social, and health factors. The large masses of data now available make it possible, on the one hand, to monitor the evolution and spread of pathogenic organisms and, on the other, to study the behavior of people, their opinions and their habits. We present three lines of research in which real epidemiological problems are tackled through data analysis and the use of statistical and mathematical models. In Chapter 1, we applied language-inspired Deep Learning models to transform influenza protein sequences into vectors encoding their information content. We then attempted to reconstruct the antigenic properties of different viral strains using regression models and to identify the mutations responsible for vaccine escape. In Chapter 2, we constructed a compartmental model to describe the spread of a bacterium within a hospital ward. The model was informed and validated on time series of clinical measurements, and a sensitivity analysis was used to assess the impact of different control measures. Finally, in Chapter 3, we reconstructed the network of retweets among COVID-19-themed Twitter users in the early months of the SARS-CoV-2 pandemic. By means of community detection algorithms and centrality measures, we characterized users' attention shifts in the network, showing that scientific communities, initially the most retweeted, lost influence over time to national political communities. In the Conclusion, we highlight the importance of this work in light of the main contemporary challenges for epidemiological surveillance, presenting reflections on the importance of nowcasting and forecasting, the relationship between data and scientific research, and the need to unite the different scales of epidemiological surveillance.
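To give a concrete flavour of the compartmental modelling in Chapter 2, here is a minimal sketch of a generic two-compartment (susceptible/colonized) ward model; the abstract does not specify the model's structure, so the compartments, rates and parameter values below are purely illustrative:

# Euler integration of a susceptible/colonized hospital-ward model.
# beta: cross-transmission rate, gamma: decolonization/discharge rate.

def simulate(beta=0.08, gamma=0.05, n_patients=30, colonized0=1,
             days=60, dt=0.1):
    s, c = n_patients - colonized0, float(colonized0)
    trajectory, t = [(0.0, c)], 0.0
    while t < days:
        new_col = beta * s * c / n_patients   # new colonizations per day
        decol = gamma * c                     # clearances per day
        s += dt * (decol - new_col)
        c += dt * (new_col - decol)
        t += dt
        trajectory.append((t, c))
    return trajectory

if __name__ == "__main__":
    for t, c in simulate()[::100]:            # print every ~10 days
        print(f"day {t:5.1f}: {c:5.2f} colonized patients")

In a real application the rates would be informed and validated against time series of clinical measurements, as described above, rather than fixed by hand.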
Abstract:
The aim of this study is to examine the factors that make up core competence, and how companies could best leverage their own resources and competencies through identified core competence. The theoretical section reviews how core competence has been defined in the literature and how companies can define it internally for themselves. The empirical section examines three content-provider case companies, selected on the basis of a quantitative survey conducted at the Telecom Business Research Center, and describes their competencies. The information about the companies is based on interviews with their representatives and on the interviewees' perception of their own company. This view is highly relevant to the study, because the definition of core competence is made internally within a company precisely by core actors such as those interviewed. In addition to the actual case companies, a practical case is examined in an action-oriented research section. The study and the examples discussed in it are intended to support a company's own core competence analysis throughout the process.
Abstract:
Presentation at a seminar organized by the KDK usability working group: Miten käyttäjien toiveet haastavat metatietokäytäntöjämme? / How do users' expectations challenge our metadata practices? 30 September 2014.
Abstract:
OBJECTIVES: To determine the prevalence of false or misleading statements in messages posted by internet cancer support groups and whether these statements were identified as false or misleading and corrected by other participants in subsequent postings. DESIGN: Analysis of content of postings. SETTING: Internet cancer support group Breast Cancer Mailing List. MAIN OUTCOME MEASURES: Number of false or misleading statements posted from 1 January to 23 April 2005 and whether these were identified and corrected by participants in subsequent postings. RESULTS: 10 of 4600 postings (0.22%) were found to be false or misleading. Of these, seven were identified as false or misleading by other participants and corrected within an average of four hours and 33 minutes (maximum, nine hours and nine minutes). CONCLUSIONS: Most posted information on breast cancer was accurate. Most false or misleading statements were rapidly corrected by participants in subsequent postings.
Abstract:
Information-centric networking (ICN) enables communication in isolated islands where fixed infrastructure is not available, but also supports seamless communication once the infrastructure is up and running again. In disaster scenarios, when fixed infrastructure is broken, content discovery algorithms are required to learn what content is locally available. For example, if preferred content is not available, users may be satisfied with second-best options. In this paper, we describe a new content discovery algorithm and compare it to existing depth-first and breadth-first traversal algorithms. Evaluations in mobile scenarios with up to 100 nodes show that it achieves better performance, i.e., faster discovery and lower traffic overhead, than the existing algorithms.
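For comparison, the breadth-first baseline mentioned above simply explores neighbors level by level until the requested content name is found; a minimal sketch over an abstract topology graph, where the data structures and names are illustrative and the paper's own algorithm is not reproduced:

# Breadth-first content discovery over an abstract node graph.
from collections import deque

def bfs_discover(neighbors, contents, start, name):
    # neighbors: dict node -> adjacent nodes; contents: dict node -> cached names
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if name in contents[node]:
            return node, hops                 # found after `hops` hops
        for nxt in neighbors[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None, -1                           # not locally available

if __name__ == "__main__":
    neighbors = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
    contents = {"A": set(), "B": {"/news"}, "C": set(), "D": {"/video"}}
    print(bfs_discover(neighbors, contents, "A", "/video"))   # ('D', 2)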
Abstract:
The shift from host-centric to information-centric networking (ICN) promises seamless communication in mobile networks. However, most existing works either consider well-connected networks with high node density or introduce modifications to ICN message processing for delay-tolerant networking (DTN). In this work, we present agent-based content retrieval, which provides information-centric DTN support as an application module without modifications to ICN message processing, enabling flexible interoperability in changing environments. If no content source can be found via wireless multi-hop routing, requesters may exploit the mobility of neighbor nodes (called agents) by delegating content retrieval to them. Agents that receive a delegation and move closer to content sources can retrieve the data and carry it back to the requesters. We show that agent-based content retrieval can be even more efficient in scenarios where multi-hop communication is possible. Furthermore, we show that broadcast communication is not necessarily the best option, since dynamic unicast requests have little overhead and can better exploit short contact times between nodes (no broadcast delays are required for duplicate suppression).
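To make the delegation idea concrete, here is a minimal sketch of the requester-side decision logic; all the callables (send_interest, neighbors, delegate) are hypothetical placeholders, not the paper's API:

# Requester-side logic: try multi-hop retrieval first, then delegate
# content retrieval to a mobile neighbor (agent) if nothing was found.

def retrieve(name, send_interest, neighbors, delegate, timeout=2.0):
    data = send_interest(name, timeout=timeout)  # ICN multi-hop request
    if data is not None:
        return data                              # a content source was reachable
    for agent in neighbors():                    # mobile neighbors act as agents
        if delegate(agent, name):                # agent accepts the delegation
            return "DELEGATED"                   # data arrives on a later contact
    return None                                  # no agent available; retry later

if __name__ == "__main__":
    print(retrieve("/map/tile1",
                   send_interest=lambda n, timeout: None,  # no multi-hop path
                   neighbors=lambda: ["agent7"],
                   delegate=lambda a, n: True))            # -> DELEGATED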
Abstract:
Turner-Fairbank Highway Research Center, McLean, Va.
Abstract:
Objective: This study (a) evaluated the reading ability of patients following stroke and their carers, and the reading level and the content and design characteristics of the written information provided to them; (b) explored the influence of sociodemographic and clinical characteristics on patients' reading ability; and (c) described an education package that provides well-designed information tailored to patients' and carers' informational needs. Methods: Fifty-seven patients and 12 carers were interviewed about their informational needs in an acute stroke unit. Their reading ability was assessed using the Rapid Estimate of Adult Literacy in Medicine (REALM). The written information provided to them in the acute stroke unit was analysed using the SMOG readability formula and the Suitability Assessment of Materials (SAM). Results: Thirteen (22.8%) patients and 5 (41.7%) carers had received written stroke information. The mean reading level of the materials analysed was 11th grade, while patients read at a mean of 7th-8th grade. Most materials (89%) scored as only adequate in content and design. Patients with combined aphasia read at a significantly lower level (4th-6th grade) than other patients (p = 0.001). Conclusion: Only a small proportion of patients and carers received written materials about stroke, and the readability level and the content and design characteristics of most materials required improvement. Practice implications: When developing and distributing written materials about stroke, health professionals should consider the reading ability and informational needs of the recipients, and the reading level and the content and design characteristics of the materials. A computer system can be used to generate written materials tailored to the informational needs and literacy skills of individual patients and carers.
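For reference, the SMOG formula used in the analysis estimates a reading grade from the number of polysyllabic words; a minimal sketch using McLaughlin's published constants (in practice the polysyllable count would come from a syllable counter, whereas here it is passed in directly and the sample counts are hypothetical):

import math

def smog_grade(polysyllables, sentences):
    # SMOG (McLaughlin, 1969): grade = 3.1291 + 1.0430 * sqrt(P * 30 / S),
    # where P polysyllabic (3+ syllable) words occur in S sentences.
    return 3.1291 + 1.0430 * math.sqrt(polysyllables * 30.0 / sentences)

if __name__ == "__main__":
    # Hypothetical leaflet: 45 polysyllabic words in a 30-sentence sample.
    print(f"SMOG grade: {smog_grade(45, 30):.1f}")  # ~10.1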
Abstract:
Ensuring the security of corporate information, which is increasingly stored, processed and disseminated using information and communications technologies (ICTs), has become an extremely complex and challenging activity. This is a particularly important concern for knowledge-intensive organisations, such as universities, as the effective conduct of their core teaching and research activities is becoming ever more reliant on the availability, integrity and accuracy of computer-based information resources. One increasingly important mechanism for reducing the occurrence of security breaches, and thereby protecting corporate information, is the formulation and application of a formal information security policy (InSPy). Whilst a great deal has been written about the importance and role of the information security policy, and about approaches to its formulation and dissemination, there is relatively little empirical material that explicitly addresses the structure or content of security policies. The broad aim of the study reported in this paper is to fill this gap in the literature by critically examining the structure and content of authentic information security policies, rather than simply making general prescriptions about what they ought to contain. Having established the structure and key features of the reviewed policies, the paper critically explores the underlying conceptualisation of information security embedded in them. Two important conclusions can be drawn from this study: (1) the wide diversity of disparate policies and standards in use is unlikely to foster a coherent approach to security management; and (2) the range of specific issues explicitly covered in university policies is surprisingly narrow and reflects a highly techno-centric view of information security management.
Abstract:
This paper presents the design and results of a task-based user study, grounded in Information Foraging Theory, of a novel user interaction framework for content-based image retrieval (CBIR), uInteract. The framework comprises a four-factor user interaction model and an interactive interface. The user study involves three focused evaluations, 12 simulated real-life search tasks of differing complexity, 12 comparative systems and 50 subjects. Information Foraging Theory is applied both to the design of the user study and to the quantitative data analysis. The findings not only show how effective and easy to use the uInteract framework is, but also illustrate the value of Information Foraging Theory for interpreting user interaction with CBIR.