51 results for Geometric Semantic Genetic Programming


Relevance: 20.00%

Abstract:

Sickle cell disease (SCD) is a genetic disorder with recessive transmission, caused by the mutation HBB:c.20A>T. The mutation gives rise to hemoglobin S, which forms polymers inside the erythrocyte upon deoxygenation, deforming the cell and ultimately leading to premature hemolysis. The disease presents with highly heterogeneous clinical manifestations, the most devastating of which, ischemic stroke, occurs in 11% of patients by 20 years of age. In this study, we sought to identify genetic modifiers of stroke risk and stroke events by studying 66 children with SCD, grouped according to the degree of cerebral vasculopathy (Stroke, Risk and Control). Association studies were performed between the three phenotypic groups and the patients' hematological and biochemical parameters, as well as 23 polymorphic regions in genes related to vascular cell adhesion (VCAM-1, THBS-1 and CD36), vascular tonus (NOS3 and ET-1) and inflammation (TNF-α and HMOX-1). Relevant data were collected from the patients' medical records. Known genetic modulators of SCD (beta-globin cluster haplotype and HBA and BCL11A genotypes) and putative genetic modifiers of cerebral vasculopathy were characterized, and differences in their distribution among groups were assessed. VCAM-1 rs1409419 allele C and NOS3 rs207044 allele C were associated with stroke events, while VCAM-1 rs1409419 allele T was found to be protective. Alleles 4a and 4b of the NOS3 27 bp VNTR appeared to be associated with stroke risk and protection, respectively. Longer HMOX-1 STRs seemed to predispose to stroke. Higher hemoglobin F levels were found in the Control group, as a result of the Senegal haplotype or of BCL11A rs11886868 allele T, and higher lactate dehydrogenase levels, a marker of hemolysis, were found in the Risk group. The molecular mechanisms underlying the modifier functions of the relevant genetic variants are discussed.
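
The abstract does not spell out the statistical procedure behind the association studies; below is a minimal sketch of how allele counts could be compared across the three phenotypic groups with a chi-square test of independence. The allele counts and the use of scipy are illustrative assumptions, not the study's actual data or pipeline.

```python
# Hedged sketch: chi-square test of allele distribution across the three
# phenotypic groups. Counts below are invented for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: Stroke, Risk, Control; columns: VCAM-1 rs1409419 alleles C, T
allele_counts = np.array([
    [18,  6],   # Stroke
    [14, 12],   # Risk
    [ 9, 19],   # Control
])

chi2, p_value, dof, _expected = chi2_contingency(allele_counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```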

Relevance: 20.00%

Abstract:

Driven by the growth of the internet and the semantic web, together with improvements in communication speed and the rapid growth of storage capacity, the volume of data and information rises considerably every day. Because of this, in the last few years there has been growing interest in formal representation structures with suitable characteristics, such as the ability to organize data and information and to reuse their contents for the generation of new knowledge. Controlled vocabularies, and specifically ontologies, stand out as representation structures with high potential: they not only allow data to be represented, but also enable its reuse for knowledge extraction and subsequent storage through relatively simple formalisms. However, to ensure that the knowledge in an ontology is always up to date, ontologies need maintenance. Ontology Learning is the area that studies the update and maintenance of ontologies. The literature already presents first results on the automatic maintenance of ontologies, but these are still at a very early stage; ontologies are currently updated and maintained by hand, which makes the task cumbersome. New knowledge for ontology growth can be generated with Data Mining techniques, the area that studies data processing, pattern discovery and knowledge extraction in IT systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources using Data Mining techniques, namely pattern discovery, focused on improving the precision of the concepts and the semantic relations present in an ontology. To verify the applicability of the proposed method, a proof of concept was developed and its results, applied in the building and construction sector, are presented.
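
As a concrete illustration of the kind of pattern discovery such a method relies on, the sketch below mines co-occurring domain terms from unstructured text as candidate semantic relations for an ontology, leaving confirmation to a human curator (the semi-automatic step). The corpus, vocabulary and threshold are invented assumptions, not the thesis' actual method.

```python
# Hedged sketch: co-occurrence mining of candidate concept relations.
from collections import Counter
from itertools import combinations

documents = [
    "reinforced concrete slab requires curing before load application",
    "concrete curing time depends on cement type and ambient humidity",
    "steel reinforcement protects the concrete slab against tensile stress",
]

# Domain terms already present in the ontology (illustrative)
vocabulary = {"concrete", "slab", "curing", "cement", "steel", "reinforcement"}

pair_counts = Counter()
for doc in documents:
    terms = sorted(vocabulary.intersection(doc.split()))
    pair_counts.update(combinations(terms, 2))

# Pairs seen in at least two documents become relation candidates
# that a curator may confirm before the ontology is updated.
for pair, count in pair_counts.items():
    if count >= 2:
        print(pair, "->", count, "documents")
```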

Relevance: 20.00%

Abstract:

Despite the extensive literature on finding new models to replace the Markowitz model, or on increasing the accuracy of its input estimates, there are fewer studies on how the choice of optimization algorithm affects the results. This paper aims to add to this field by comparing the performance of two optimization algorithms in drawing the Markowitz efficient frontier and in real-world investment strategies. Second-order cone programming is the faster algorithm and appears to be more efficient, but it is impossible to assert which algorithm is better, as quadratic programming often shows superior performance in real investment strategies.
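
The abstract does not show the two formulations being compared; the sketch below traces one point of the efficient frontier with both, using cvxpy. The returns, covariance matrix and target are invented for illustration; sweeping the target return over a range would draw the full frontier.

```python
# Hedged sketch: Markowitz portfolio solved as a QP (minimize variance)
# and as an SOCP (minimize standard deviation). Data are illustrative.
import numpy as np
import cvxpy as cp

mu = np.array([0.08, 0.12, 0.10])          # expected asset returns
Sigma = np.array([[0.10, 0.02, 0.04],      # covariance matrix
                  [0.02, 0.08, 0.01],
                  [0.04, 0.01, 0.09]])
target = 0.10                               # required portfolio return

w = cp.Variable(3)
constraints = [cp.sum(w) == 1, w >= 0, mu @ w >= target]

# QP formulation: minimize w' Sigma w
cp.Problem(cp.Minimize(cp.quad_form(w, Sigma)), constraints).solve()
print("QP weights:  ", np.round(w.value, 4))

# SOCP formulation: minimize ||L' w|| with Sigma = L L'
L = np.linalg.cholesky(Sigma)
cp.Problem(cp.Minimize(cp.norm(L.T @ w, 2)), constraints).solve()
print("SOCP weights:", np.round(w.value, 4))
```

The two problems have the same minimizer, which is why the comparison in the paper comes down to speed and numerical behavior rather than to the solution itself.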

Relevance: 20.00%

Abstract:

In the current global and competitive business context, it is essential that enterprises adapt their knowledge resources in order to interact and collaborate smoothly with others. However, due to the cultural diversity of people and enterprises, there are different representational views of business processes and products, even within the same domain. Consequently, one of the main problems found in the interoperability between enterprise systems and applications is related to semantics. The integration and sharing of enterprise knowledge to build a common lexicon plays an important role in the semantic adaptability of information systems. The author proposes a framework to support the development of systems that manage dynamic semantic adaptability resolution. It allows different organisations to participate in building a common knowledge base while maintaining their own views of the domain, without compromising the integration between them. Systems are thus able to become aware of new knowledge, learn from it, and manage their semantic interoperability in a dynamic and adaptable way. The author endorses the vision that, in the near future, the semantic adaptability skills of enterprise systems will be the booster of enterprise collaboration and the emergence of new business opportunities.
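
To make the idea of a common knowledge base coexisting with local views more concrete, the sketch below shows a toy mediator: each organisation keeps its own vocabulary plus a mapping into shared concepts, and new mappings can be registered as they are encountered. All names are hypothetical and not part of the author's framework.

```python
# Hedged sketch: local vocabularies mapped to a shared concept base.
shared_concepts = {"PurchaseOrder", "Invoice", "Supplier"}

# One mapping table per organisation: local term -> shared concept
mappings = {
    "acme":   {"PO": "PurchaseOrder", "bill": "Invoice", "vendor": "Supplier"},
    "globex": {"order": "PurchaseOrder", "invoice": "Invoice"},
}

def translate(org, term):
    """Resolve a local term to the shared concept, if a mapping exists."""
    concept = mappings.get(org, {}).get(term)
    return concept if concept in shared_concepts else None

def learn_mapping(org, term, concept):
    """Dynamic adaptability: register a new local term when encountered."""
    if concept in shared_concepts:
        mappings.setdefault(org, {})[term] = concept

learn_mapping("globex", "supplier_ref", "Supplier")
print(translate("acme", "PO"))              # PurchaseOrder
print(translate("globex", "supplier_ref"))  # Supplier
```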

Relevance: 20.00%

Abstract:

Based on the report for the unit "Sociology of New Information Technologies" of the Master in Computer Science at FCT/University Nova Lisbon in 2015-16. The lecturer responsible for this curricular unit is Prof. António Moniz.

Relevance: 20.00%

Abstract:

Ship tracking systems allow maritime organizations concerned with safety at sea to obtain information on the current location and route of merchant vessels. Thanks to space technology, in recent years the geographical coverage of ship tracking platforms has increased significantly, from radar-based near-shore traffic monitoring towards a worldwide picture of the maritime traffic situation. The long-range tracking systems currently in operation allow the storage of ship position data over many years: a valuable source of knowledge about the shipping routes between different ocean regions. The outcome of this Master's project is a software prototype for estimating the most operated shipping route between any two geographical locations. The analysis is based on historical ship positions acquired with long-range tracking systems. The proposed approach applies a Genetic Algorithm to a training set of relevant ship positions extracted from the long-term tracking database of the European Maritime Safety Agency (EMSA). The analysis of some representative shipping routes is presented, and the quality of the results and their operational applications are assessed by a maritime safety expert.
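
The abstract leaves the Genetic Algorithm unspecified; the sketch below illustrates the general approach under stated assumptions: candidate routes are sequences of intermediate waypoints between the two endpoints, and fitness rewards routes that pass close to many historical ship positions. The positions, bounds and GA parameters are all invented; they stand in for the EMSA data and encoding, which the abstract does not describe.

```python
# Hedged sketch: evolve waypoint sequences toward the most operated route.
import random

random.seed(42)

start, end = (38.7, -9.1), (40.6, -74.0)        # e.g. Lisbon -> New York
history = [(39.0 + random.uniform(-2, 2),        # stand-in for the tracking DB
            -40.0 + random.uniform(-30, 30)) for _ in range(500)]

N_WAYPOINTS, POP, GENERATIONS = 5, 40, 100

def random_route():
    return [(random.uniform(35, 45), random.uniform(-75, -5))
            for _ in range(N_WAYPOINTS)]

def fitness(route):
    # Reward routes whose waypoints lie within ~1 degree of historical positions
    full = [start] + route + [end]
    return sum(1 for p in history
               if any(abs(p[0] - w[0]) < 1 and abs(p[1] - w[1]) < 1 for w in full))

def crossover(a, b):
    cut = random.randrange(1, N_WAYPOINTS)
    return a[:cut] + b[cut:]

def mutate(route):
    i = random.randrange(N_WAYPOINTS)
    lat, lon = route[i]
    route[i] = (lat + random.uniform(-0.5, 0.5), lon + random.uniform(-0.5, 0.5))

population = [random_route() for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]              # truncation selection
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(POP - len(parents))]
    for child in children:
        if random.random() < 0.3:
            mutate(child)
    population = parents + children

best = max(population, key=fitness)
print("best route fitness:", fitness(best))
```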