28 results for real-life research
Abstract:
In the past decades, drug discovery practice has moved away from the complexity of the formerly used phenotypic screening in animals to focus on assessing drug effects on isolated protein targets, in the search for drugs that exclusively and potently hit one selected target, thought to be critical for a given disease, while not affecting any other target at all, so as to avoid side effects. However, reality does not conform to these expectations; on the contrary, this approach has coincided with increased attrition figures in late-stage clinical trials, precisely due to lack of efficacy and safety. In this context, a network biology perspective of human disease and treatment has burst onto the drug discovery scene to bring it back to the consideration of the complexity of living organisms, and particularly of the (patho)physiological environment where protein targets are (mal)functioning and where drugs have to exert their restoring action. Under this perspective, it has been found that there is usually not one but several disease-causing genes and, therefore, not one but several relevant protein targets to be hit, which do not work in isolation but in a highly interconnected manner, and that most known drugs are inherently promiscuous. In this light, the rationale behind the currently prevailing single-target-based drug discovery approach might even seem a utopia, while, conversely, the notion that the complexity of human disease must be tackled with complex polypharmacological therapeutic interventions constitutes a difficult-to-refuse argument that is spurring the development of multitarget therapies.
Abstract:
We have studied human movement and looked for a way to create these movements in real time in digital environments so that the work required of artists and animators is reduced. We surveyed the character animation techniques currently found in the entertainment industry, as well as the main lines of research, examining in detail the most widely used technique, motion capture. Motion capture records the movements of a person by means of optical sensors, magnetic sensors and video cameras. This information is stored in files that can later be played back by a character in real time in a digital application. Every recorded movement must be associated with a character; this is the rigging process. One of the points we worked on was the creation of a semi-automatic system for binding the skeleton to the character's mesh, reducing the animator's workload for this step. In real-time applications such as virtual reality, the environment in which characters live is increasingly simulated using Newton's laws, so that any change in a body's motion results from a force applied to it. Motion capture does not scale well to these environments, because it cannot create new, realistic animations from the recorded one that depend on interaction with the environment. The final goal of our work has been to create animations from forces in real time, just as happens in reality. To this end we introduced a muscle model and a balance system for the character, so that it can respond realistically to interactions with the simulated environment governed by Newton's laws.
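As a rough illustration of the force-driven animation idea described above, the following minimal Python sketch (not the implementation from this work) applies Newton's second law to a point-mass body and integrates it with semi-implicit Euler; all names and values are hypothetical, and ground contact is ignored.

    import numpy as np

    def step(position, velocity, mass, external_force, dt=1.0 / 60.0):
        """Advance one frame for a point-mass body: F = m * a, semi-implicit Euler."""
        gravity = np.array([0.0, -9.81, 0.0])
        acceleration = gravity + external_force / mass
        velocity = velocity + acceleration * dt      # update velocity first
        position = position + velocity * dt          # then position
        return position, velocity

    # Example: a sustained 140 N push along +x on a 70 kg character's centre of mass.
    pos, vel = np.zeros(3), np.zeros(3)
    for _ in range(60):                              # one second at 60 frames per second
        pos, vel = step(pos, vel, mass=70.0, external_force=np.array([140.0, 0.0, 0.0]))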
Abstract:
The aim of this article is to show the classical parameters of Shadowlands, directed by R. Attenborough with a screenplay by W. Nicholson, on C. S. Lewis's life and work. Based on a careful reading of Lewis's works, the author of this article proposes to interpret the Lewis / Gresham opposition as the translation into real life of the opposition between the Platonic or idealistic and the Aristotelian or materialistic temperaments already maintained by Coleridge. In any case, there are many classical references which must be taken into account in order to understand to what extent C. S. Lewis's Christianity is also a classical Christianity, that is, a Greek and Latin one.
Abstract:
Background: Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis, and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies require memory proportional to the square of the number of SNPs, so a genome-wide epistasis search would require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT that requires an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn's disease.
Results: In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions in a dataset of 100,000 SNPs typed on 1,000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) 2352 processors at 2.1 GHz. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value of less than 0.05 on real-life Crohn's disease (CD) data.
Conclusions: Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rate. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn's disease, MBMDR-3.0.3 could identify epistasis involving regions that are well known in the field and that can be explained from a biological point of view. This demonstrates the power of our software to find relevant phenotype-genotype higher-order associations.
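To illustrate why the memory footprint of maxT can be made independent of the number of tests, the following Python sketch shows the general single-step maxT idea (it is not the MBMDR-3.0.3 C++ code): only the maximum statistic per permutation is stored, and the pairwise score is a toy stand-in for the MB-MDR statistic.

    import itertools
    import numpy as np

    def pair_score(g1, g2, y):
        """Toy association score for one SNP pair (stand-in for the MB-MDR statistic)."""
        score = 0.0
        for a in range(3):
            for b in range(3):
                cell = (g1 == a) & (g2 == b)
                if cell.sum() > 2:
                    score = max(score, abs(y[cell].mean() - y.mean()) * np.sqrt(cell.sum()))
        return score

    def max_t(genotypes, trait, n_perm=999, seed=0):
        """Single-step maxT: memory grows with n_perm, not with the number of SNP pairs."""
        rng = np.random.default_rng(seed)
        pairs = list(itertools.combinations(range(genotypes.shape[1]), 2))
        observed = np.array([pair_score(genotypes[:, i], genotypes[:, j], trait)
                             for i, j in pairs])
        perm_max = np.empty(n_perm)                  # one number kept per permutation
        for p in range(n_perm):
            y = rng.permutation(trait)
            perm_max[p] = max(pair_score(genotypes[:, i], genotypes[:, j], y)
                              for i, j in pairs)
        adjusted = [(1 + (perm_max >= t).sum()) / (n_perm + 1) for t in observed]
        return pairs, observed, adjusted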
Abstract:
Any revitalisation initiative aimed at reducing tourism seasonality through the use of the heritage of the public estate of Son Real must necessarily rest on an explicit, substantive and comprehensive management policy. Above all, it must be grounded in knowledge and in an understanding of the territory, and it must take into account all the circumstances affecting the site in order to develop consistent strategies for the study, conservation and dissemination of this heritage, linking it to the demands and needs of present and future society.
Abstract:
The concept of digital literacy has evolved along several paths over time with regard to the theoretical approach used to investigate its implications for the study of the gender digital divide in various real-life contexts. The main aim of this paper is to use an interdisciplinary approach to analyse some of the theoretical and empirical gaps in the study of the gender digital divide. Some of the existing empirical studies on this question are reviewed and future lines of research are proposed, with the goal of filling some of the research gaps concerning the implications of digital literacy for the analysis of the gender digital divide.
Abstract:
The present study builds on a previous proposal for assigning probabilities to the outcomes computed with different primary indicators in single-case studies. These probabilities are obtained by comparing the outcome to previously tabulated reference values and reflect the likelihood of the results if there were no intervention effect. The current study explores how well different metrics translate into p values in the context of simulated data. Furthermore, two published multiple-baseline data sets are used to illustrate how well the probabilities reflect the intervention effectiveness as assessed by the original authors. Finally, the importance of which primary indicator is used in each data set to be integrated is explored; two ways of combining probabilities are used: a weighted average and a binomial test. The results indicate that the translation into p values works well for the two nonoverlap procedures, with the results for the regression-based procedure diverging due to some undesirable features of its performance. These p values, both individually and when combined, were well aligned with the effectiveness found in the real-life data. The results suggest that assigning probabilities can be useful for translating the primary measure into the same metric, using these probabilities as additional evidence on the importance of behavioral change, complementing visual analysis and professionals' judgments.
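As a rough illustration of the two combination rules mentioned above, the following Python sketch, using made-up per-study p values and weights, computes a weighted average of the probabilities and a binomial test on how many of them fall below a nominal alpha; it is not the original authors' code.

    import numpy as np
    from scipy.stats import binomtest

    p_values = np.array([0.02, 0.04, 0.30, 0.01])   # hypothetical per-study p values
    weights  = np.array([12, 15, 9, 20])            # e.g. series lengths, also hypothetical

    # Rule 1: weighted average of the probabilities.
    weighted_average_p = float(np.average(p_values, weights=weights))

    # Rule 2: binomial test on the count of studies below a nominal alpha.
    alpha = 0.05
    k = int((p_values < alpha).sum())
    binomial_p = binomtest(k, n=len(p_values), p=alpha, alternative="greater").pvalue

    print(weighted_average_p, binomial_p)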
Abstract:
The literature on school choice assumes that families can submit a preference list over all the schools they want to be assigned to. However, in many real-life instances families are only allowed to submit a list containing a limited number of schools. Subjects' incentives are drastically affected, as more individuals manipulate their preferences. Including a safety school in the constrained list explains most manipulations. Competitiveness across schools plays an important role. Constraining choices increases segregation and affects the stability and efficiency of the final allocation. Remarkably, the constraint significantly reduces the proportion of subjects playing a dominated strategy.
Abstract:
This research project deals with the relationship between pedagogy, translation, foreign languages and multiple intelligences. Whether translation is a useful tool in the foreign-language classroom is a current debate that many researchers are still investigating. Recent studies, however, have shown that any translation task, including work on the different language skills, is beneficial if we regard it as a means rather than an end in itself. The use of translation in the classroom clearly has advantages, but we must also bear in mind certain drawbacks. One possible drawback is the belief, held by many people at first, in word-for-word equivalence between one language and another. Yet after being given several translation tasks, students can come to control even their unconscious translations and can reach a level of precision and flexibility worth noting. The main advantage, however, is that they engage with an activity that is widespread in today's society and that combines two languages, for instance the mother tongue and the language being studied. From all this we can conclude that using the mother tongue in class should not be considered a crime, as it has been until now, but a virtue, provided it is used properly. This research project includes a synthesis of the main theories of language acquisition and learning as well as of translation theories. As to whether theories, both of translation and of foreign languages, should be taught implicitly or explicitly, it can be inferred that, depending on the learners' level of studies, it will suit them to learn the theories explicitly, or else they will learn them implicitly anyway. Since any group of students is heterogeneous, meaning that each individual has a particular pace and level of learning and, above all, different perceptual styles (visual, auditory, gustatory, olfactory, kinaesthetic) and therefore different intelligences, teachers must take this into account when planning any programme for their students. We can therefore conclude that translation tasks or projects can help students learn better and more effectively and achieve more meaningful learning.
Abstract:
The research part of this project includes a survey of the evolution of databases from their beginnings to the present day, including an outlook on their future; finally, as a more in-depth topic, multimedia databases are studied. The practical part, in turn, seeks to apply all the knowledge acquired during the research study and in the various degree courses, and aims to produce an application that can be used in the real world.
Characterization of human gene expression changes after olive oil ingestion: an exploratory approach
Abstract:
Olive oil consumption is protective against risk factors for cardiovascular disease and cancer. A nutrigenomic approach was used to assess whether changes in gene expression occur in human peripheral blood mononuclear cells after olive oil ingestion in the postprandial state. Six healthy male volunteers ingested 50 ml of olive oil in the fasting state. Prior to the intervention, a 1-week washout period with a controlled diet and sunflower oil as the only source of fat was followed. During the 3 days before and on the intervention day, a very low-phenolic-compound diet was followed. At baseline (0 h) and post-ingestion (6 h), total RNA was isolated and gene expression (29,082 genes) was evaluated by microarray. From the microarray data, nutrient-gene interactions were observed in genes related to metabolism, cellular processes, cancer, and atherosclerosis (e.g. USP48 by 2.16- and OGT by 1.68-fold change) and associated processes such as inflammation (e.g. AKAP13 by 2.30- and IL-10 by 1.66-fold change) and DNA damage (e.g. DCLRE1C by 1.47- and POLK by 1.44-fold change). When the microarray results were verified by qRT-PCR in nine genes, full concordance was achieved only for the up-regulated genes. Changes were observed at a real-life dose of olive oil, as consumed daily in some Mediterranean areas. Our results support the hypothesis that postprandial protective changes related to olive oil consumption could be mediated through gene expression changes.
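For readers unfamiliar with the fold-change figures quoted above, the following minimal Python sketch, using made-up normalised intensities, shows how a post/pre fold change and its log2 value are typically computed; it is not the study's analysis pipeline.

    import numpy as np

    baseline_0h = np.array([210.0, 198.0, 225.0])   # hypothetical normalised intensities at 0 h
    post_6h     = np.array([455.0, 430.0, 480.0])   # hypothetical normalised intensities at 6 h

    fold_change = post_6h.mean() / baseline_0h.mean()   # post/pre ratio (~2.16 for these values)
    log2_fold_change = np.log2(fold_change)
    print(round(fold_change, 2), round(log2_fold_change, 2))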
Abstract:
Dialogic learning and interactive groups have proved to be a useful methodological approach in educational settings for lifelong adult learners. The principles of this approach stress the importance of dialogue and equal participation also when designing the training activities. This paper adopts these principles as the basis for a configurable template that can be integrated into runtime systems. The template is formulated as a meta-UoL which can be interpreted by IMS Learning Design players. This template serves as a guide to flexibly select and edit the activities at runtime (on the fly). The meta-UoL has been used successfully by a practitioner to create a real-life example, with positive and encouraging results.
Abstract:
From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it is to be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process, since it uses basic "common sense" rules for the local search, perturbation, and acceptance-criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization of the initial solution to randomly generate different alternative initial solutions of similar quality, which is attained by applying a biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and thus allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. Also, the experiments show that, when using parallel computing, it is possible to improve on the top ILS-based metaheuristic simply by incorporating our biased randomization process with a high-quality pseudo-random number generator.
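As a rough sketch of the general ILS scheme discussed above (not the ILS-ESP implementation itself), the following Python code builds a makespan evaluator for the PFSP, an insertion-based local search, a simple swap perturbation, and a plain acceptance criterion; all names and parameter values are illustrative.

    import random

    def makespan(perm, proc_times):
        """proc_times[machine][job]; completion time of the last job on the last machine."""
        m = len(proc_times)
        completion = [0.0] * m
        for job in perm:
            completion[0] += proc_times[0][job]
            for k in range(1, m):
                completion[k] = max(completion[k], completion[k - 1]) + proc_times[k][job]
        return completion[-1]

    def local_search(perm, proc_times):
        """First-improvement search over the job-insertion neighbourhood."""
        best, best_cost = perm[:], makespan(perm, proc_times)
        improved = True
        while improved:
            improved = False
            for i in range(len(best)):
                for j in range(len(best)):
                    if i == j:
                        continue
                    cand = best[:]
                    cand.insert(j, cand.pop(i))
                    cost = makespan(cand, proc_times)
                    if cost < best_cost:
                        best, best_cost, improved = cand, cost, True
        return best, best_cost

    def ils(proc_times, iterations=200, seed=0):
        rng = random.Random(seed)
        n = len(proc_times[0])
        current, cur_cost = local_search(list(range(n)), proc_times)
        best, best_cost = current[:], cur_cost
        for _ in range(iterations):
            perturbed = current[:]
            i, j = rng.sample(range(n), 2)              # simple two-job swap perturbation
            perturbed[i], perturbed[j] = perturbed[j], perturbed[i]
            cand, cand_cost = local_search(perturbed, proc_times)
            if cand_cost <= cur_cost:                   # accept non-worsening solutions
                current, cur_cost = cand, cand_cost
            if cand_cost < best_cost:
                best, best_cost = cand[:], cand_cost
        return best, best_cost

    # Toy instance: 3 machines x 5 jobs.
    times = [[5, 3, 6, 2, 4], [4, 6, 2, 5, 3], [3, 2, 5, 6, 4]]
    print(ils(times))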