963 results for non-human primate
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The diaphragm muscle, found only in mammals, is the principal muscle of respiration and forms the boundary between the thoracic and abdominal cavities. It is also prominent in graft research, in which various types of biological membranes are used to repair diaphragmatic defects, which can give rise to diaphragmatic hernias. Although many studies have been conducted with non-human primates, particularly the New World species Callithrix jacchus (white-tufted marmoset), native to north-eastern Brazil, research on the diaphragm in this species is lacking. The aim of this study was therefore to characterise the morphology and biometry of the diaphragm in Callithrix jacchus of both sexes, analysing possible structural differences between males and females. Four adult animals (2 males and 2 females) that had died of natural causes were obtained from a commercial breeding facility. After fixation in 10% formaldehyde solution, the animals were dissected for photodocumentation; the diaphragm was then collected for biometry (length and width) with a calliper and for histological processing of the muscular portion using haematoxylin-eosin and Masson's trichrome staining. The measurements showed no significant differences between males and females. The topography and the presence of three openings (foramen of the caudal vena cava, aortic hiatus and oesophageal hiatus) along the diaphragm agree with classical descriptions for other mammals. The V-shaped tendinous centre differs from that found in animals such as the manatee and the guinea pig, but is similar to that of the white-eared opossum and the albino rat. Regarding the histological findings, the muscle fibres were arranged in an organised manner, with large diameters and basal nuclei, thus showing characteristics typical of striated skeletal muscle in both males and females.
Abstract:
This conference paper examines the evolutionary evidence linking humans to a brachiating ancestor, the biomechanics and neurophysiology of modern-day brachiators, and the human rediscovery of this form of locomotion. Brachiation is arguably one of the most metabolically efficient modes of travel of any organism and is exemplified most strikingly by gibbons. The purpose of the research conducted for this paper was to encourage further exploration of the neurophysiological similarities and differences between humans and non-human primates. The hope is that, by spurring more interest and research in this area, further possibilities for rehabilitation after brain injury, or even theories on how to better train our athletes, will be developed, using the biomechanics and neurophysiology of brachiation as a guide.
Abstract:
Haldane (1935) developed a method for estimating the male-to-female ratio of mutation rates ($\alpha$) using sex-linked recessive genetic disease, but in six different studies using hemophilia A data the estimates of $\alpha$ varied from 1.2 to 29.3. Direct genomic sequencing is a better approach, but it is laborious and not readily applicable to non-human organisms. To study the sex ratios of mutation rates in various mammals, I used an indirect method proposed by Miyata et al. (1987). This method takes advantage of the fact that different chromosomes segregate differently between males and females, and uses the ratios of mutation rates in sequences on different chromosomes to estimate the male-to-female ratio of mutation rate. I sequenced the last intron of the ZFX and ZFY genes in 6 species of primates and 2 species of rodents; I also sequenced partial genomic sequences of the Ube1x and Ube1y genes of mice and rats. The purposes of my study, in addition to estimating $\alpha$ in different mammalian species, are to test the hypothesis that most mutations are replication dependent and to examine the generation-time effect on $\alpha$. The $\alpha$ value estimated from the ZFX and ZFY introns of the six primate species is ${\sim}$6. This estimate is the same as an earlier estimate using only 4 species of primates, but the 95% confidence interval has been reduced from (2, 84) to (2, 33). The estimate of $\alpha$ in the rodents obtained from the Zfx and Zfy introns is ${\sim}$1.9, and that derived from the Ube1x and Ube1y introns is ${\sim}$2. Both estimates have a 95% confidence interval from 1 to 3. These two estimates are very close to each other, but are only one-third of that of the primates, suggesting a generation-time effect on $\alpha$. The $\alpha$ estimates of 6 in primates and 2 in rodents are close to the estimates of the male-to-female ratio of the number of germ-cell divisions per generation in humans and mice, which are 6 and 2, respectively, assuming a generation time of 20 years in humans and 5 months in mice. These findings suggest that errors during germ-cell DNA replication are the primary source of mutation and that $\alpha$ decreases with decreasing generation time.
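As a hedged aside, the abstract does not spell out the estimator; under the standard Miyata et al. (1987) formulation, a Y-linked sequence is transmitted only through males while an X-linked sequence spends one third of its generations in males, so the expected substitution rates are proportional to $\mu_m$ and $(\mu_m + 2\mu_f)/3$, respectively. Writing $Z$ for the observed ZFY-to-ZFX rate ratio and $\alpha = \mu_m/\mu_f$,

$$Z = \frac{3\alpha}{\alpha + 2} \qquad\Longrightarrow\qquad \alpha = \frac{2Z}{3 - Z}.$$

For illustration, an observed ratio of $Z = 2.25$ would give $\alpha = 6$, the order of magnitude reported above for primates, while $Z = 1.5$ would give $\alpha = 2$, as reported for rodents.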
Abstract:
In the present review, we deliver an overview of the involvement of metabotropic glutamate receptor 5 (mGluR5) activity and density in pathological anxiety, mood disorders and addiction. Specifically, we describe mGluR5 studies in humans that employed Positron Emission Tomography (PET) and combine the findings with preclinical animal research. This combined view of different methodological approaches, from basic neurobiological approaches to human studies, might give a more comprehensive and clinically relevant view of mGluR5 function in mental health than preclinical data alone. We also review the current research data on mGluR5 along the Research Domain Criteria (RDoC). Firstly, we found evidence of abnormal glutamate activity related to the positive and negative valence systems, which would suggest that antagonistic mGluR5 intervention has prominent anti-addictive, anti-depressive and anxiolytic effects. Secondly, there is evidence that mGluR5 plays an important role in systems for social functioning and the response to social stress. Finally, mGluR5's important role in sleep homeostasis suggests that this glutamate receptor may play an important role in RDoC's arousal and modulatory systems domain. Glutamate was previously investigated mostly in non-human studies; however, initial human clinical PET research now also supports the hypothesis that, by mediating brain excitability, neuroplasticity and social cognition, abnormal metabotropic glutamate activity might predispose individuals to a broad range of psychiatric problems.
Abstract:
Tumor necrosis factor-related apoptosis-inducing ligand (Apo2L/TRAIL) is a member of the TNF superfamily of cytokines that can induce cell death through engagement of cognate death receptors. Unlike other death receptor ligands, it selectively kills tumor cells while sparing normal cells. Preclinical studies in non-human primates have generated much enthusiasm regarding its therapeutic potential. However, many human cancer cell lines exhibit significant resistance to TRAIL-induced apoptosis, and the molecular mechanisms underlying this resistance are controversial. Possible explanations are typically cell-type dependent, but include alterations of receptor expression, enhancement of pro-apoptotic intracellular signaling molecules, and reductions in anti-apoptotic proteins. We show here that the proteasome inhibitor bortezomib (Velcade, PS-341) produces synergistic apoptosis in both bladder and prostate cancer cell lines within 4-6 hours when co-treated with recombinant human TRAIL, an effect associated with p21 accumulation and cdk1/2 inhibition. Our data suggest that bortezomib's mechanism of action involves a p21-dependent enhancement of caspase maturation. Furthermore, we found enhanced tumor cell death in in vivo models using athymic nude mice. This is associated with increases in caspase-8 and caspase-3 cleavage as well as significant reductions in microvessel density (MVD) and proliferation. Although TRAIL alone had less of an effect, its biological significance as a single agent requires further investigation. Toxicity studies reveal that the combination of bortezomib and rhTRAIL has fatal consequences that can be circumvented by altering treatment schedules. Based on our findings, we conclude that this strategy has significant therapeutic potential as an anti-cancer agent.
Abstract:
Chlamydia pneumoniae is an obligate intracellular respiratory pathogen that causes 10% of community-acquired pneumonia and has been associated with cardiovascular disease. Both whole-genome sequencing and specific gene typing suggest that there is relatively little genetic variation in human isolates of C. pneumoniae. To date, there has been little genomic analysis of strains from human cardiovascular sites. The genotypes of C. pneumoniae present in human atherosclerotic carotid plaque were analysed and several polymorphisms in the variable domain 4 (VD4) region of the outer-membrane protein-A (ompA) gene and the intergenic region between the ygeD and uridine kinase (ygeD-urk) genes were found. While one genotype was identified that was the same as one reported previously in humans (respiratory and cardiovascular), another genotype was found that was identical to a genotype from non-human sources (frog/koala).
Abstract:
Risk management in healthcare comprises a set of complex actions implemented to improve the quality of healthcare services and guarantee patient safety. Risks cannot be eliminated, but they can be controlled with various risk assessment methods derived from industrial applications; among these, Failure Mode, Effects and Criticality Analysis (FMECA) is a widely used methodology. The main purpose of this work is the analysis of failure modes of the Home Care (HC) service provided by the local healthcare unit of Naples (ASL NA1), focusing attention on human and non-human factors according to the organizational framework selected by the WHO.
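As a purely illustrative sketch (the abstract does not give its scoring scheme, and the failure modes and scores below are hypothetical, not taken from the ASL NA1 study), FMEA/FMECA-style analyses commonly rank failure modes by a Risk Priority Number, the product of severity, occurrence and detectability scores:

```python
# Illustrative FMECA-style ranking sketch (hypothetical data, not from the ASL NA1 study).
# Each failure mode of the Home Care service is scored 1-10 for severity (S),
# occurrence (O) and detectability (D); the Risk Priority Number is S * O * D.
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    occurrence: int     # 1 (rare) .. 10 (frequent)
    detectability: int  # 1 (easily detected) .. 10 (almost undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detectability

# Hypothetical human, non-human and organisational failure modes.
modes = [
    FailureMode("Wrong drug dose administered at home (human factor)", 8, 3, 5),
    FailureMode("Portable infusion pump malfunction (non-human factor)", 7, 2, 4),
    FailureMode("Missed scheduled home visit (organisational factor)", 5, 4, 3),
]

# Rank failure modes by decreasing RPN to prioritise corrective actions.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {fm.rpn:3d}  {fm.description}")
```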
Abstract:
This project posits a link between representations of animals or animality and representations of illness in the Victorian novel, and examines the narrative uses and ideological consequences of such representations. Figurations of animality and illness in Victorian fiction have been examined extensively as distinct phenomena, but examining their connection allows for a more complex view of the role of sympathy in the Victorian novel. The commonplace in novel criticism is that Victorian authors, whether effectively or not, constructed their novels with a view to the expansion of sympathy. This dissertation intervenes in the discussion of the Victorian novel as a vehicle for sympathy by positing that texts and scenes in which representations of illness and animality are conjoined reveal where the novel draws the boundaries of the human, and the often surprising limits it sets on sympathetic feeling. In such moments, textual cues train or direct readerly sympathies in ways that suggest a particular definition of the human, but that direction of sympathy is not always towards an enlarged sympathy, or an enlarged definition of the human. There is an equally (and increasingly) powerful antipathetic impulse in many of these texts, which estranges readerly sympathy from putatively deviant, degenerate, or dangerous groups. These two opposing impulses—the sympathetic and the antipathetic—often coexist in the same novel or even the same scene, creating an ideological and affective friction, and both draw on the same tropes of illness and animality. Examining the intersection of these different discourses—sympathy, illness, and animality—in these novels reveals the way that major Victorian debates about human nature, evolution and degeneration, and moral responsibility shaped the novels of the era as vehicles for both antipathy and sympathy. Focusing on the novels of the Brontës and Thomas Hardy, this dissertation examines in depth the interconnected ways that representations of animals and animality and representations of illness function in the Victorian novel, as they allow authors to explore or redefine the boundary between the human and the non-human, the boundary between sympathy and antipathy, and the limits of sympathy itself.
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and much research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions.
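As a minimal, hedged sketch of the counting-and-sampling idea just described (the step count, rule count, and the simplification of modelling each construction step independently rather than with a full Bayesian network of conditional dependencies are assumptions for illustration only):

```python
# Sketch of BOA-style 'learning by counting' over promising rule strings (illustrative only).
# Simplification: each construction step (node) is modelled independently with a
# multinomial over rules, rather than with a full Bayesian network of dependencies.
import random
from collections import Counter

NUM_STEPS = 10  # scheduling decisions per solution (hypothetical)
NUM_RULES = 4   # candidate construction rules per decision (hypothetical)

def learn_model(promising_strings):
    """Estimate, per step, the probability of each rule among promising solutions."""
    model = []
    for step in range(NUM_STEPS):
        counts = Counter(s[step] for s in promising_strings)
        total = sum(counts.values())
        model.append([counts.get(rule, 0) / total for rule in range(NUM_RULES)])
    return model

def sample_rule_string(model):
    """Generate a new rule string by sampling each step from the learned distribution."""
    return [random.choices(range(NUM_RULES), weights=model[step])[0]
            for step in range(NUM_STEPS)]

# Toy usage: seed the model with hypothetical promising strings, then sample new candidates
# whose fitness would decide which previous strings they replace.
promising = [[random.randrange(NUM_RULES) for _ in range(NUM_STEPS)] for _ in range(20)]
model = learn_model(promising)
candidates = [sample_rule_string(model) for _ in range(5)]
print(candidates)
```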
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step is to assign each rule at each stage a constant initial strength. Then rules are selected by using the Roulette Wheel strategy. The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide a stepping stone towards a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
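Similarly, a hedged sketch of the three-step LCS-style procedure outlined in the abstract above (the initial strength, reward size, and placeholder fitness are illustrative assumptions, not values from the research):

```python
# Sketch of the LCS-style strength reinforcement described above (illustrative constants).
import random

NUM_STEPS = 10          # scheduling decisions per solution (hypothetical)
NUM_RULES = 4           # candidate rules per decision (hypothetical)
INITIAL_STRENGTH = 1.0  # step 1: constant initial strength for every rule at every stage
REWARD = 0.1            # assumed reinforcement added to rules used in the previous solution

# strengths[step][rule] holds the current usefulness of a rule at a given construction step.
strengths = [[INITIAL_STRENGTH] * NUM_RULES for _ in range(NUM_STEPS)]

def roulette_select(step):
    """Step 2: pick a rule with probability proportional to its current strength."""
    return random.choices(range(NUM_RULES), weights=strengths[step])[0]

def reinforce(solution, quality):
    """Step 3: strengthen the rules used in the previous solution; unused rules are unchanged."""
    for step, rule in enumerate(solution):
        strengths[step][rule] += REWARD * quality

# Toy loop: build a schedule as a rule string, score it with a placeholder fitness, and
# reinforce the rules it used; a fitness-based selection step would follow in the full algorithm.
for _ in range(50):
    solution = [roulette_select(step) for step in range(NUM_STEPS)]
    fitness = random.random()  # placeholder for a real schedule evaluation
    reinforce(solution, fitness)
print(strengths[0])
```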
Abstract:
Stem cell therapy for ischaemic stroke is an emerging field in light of an increasing number of patients surviving with permanent disability. Several allogeneic and autologous cell types are now in clinical trials with preliminary evidence of safety. Some clinical studies have reported functional improvements in some patients. After initial safety evaluation in a Phase 1 study, the conditionally immortalised human neural stem cell line CTX0E03 is currently in a Phase 2 clinical trial (PISCES-II). Previous pre-clinical studies conducted by ReNeuron Ltd showed evidence of functional recovery in the Bilateral Asymmetry test up to 6 weeks following transplantation into rodent brain, 4 weeks after middle cerebral artery occlusion. Resting-state fMRI is increasingly used to investigate brain function in health and disease, and may also act as a predictor of recovery due to known network changes in the post-stroke recovery period. Resting-state methods have also been applied to non-human primates and rodents, which have been found to have resting-state networks analogous to those of humans. The sensorimotor resting-state network of rodents is impaired following experimental focal ischaemia of the middle cerebral artery territory. However, the effects of stem cell implantation on brain functional networks have not previously been investigated. Prior studies assessed sensorimotor function following sub-cortical implantation of CTX0E03 cells in the rodent post-stroke brain, but without MRI assessment of functional improvement. This thesis presents research on the effect of sub-cortical implantation of CTX0E03 cells on the resting-state sensorimotor network and sensorimotor deficits in the rat following experimental stroke, using protocols based on previous work with this cell line. The work in this thesis identified functional tests of appropriate sensitivity for long-term dysfunction suitable for this laboratory, and investigated non-invasive monitoring of physiological variables required to optimize BOLD signal stability within a high-field MRI scanner. Following experimental stroke, rats demonstrated expected sensorimotor dysfunction and changes in the resting-state sensorimotor network. CTX0E03 cells did not improve post-stroke functional outcome (in contrast to previous studies), nor did they change resting-state sensorimotor network activity. However, in control animals, we observed changes in functional networks due to the stereotaxic procedure. This illustrates the sensitivity of resting-state fMRI to stereotaxic procedures. We hypothesise that the damage caused by cell or vehicle implantation may have prevented functional and network recovery, an effect not previously identified because different functional tests were used. The findings in this thesis represent one of the few pre-clinical studies of resting-state fMRI network changes post-stroke and the only one to date applying this technique to evaluate functional outcomes following a clinically applicable human neural stem cell treatment for ischaemic stroke. It was found that injury caused by stereotaxic injection should be taken into account when assessing the effectiveness of treatment.
Abstract:
Comparative and evolutionary developmental analyses seek to discover the similarities and differences between humans and non-human species that illuminate both the evolutionary foundations of our nature that we share with other animals, and the distinctive characteristics that make human development unique. As our closest animal relatives, with whom we most recently shared common ancestry, non-human primates have been particularly important in this endeavour. Such studies that have focused on social learning, traditions, and culture have discovered much about the ‘how’ of social learning, concerned with key underlying processes such as imitation and emulation. One of the core discoveries is that the adaptive adjustment of social learning options to different contexts is not unique to human infants; therefore, multiple new strands of research have begun to focus on more subtle questions about when, from whom, and why such learning occurs. Here we review illustrative studies on both human infants and young children and on non-human primates to identify the similarities shared more broadly across the primate order, and the apparent specialisms that distinguish human development. Adaptive biases in social learning discussed include those modulated by task comprehension, experience, conformity to majorities, and the age, skill, proficiency and familiarity of potential alternative cultural models.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
The expansion of the three-term contingency into units with four or more elements has opened new perspectives for understanding complex behaviour, such as the emergence of responses derived from the formation of equivalent stimulus classes, which model symbolic and conceptual behaviour. In experimental research, the matching-to-sample procedure has frequently been used to establish conditional discriminations. In particular, the attainment of generalised identity matching is taken as demonstrating acquisition of the concepts of sameness and difference. We argue that the attempt to understand these concepts through conditional discrimination processes may have been responsible for the frequent failures to demonstrate them in non-human subjects. The lack of correspondence between the discriminative processes responsible for establishing the reflexivity relation among stimuli forming equivalence classes and generalised identity matching is reviewed here across empirical studies and discussed with respect to its implications.