955 results for Chromosomes, Human, Pair 16
Abstract:
Background. Limited data exist on human immunodeficiency virus (HIV)-infected individuals' ability to work after receiving combination antiretroviral therapy (cART). We aimed to investigate predictors of regaining full ability to work at 1 year after starting cART. Methods. Antiretroviral-naive HIV-infected individuals <60 years who started cART from January 1998 through December 2012 within the framework of the Swiss HIV Cohort Study were analyzed. Inability to work was defined as a medical judgment of the patient's ability to work as 0%. Results. Of 5800 subjects, 4382 (75.6%) were fully able to work, 471 (8.1%) able to work part time, and 947 (16.3%) were unable to work at baseline. Of the 947 patients unable to work, 439 (46.3%) were able to work either full time or part time at 1 year of treatment. Predictors of recovering full ability to work were non-white ethnicity (odds ratio [OR], 2.06; 95% confidence interval [CI], 1.20-3.54), higher education (OR, 4.03; 95% CI, 2.47-7.48), and achieving HIV-ribonucleic acid <50 copies/mL (OR, 1.83; 95% CI, 1.20-2.80). Older age (OR, 0.55; 95% CI, .42-.72, per 10 years older) and psychiatric disorders (OR, 0.24; 95% CI, .13-.47) were associated with lower odds of ability to work. Recovering full ability to work at 1 year increased from 24.0% in 1998-2001 to 41.2% in 2009-2012, but the employment rates did not increase. Conclusions. Regaining full ability to work depends primarily on achieving viral suppression, absence of psychiatric comorbidity, and favorable psychosocial factors. The discrepancy between patients' ability to work and employment rates indicates barriers to reintegration of persons infected with HIV.
Abstract:
Objective: In humans and other animals, open, expansive postures (compared to contracted postures) are evolutionarily developed expressions of power and have been shown to cause neuroendocrine and behavioral changes (Carney, Cuddy, & Yap, 2010). In the present study we aimed to investigate whether power postures have a bearing on the participant's facial appearance and whether others are able to distinguish faces after "high power posing" from faces after "low power posing". Methods: 16 models were photographed 4-5 minutes after having adopted high and low power postures. Two different high power and two different low power postures were held for 2 minutes each. Power-posing sessions were performed on two consecutive days. High and low power photographs of each model were paired, and an independent sample of 100 participants was asked to pick the more dominant and the more likeable face of each pair. Results: Photographs taken after adopting high power postures were chosen significantly more often as the more dominant looking. There was no preference when participants were asked to choose the more likeable photograph (chance level). A further independent sample rated each photograph for head tilt, making it unlikely that dominance ratings were caused merely by the posture of the head. Consistently, facial width-to-height ratio did not differ between faces after high and low power posing. Conclusions: Postures associated with high power affect facial appearance, leading to a more dominant looking face. This finding may have implications for everyday life, for instance when a dominant appearance is needed.
Abstract:
Molecular mechanisms that underlie preleukemic myelodysplasia (MDS) and acute myelogenous leukemia (AML) are poorly understood. In MDS or AML with a refractory clinical course, more than 30% of patients have acquired interstitial or complete deletions of chromosome 5. The 5q13.3 chromosomal segment is commonly lost as the result of 5q deletion. Reciprocal and unbalanced translocations of 5q13.3 can also occur as sole anomalies associated with refractory AML or MDS. This study addresses the hypothesis that a critical gene at 5q13.3 functions either as a classical tumor suppressor or as a chromosomal translocation partner and contributes to leukemogenesis. Previous studies from our laboratory delineated a critical region of loss to a 2.5-3.0 Mb interval at 5q13.3 between microsatellite markers D5S672 and GATA-P18104. The critical region of loss was later resolved to an interval of approximately 2 Mb between the markers D5S672 and D5S2029. I then generated a long-range physical map of yeast artificial chromosomes (YACs) and developed novel sequence tagged sites (STSs). To enhance the resolution of this map, bacterial artificial chromosomes (BACs) were used to construct a triply linked contig across a 1 Mb interval. These BACs were used as probes for fluorescence in situ hybridization (FISH) on an AML cell line to define the 5q13.3 critical region. A 200 kb BAC, 484a9, spans the translocation breakpoint in this cell line. A novel gene, SSDP2 (single-stranded DNA binding protein), is disrupted at the breakpoint because its first four exons are encoded within 140 kb of BAC 484a9. This finding suggests that SSDP2 is the critical gene at 5q13.3. In addition, I observed that deletions of chromosome 5q13 co-segregate with loss of chromosome 17p. In some cases the deletions result from unbalanced translocations between 5q13 and 17p13. It was confirmed that the TP53 gene is deleted in patients with 17p loss, and that the remaining allele harbors a somatic mutation.
Thus, the aggressive clinical course in AML and MDS may be caused by functional cooperation between deletion or disruption of the 5q13.3 critical gene and inactivation of TP53.
Abstract:
Infection with certain types of HPV is a necessary event in the development of cervical carcinoma; however, not all women who become infected will progress. While much is known about the molecular influence of HPV E6 and E7 proteins on the malignant transformation, little is known about the additional factors needed to drive the process. Currently, conventional cervical screening is insufficient at identifying women who are likely to progress from premalignant lesions to carcinoma. Aneuploidy and chromatin texture from image cytometry have been suggested as quantitative measures of nuclear damage in premalignant lesions and cancer, and traditional epidemiologic studies have identified potential factors to aid in the discrimination of those lesions likely to progress. In the current study, real-time PCR was used to quantitate mRNA expression of the E7 gene in women exhibiting normal epithelium, LSIL, and HSIL. Quantitative cytometry was used to gather information about the DNA index and chromatin features of cells from the same women. Logistic regression modeling was used to establish predictor variables for histologic grade based on the traditional epidemiologic risk factors and molecular markers. Prevalence of mRNA transcripts was lower among women with normal histology (27%) than for women with LSIL (40%) and HSIL (37%), with mean levels ranging from 2.0 to 4.2. The transcriptional activity of HPV 18 was higher than that of HPV 16 and increased with increasing level of dysplasia, reinforcing the more aggressive nature of HPV 18. DNA index and mRNA level increased with increasing histological grade. Chromatin score was not correlated with histology but was higher for HPV 18 samples and those with both HPV 18 and HPV 16. However, chromatin score and DNA index were not correlated with mRNA levels.
The most predictive variables in the regression modeling were mRNA level, DNA index, parity, and age, and the ROC curves for LSIL and HSIL indicated excellent discrimination. Real-time PCR of viral transcripts could provide a more efficient method to analyze the oncogenic potential within cells from cervical swabs. Epidemiological modeling of malignant progression in the cervix should include molecular markers as well as the traditional epidemiological risk factors.
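The "excellent discrimination" reported for the ROC curves can be made concrete: the area under an ROC curve equals the probability that a randomly chosen case scores above a randomly chosen control (the normalized Mann-Whitney U statistic). A minimal sketch with hypothetical predictor scores (the values are illustrative, not the study's data):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC = probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case
    (the Mann-Whitney U statistic, normalized). Ties count half."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()   # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical model scores for HSIL cases vs normal-histology controls.
hsil = [0.9, 0.8, 0.75, 0.6]
normal = [0.4, 0.3, 0.55, 0.2]
print(roc_auc(hsil, normal))  # 1.0 -> perfect separation of these scores
```

An AUC of 0.5 corresponds to chance-level discrimination; values near 1.0 correspond to the "excellent discrimination" language of the abstract.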
Abstract:
Obesity and diabetes are metabolic disorders associated with fatty acid availability in excess of the tissues' capacity for fatty acid oxidation. This mismatch is implicated in the pathogenesis of cardiac contractile dysfunction and also in skeletal muscle insulin resistance. My dissertation presents work to test the overall hypothesis that "western" and high fat diets differentially affect cardiac and skeletal muscle fatty acid oxidation, the expression of fatty acid responsive genes, and cardiac contractile function. Wistar rats were fed a low fat, "western," or high fat (10%, 45%, or 60% calories from fat, respectively) diet for acute (1 day to 1 week), short (4 to 8 weeks), intermediate (16 to 24 weeks), or long (32 to 48 weeks) terms. With the high fat diet, cardiac oleate oxidation increased at all time points investigated. In contrast, with the western diet cardiac oleate oxidation increased in the acute, short and intermediate term, but not in the long term. Consistent with a maladaptation of fatty acid oxidation, cardiac power (measured ex vivo) decreased with long-term western diet only. In contrast to the heart, soleus muscle oleate oxidation increased only in the acute and short term with either western or high fat feeding. Transcript analysis revealed that several fatty acid responsive genes, including pyruvate dehydrogenase kinase 4, uncoupling protein 3, mitochondrial thioesterase 1, and cytosolic thioesterase 1, increased in heart and soleus muscle to a greater extent with the high fat diet than with the western diet. In conclusion, the data implicate inadequate induction of a cassette of fatty acid responsive genes in both the heart and skeletal muscle by the western diet, resulting in impaired activation of fatty acid oxidation and the development of cardiac dysfunction.
Abstract:
Background: With over 440 million cases of infection worldwide, genital HPV is the most frequent sexually transmitted infection. There are several types, including the high risk types 16, 18, 58 and 70 among others, which are known to cause cervical cell abnormality and, if persistent, can lead to cervical cancer, which globally claims 288,000 lives annually. Some 33.4 million people worldwide are currently living with HIV/AIDS, 22.4 million of them in sub-Saharan Africa, where 70% of the female population living with HIV/AIDS is also found. Shared risk factors for HPV, cervical cancer and HIV/AIDS include early age at sexual debut, multiple sexual partners, infrequent condom use, history of STI and immune suppression. Objectives: To describe the role of HPV in cervical cancer development, the influence of HIV/AIDS on HPV and on the development of cervical cancer, and the importance of preventive measures such as screening. Methods: This is a literature review in which data were analyzed qualitatively and a descriptive narrative style was used to evaluate and present the information. The data came from searches of the PubMed, Cochrane Library and EBSCO Medline databases as well as websites such as those of the CDC and WHO. Articles selected were published in English over the last 10 years. Keywords used included: 'HPV, cervical cancer and HIV', 'HIV and HPV', 'HPV and cervical cancer', 'HPV infection', 'HPV vaccine', 'genital HPV', 'HIV and cervical cancer', 'prevalence of HIV and cervical cancer' and 'prevalence of cervical cancer'. Results: Women with HIV/AIDS have multiple HPV types and persistent infection, are more likely to present with cervical neoplasia, and are at higher risk for cervical cancer. Research also shows that HIV could affect the transmissibility of HPV and that HPV itself could increase susceptibility to HIV acquisition. Conclusion: HIV, genital HPV and cervical cancer are all preventable.
Programs that aim to increase HIV/AIDS, HPV and cervical cancer awareness need to be emphasized, as does the importance of behavior modification such as frequent condom use, fewer sexual partners and delayed first intercourse. Programs for screening and treating HPV, male circumcision, effective management of HAART and HPV vaccination should be facilitated.
Abstract:
This study evaluated the administration-time-dependent effects of a stimulant (Dexedrine 5 mg), a sleep inducer (Halcion 0.25 mg) and placebo (control) on human performance. The investigation was conducted on 12 diurnally active (0700-2300) male adults (23-38 yrs) using a double-blind, randomized, six-way crossover, three-treatment, two-timepoint (0830 vs 2030) design. Performance tests were conducted hourly during sleepless 13-hour studies using a computer-generated, controlled and scored multi-task cognitive performance assessment battery (PAB) developed at the Walter Reed Army Institute of Research. Specific tests were Simple and Choice Reaction Time, Serial Addition/Subtraction, Spatial Orientation, Logical Reasoning, Time Estimation, Response Timing and the Stanford Sleepiness Scale. The major index of performance was "Throughput", a combined measure of speed and accuracy. For the placebo condition, Single and Group Cosinor Analysis documented circadian rhythms in cognitive performance for the majority of tests, both for individuals and for the group. Performance was best around 1830-2030 and most variable around 0530-0700, when sleepiness was greatest (0300). Morning Dexedrine dosing marginally enhanced performance, by an average of 3% relative to the corresponding-in-time control level; it also increased alertness by 10% over the AM control. Dexedrine PM failed to improve performance relative to the corresponding PM control baseline. Comparing AM and PM Dexedrine administrations, AM performance was 6% better, with subjects 25% more alert. Morning Halcion administration caused a 7% performance decrement and a 16% increase in sleepiness; evening administration caused a 13% decrement and a 10% increase in sleepiness, relative to corresponding-in-time control data.
Performance was 9% worse and sleepiness 24% greater after evening versus morning Halcion administration. These results suggest that, for evening Halcion dosing, the overnight sleep deprivation coinciding with the circadian nadir in performance, together with the drug's CNS-depressant effects, combines to produce performance degradation. For Dexedrine, morning administration resulted in only marginal performance enhancement, and evening administration was less effective, suggesting that the 5-mg dose level may be too low to counteract the partial sleep deprivation and nocturnal nadir in performance.
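The cosinor analysis cited for the placebo condition fits a cosine of fixed period (24 h for circadian rhythms) by ordinary least squares on a linearized model. A minimal single-cosinor sketch on synthetic data (the peak time and values are illustrative, not the study's measurements):

```python
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    """Single cosinor: fit y = M + A*cos(2*pi*t/period - phi)
    by least squares on the linearized form
    y = M + b1*cos(w*t) + b2*sin(w*t), where A = hypot(b1, b2)."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    (M, b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)
    A = np.hypot(b1, b2)       # amplitude of the rhythm
    phi = np.arctan2(b2, b1)   # acrophase (radians)
    return M, A, phi

# Synthetic hourly "throughput" with a peak near 19:30 (illustrative only).
t = np.arange(0, 24.0)
y = 100 + 8 * np.cos(2 * np.pi * (t - 19.5) / 24)
M, A, phi = cosinor_fit(t, y)
print(round(M, 1), round(A, 1))  # 100.0 8.0
```

The fitted mesor M, amplitude A and acrophase phi are the standard cosinor rhythm parameters; on noiseless data the fit recovers them exactly.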
Abstract:
Many lines of clinical and experimental evidence indicate a viral role in carcinogenesis (1-6). Our access to patient plasma, serum, and tissue samples from invasive breast cancer (N=19), ductal carcinoma in situ (N=13), malignant ovarian cancer (N=12), and benign ovarian tumors (N=9), via IRB-approved and informed-consent protocols through M.D. Anderson Cancer Center, as well as normal donor plasmas purchased from Gulf Coast Regional Blood Center (N=6), has allowed us to survey primary patient blood and tissue samples, healthy donor blood from the general population, and commercially available human cell lines for the presence of human endogenous retrovirus K (HERV-K) Env viral RNA (vRNA), protein, and viral particles. We hypothesize that HERV-K proteins are tumor-associated antigens and as such can be profiled and targeted in patients for diagnostic and therapeutic purposes. To test this hypothesis, we employed isopycnic ultracentrifugation, a microplate-based reverse transcriptase enzyme activity assay, reverse transcription-polymerase chain reaction (RT-PCR), cDNA sequencing, SDS-PAGE and western blotting, immunofluorescent staining, confocal microscopy, and transmission electron microscopy to evaluate HERV-K activation in cancer. Data from large numbers of patients tested by the reverse transcriptase activity assay were analyzed statistically by t-test to determine the potential use of this assay as a diagnostic tool for cancer. Significant reverse transcriptase enzyme activity was detected in 75% of ovarian cancer, 53.8% of ductal carcinoma in situ, and 42.1% of invasive breast cancer patient samples. Only 11.1% of benign ovarian tumor and 16.7% of normal donor samples tested positive. HERV-K Env vRNA or Env SU protein was detected in the majority of cancer types screened, as demonstrated by the results shown herein, and was largely absent in normal controls.
These findings support our hypothesis that the presence of HERV-K in patient blood circulation is an indicator of cancer or pre-malignancy in vivo, that the presence of HERV-K Env on tumor cell surfaces is indicative of malignant phenotype, and that HERV-K Env is a tumor-associated antigen useful not only as a diagnostic screening tool to predict patient disease status, but also as an exploitable therapeutic target for various novel antibody-based immunotherapies.
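The group comparison described above (reverse transcriptase activity in patient versus control samples, analyzed by t-test) can be sketched as a two-sample Welch t-test. The readings below are hypothetical, made up purely for illustration, and are not the study's data:

```python
from scipy import stats

# Hypothetical RT-activity readings (arbitrary units) for two groups,
# standing in for patient vs benign-control measurements.
ovarian_ca = [4.1, 5.3, 6.0, 4.8, 5.5, 4.9, 6.2, 5.1, 4.4, 5.8, 5.0, 4.6]
benign     = [2.8, 3.1, 2.5, 3.4, 2.9, 3.3, 2.6, 3.0, 3.2]

# Welch's t-test (unequal variances) comparing the two groups.
t_stat, p_val = stats.ttest_ind(ovarian_ca, benign, equal_var=False)
print(p_val < 0.05)  # True for this clearly separated synthetic data
```

A p-value below the chosen significance level would support treating the assay readout as discriminating the two groups, which is the kind of inference the abstract describes.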
Abstract:
Chronic lymphocytic leukemia (CLL) is the most common adult leukemia in the United States and Europe. CLL patients with deletion of chromosome 17p, where the tumor suppressor gene p53 is located, often develop a more aggressive disease with poor clinical outcomes; however, the underlying mechanism remains unclear. To investigate this mechanism in vivo, I recently generated mice with the Eu-TCL1-Tg:p53-/- genotype and showed that these mice develop an aggressive leukemia that resembles human CLL with 17p deletion. The Eu-TCL1-Tg:p53-/- mice developed CLL at 3-4 months, significantly earlier than the parental Eu-TCL1-Tg mice, which developed CLL at 8-12 months. Flow cytometry analysis showed that the CD5+/IgM+ cell population appeared in the peritoneal cavity, bone marrow, and spleens of Eu-TCL1-Tg:p53-/- mice significantly earlier than in the parental Eu-TCL1-Tg mice. Massive infiltration and accumulation of leukemia cells were found in the spleen and peritoneal cavity. In vitro, leukemia cells isolated from the Eu-TCL1-Tg:p53-/- mice were more resistant to fludarabine treatment than leukemia cells isolated from spleens of Eu-TCL1-Tg mice. Interestingly, TUNEL assay revealed more apoptotic cell death in Eu-TCL1-Tg spleen tissue than in the spleens of Eu-TCL1-Tg:p53-/- mice, suggesting that loss of p53 compromises the apoptotic process in vivo; this might in part explain the drug-resistant phenotype of CLL cells with 17p deletion. In the present study, we further demonstrated that p53 deficiency in the TCL1 transgenic mice resulted in significant down-regulation of the microRNAs miR-15a and miR-16-1, associated with a substantial up-regulation of Mcl-1, suggesting that the p53-miR-15a/16-Mcl-1 axis may play an important role in CLL pathogenesis.
Interestingly, we also found that loss of p53 resulted in a significant decrease in expression of the miR-30 family, especially miR-30d, in leukemia lymphocytes from the Eu-TCL1-Tg:p53-/- mice. The same down-regulation of these microRNAs and up-regulation of Mcl-1 were also found in primary leukemia cells from CLL patients with 17p deletion. To further examine the biological significance of the decrease in the miR-30 family in CLL, we investigated the potential involvement of EZH2 (enhancer of zeste homolog 2), a component of the Polycomb repressive complex known to be a downstream target of miR-30d and to play a role in disease progression in several solid cancers. RT-PCR and western blot analyses showed that both EZH2 mRNA transcript and protein levels were significantly increased in the lymphocytes of Eu-TCL1-Tg:p53-/- mice relative to Eu-TCL1-Tg mice. Exposure of leukemia cells isolated from Eu-TCL1-Tg:p53-/- mice to the EZH2 inhibitor 3-deazaneplanocin (DZNep) induced apoptosis, suggesting that EZH2 may promote CLL cell survival and thus contribute to the aggressive phenotype of CLL with loss of p53. Our study has created a novel CLL mouse model and suggests that the p53-miR-15a/16-Mcl-1 and p53-miR-30d-EZH2 axes may contribute to the aggressive phenotype and drug resistance of CLL cells with loss of p53.
Abstract:
The potential effects of the E1A gene products on the promoter activity of neu were investigated. Transcription of the neu oncogene was found to be strongly repressed by the E1A gene products, and this repression requires conserved region 2 of the E1A proteins. The target for E1A repression was localized within a 140 base pair (bp) DNA fragment in the upstream region of the neu promoter. To study whether this transcriptional repression of neu by E1A can inhibit the transforming ability of neu-transformed cells, the E1A gene was introduced into the neu oncogene-transformed B104-1-1 cells to develop B-E1A cell lines that express E1A proteins. These stable B-E1A transfectants have reduced transforming activity compared to the parental B104-1-1 cell line, and we conclude that E1A can suppress the transformed phenotypes of neu oncogene-transformed cells via transcriptional repression of neu. To study the effects of E1A on metastasis, we first introduced the mutation-activated rat neu oncogene into 3T3 cells and showed that both the neu-transformed NIH3T3 cells and Swiss Webster 3T3 cells exhibited metastatic properties in vitro and in vivo, while their parental 3T3 cells did not. Additionally, the neu-specific monoclonal antibody 7.16.4, which can down-regulate the neu-encoded p185 protein, effectively reduced the metastatic properties induced by neu. To investigate whether E1A can reduce the metastatic potential of neu-transformed cells, we also compared the metastatic properties of the B-E1A cell lines and B104-1-1 cells. The B-E1A cell lines showed reduced invasiveness and lung colonization compared with the parental neu-transformed B104-1-1 cells. We conclude that the E1A gene products also have an inhibitory effect on the metastatic phenotypes of neu oncogene-transformed cells. The product of the human retinoblastoma (RB) susceptibility gene has been shown to complex with E1A gene products and is speculated to regulate gene expression.
We therefore investigated whether the E1A-RB interaction might be involved in the regulation of neu oncogene expression. We found that the RB gene product can decrease the E1A-mediated repression of the neu oncogene, and that the E1A-binding region of the RB protein is required for this derepression function.
Abstract:
Human papillomavirus selectively infects the epithelium of the skin and mucosae. Infections may be asymptomatic, or may produce wart-like lesions or lesions associated with various benign or malignant neoplasias, mainly of the upper respiratory tract and the oral cavity. We present the case of a girl with oral lesions caused by HPV. Clinically, some lesions are raised, pedunculated and papillary-surfaced; others are flat and diffuse on a sessile base.
Abstract:
This project deals with the technologies used for object detection and recognition, particularly of leaves and chromosomes. The document therefore has the typical structure of a scientific paper: an abstract, an introduction, sections covering the area under investigation, future work, conclusions, and the references used in its preparation. The abstract summarizes what the paper covers: the technologies employed for pattern detection and recognition of leaves and chromosomes, and the existing work on cataloguing these objects. The introduction explains what detection and recognition mean. This is necessary because many papers confuse the two terms, especially those dealing with chromosomes. Detecting an object means isolating the parts of the image that are useful and discarding the useless parts; in short, detection amounts to finding the object's borders. Recognition refers to the process by which the computer, or machine, determines what kind of object it is handling. Next comes a survey of the technologies most commonly used for object detection in general, which fall into two main groups: those based on image derivatives and those based on ASIFT points. The derivative-based methods have in common that the image is processed by convolution with a predefined matrix. This is done to detect borders in the image, which are changes in pixel intensity. Within these technologies there are two groups: gradient-based methods, which search for maxima and minima of pixel intensity because they use only the first derivative, and Laplacian-based methods, which search for zero-crossings of the pixel intensity because they use the second derivative.
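The two derivative-based families can be illustrated on a toy image. The sketch below uses `scipy.ndimage` as an assumed tooling choice (the text names no implementation): the Sobel operator approximates the first derivative, whose magnitude peaks at edges, while the Laplacian approximates the second derivative, which changes sign there.

```python
import numpy as np
from scipy import ndimage

# Toy image: a bright square on a dark background.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0

# Gradient-based detection (first derivative): edges appear as
# maxima of the gradient magnitude.
gx = ndimage.sobel(img, axis=1)   # horizontal derivative
gy = ndimage.sobel(img, axis=0)   # vertical derivative
grad_mag = np.hypot(gx, gy)

# Laplacian-based detection (second derivative): edges appear as
# zero-crossings, i.e. the response is nonzero only near borders.
lap = ndimage.laplace(img)

# Both respond at the square's border and not in flat regions.
print(grad_mag[5, 2] > 0, lap[3, 3] != 0, grad_mag[0, 0] == 0)
```

Thresholding `grad_mag`, or locating sign changes in `lap`, yields the border pixels that the text calls detection; the gradient route is cheaper, the Laplacian route more detailed, matching the trade-off discussed below.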
The choice between them depends on the level of detail wanted in the final result: as is logical, gradient-based methods involve fewer operations, so the computer consumes less time and fewer resources, but the quality is worse; Laplacian-based methods need more time and resources because they require more operations, but they give a much better quality result. After explaining the derivative-based methods, we look at the different algorithms available in each group. The other large group of object recognition technologies is based on ASIFT points, which describe an image by six parameters and compare it with another image taking those parameters into account. The disadvantage of these methods, for our future purposes, is that they are only valid for one specific object: if we want to recognize two different leaves, even of the same species, we will not be able to recognize them with this method. It is still important to mention these technologies, since we are discussing recognition methods in general. The chapter ends with a comparison of the pros and cons of all the technologies covered, first separately and then all together, in light of our purposes. The next chapter, on recognition techniques, is not very extensive because, although there are general steps for object recognition, every object to be recognized needs its own method, as they are all different; this is why no general method can be specified in that chapter. We then move on to computer-based leaf detection, using the derivative-based technique explained above. The next step is to turn the leaf into a set of parameters; depending on the document consulted, there are more or fewer parameters.
Some papers recommend dividing the leaf into 3 main features (shape, dent and vein), from which mathematical operations yield up to 16 secondary features. Another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area and perimeter), from which 12 secondary features are extracted. This second alternative is the most widely used, so it is the one taken as the reference. Moving on to leaf recognition, we rely on a paper that provides source code which, after the user clicks on the two ends of the leaf, automatically reports the species the leaf belongs to; all it requires is a database. In the tests reported in that document, the authors claim 90.312% accuracy over 320 tests in total (32 plants in the database, 10 tests per species). The next chapter deals with chromosome detection, where the metaphase plate, in which the chromosomes lie disorganized, must be converted into the karyotype plate, the usual view of the chromosomes ordered by number. There are two types of techniques for this step: skeletonization and angle sweeping. Skeletonization consists of suppressing the interior pixels of the chromosome to keep only its silhouette; it is very similar to the derivative-based methods, except that it detects not the borders but the interior of the chromosome. The second technique sweeps angles from one end of the chromosome and, taking into account that a single chromosome cannot bend by more than X degrees, detects the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition. For this there is a technique based on the grey-scale banding pattern that makes each chromosome unique: the program detects the longitudinal axis of the chromosome and reconstructs the band profiles.
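Band-profile reconstruction, as described above, can be sketched by averaging grey levels row by row along the chromosome's longitudinal axis. The minimal version below assumes an already straightened, vertically aligned chromosome and a known binary mask; real pipelines first estimate the medial axis before sampling the profile.

```python
import numpy as np

def band_profile(chrom_img, mask):
    """Mean grey level at each position along the chromosome's
    longitudinal axis (approximated here by the image rows),
    averaged over the pixels inside the chromosome mask."""
    prof = []
    for row_vals, row_mask in zip(chrom_img, mask):
        inside = row_vals[row_mask]          # pixels belonging to the chromosome
        prof.append(inside.mean() if inside.size else 0.0)
    return np.array(prof)

# Toy straight "chromosome": alternating dark/light bands down the rows.
img = np.tile(np.array([[0.2], [0.2], [0.8], [0.8], [0.3]]), (1, 4))
mask = np.ones_like(img, dtype=bool)
print(band_profile(img, mask))  # [0.2 0.2 0.8 0.8 0.3]
```

The resulting 1-D profile is what a classifier would compare against reference banding patterns to identify the chromosome.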
Then the computer is able to recognize the chromosome. Concerning future work, we currently have two independent techniques that do not combine detection and recognition, so our main goal would be a program that brings both together. On the leaf side, detection and recognition are already linked in that both share the division of the leaf into 5 main features; the work to be done is an algorithm connecting the two methods, because in the existing recognition program the user has to click on both leaf ends, so it is not automatic. On the chromosome side, we should create an algorithm that finds the starting end of the chromosome and then sweeps angles, and afterwards passes the parameters to the program that searches the band profiles. Finally, the summary explains why this kind of research is needed: with global warming, many species of animals and plants are becoming extinct, which is why a large database gathering all possible species is needed. To recognize an animal species, only its chromosomes are needed; to recognize a plant there are several approaches, but the easiest input for a computer is a scan of the plant's leaf.
Para ello existe una técnica basada en las bandas de blancos y negros que tienen los cromosomas y que son las que los hacen únicos. Para ello el programa detecta los ejes longitudinales del cromosoma y reconstruye los perfiles de las bandas que posee el cromosoma y que lo identifican como único. En cuanto al trabajo que se podría desempeñar en el futuro, tenemos por lo general dos técnicas independientes que no unen la detección con el reconocimiento por lo que se habría de preparar un programa que uniese estas dos técnicas. Respecto a las hojas hemos visto que ambos métodos, detección y reconocimiento, están vinculados debido a que ambos comparten la opinión de dividir las hojas en 5 parámetros principales. El trabajo que habría que realizar sería el de crear un algoritmo que conectase a ambos ya que en el programa de reconocimiento se debe clicar a los dos extremos de la hoja por lo que no es una tarea automática. En cuanto a los cromosomas, se debería de crear un algoritmo que busque el inicio del cromosoma y entonces empiece a barrer ángulos para después poder dárselo al programa que busca los perfiles de bandas de los cromosomas. Finalmente, en el resumen se explica el por qué hace falta este tipo de investigación, esto es que con el calentamiento global, muchas de las especies (tanto animales como plantas] se están empezando a extinguir. Es por ello que se necesitará una base de datos que contemple todas las posibles especies tanto del reino animal como del reino vegetal. Para reconocer a una especie animal, simplemente bastará con tener sus 23 cromosomas; mientras que para reconocer a una especie vegetal, existen diversas formas. Aunque la más sencilla de todas es contar con la hoja de la especie puesto que es el elemento más fácil de escanear e introducir en el ordenador.
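The recognition step mentioned above, reconstructing the banding profile along a chromosome's longitudinal axis and matching it against a database, can be sketched as follows. This is a minimal illustration assuming a grayscale image and pre-computed axis points; the fixed-length resampling and normalized-correlation matching are assumptions, not the implementation used in the cited work.

```python
import numpy as np

def band_profile(image, axis_points, n_samples=64):
    """Sample grey-level intensity along the chromosome's longitudinal
    axis and resample it to a fixed length, giving a banding profile."""
    rows = np.clip(axis_points[:, 0].astype(int), 0, image.shape[0] - 1)
    cols = np.clip(axis_points[:, 1].astype(int), 0, image.shape[1] - 1)
    raw = image[rows, cols].astype(float)
    # resample to a fixed length so chromosomes of different sizes
    # can be compared against the same database entries
    xs = np.linspace(0, len(raw) - 1, n_samples)
    return np.interp(xs, np.arange(len(raw)), raw)

def match_profile(profile, database):
    """Return the database key whose stored profile correlates best
    (normalized cross-correlation) with the measured profile."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float(np.mean(a * b))
    return max(database, key=lambda k: ncc(profile, database[k]))
```

In a full pipeline the axis points would come from the detection stage (skeletonization or angle sweeping), which is exactly the missing link identified above as future work.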
Resumo:
Combining transcranial magnetic stimulation (TMS) and electroencephalography (EEG) constitutes a powerful tool to directly assess human cortical excitability and connectivity. TMS of the primary motor cortex elicits a sequence of TMS-evoked EEG potentials (TEPs). It is thought that inhibitory neurotransmission through GABA-A receptors (GABAARs) modulates early TEPs (<50 ms after TMS), whereas GABA-B receptors (GABABRs) play a role in later TEPs (at ∼100 ms after TMS). However, the physiological underpinnings of TEPs have not yet been clearly elucidated. Here, we studied the role of GABAA/B-ergic neurotransmission for TEPs in healthy subjects using a pharmaco-TMS-EEG approach. In Experiment 1, we tested the effects of a single oral dose of alprazolam (a classical benzodiazepine acting as a positive allosteric modulator at α1-, α2-, α3-, and α5-subunit-containing GABAARs) and zolpidem (a positive modulator mainly at the α1 GABAAR) in a double-blind, placebo-controlled, crossover study. In Experiment 2, we tested the influence of baclofen (a GABABR agonist) and diazepam (a classical benzodiazepine) versus placebo on TEPs. Alprazolam and diazepam increased the amplitude of the negative potential at 45 ms after stimulation (N45) and decreased the negative component at 100 ms (N100), whereas zolpidem increased the N45 only. In contrast, baclofen specifically increased the N100 amplitude. These results provide strong evidence that the N45 represents activity of α1-subunit-containing GABAARs, whereas the N100 represents activity of GABABRs. These findings open a novel window of opportunity to study alterations of GABAA-/GABAB-related inhibition in disorders such as epilepsy or schizophrenia.
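As a rough illustration of how such components might be quantified, the sketch below extracts mean amplitudes in fixed latency windows from a single TMS-locked EEG epoch. The window bounds and the mean-amplitude approach are illustrative assumptions, not the analysis pipeline of the study.

```python
import numpy as np

def tep_amplitude(epoch, times, window):
    """Mean amplitude of a TMS-evoked potential inside a latency window
    (bounds in seconds relative to the TMS pulse)."""
    mask = (times >= window[0]) & (times <= window[1])
    return float(epoch[mask].mean())

# illustrative latency windows (assumed, not from the study protocol)
WINDOWS = {"N45": (0.035, 0.055), "N100": (0.085, 0.115)}

def tep_components(epoch, times):
    """Quantify each named TEP component of a single epoch."""
    return {name: tep_amplitude(epoch, times, w) for name, w in WINDOWS.items()}
```

On this scheme, a drug effect such as "alprazolam increased the N45" would show up as a more negative value in the N45 window relative to placebo.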
Resumo:
The stabilizing effect of grouping rotor blades in pairs has been assessed both numerically and experimentally. The bending and torsion modes of a low-aspect-ratio, high-speed turbine cascade tested in the non-rotating test facility at EPFL (Ecole Polytechnique Fédérale de Lausanne) were chosen as the case study. The controlled vibration of 20 blades in travelling-wave form was performed by means of an electromagnetic excitation system, enabling the adjustment of the vibration amplitude and inter-blade phase at a given frequency. Unsteady pressure transducers located along the blade mid-section were used to obtain the modulus and phase of the unsteady pressure caused by the airfoil motion. The stabilizing effect in the torsion mode was clearly observed both in the experiments and in the simulations; however, for the bending mode the effect of grouping the blades in pairs on the minimum damping at the tested frequency was marginal. A numerical tool was validated against the available experimental data and then used to extend the results to lower and more relevant reduced frequencies. It is shown that the stabilizing effect exists for both the bending and torsion modes in the frequency range typical of low-pressure turbines. It is concluded that the stabilizing effect of this configuration is due to the shielding, by the pressure side of the airfoil that bounds the passage of the pair, of the suction side of the same passage, since the relative motion between the two is null. This effect is observed both in the experiments and in the simulations.
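The stability assessment described here ultimately rests on the cycle-averaged aerodynamic work: for harmonic blade motion h(t) = H sin(ωt) and unsteady pressure p(t) = P sin(ωt + φ), the work per cycle is W = πPH sin φ, so the measured modulus and phase directly determine whether each measurement point damps or feeds the vibration. Below is a minimal sketch with assumed sign conventions (positive W meaning the fluid feeds energy into the blade, i.e. destabilizing); it is an illustration of the principle, not the validated numerical tool.

```python
import numpy as np

def aero_work_per_cycle(p_mod, p_phase, h_amp):
    """Cycle-averaged aerodynamic work done by a harmonic pressure
    p(t) = P*sin(w*t + phi) on a blade moving as h(t) = H*sin(w*t):
    W = pi * P * H * sin(phi), with phi the pressure phase relative
    to the motion. Positive W is destabilizing (energy into the blade)."""
    return np.pi * np.asarray(p_mod) * h_amp * np.sin(np.asarray(p_phase))

def net_aero_work(p_mods, p_phases, h_amp):
    """Net work summed over mid-section transducer locations;
    a negative total indicates aerodynamic damping (stable)."""
    return float(np.sum(aero_work_per_cycle(p_mods, p_phases, h_amp)))
```

With data in this form, the "minimum damping" quoted in the abstract corresponds to the least-damped inter-blade phase angle of the travelling-wave mode.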
Resumo:
This document is a summary of the Bachelor thesis titled “VHDL-Based System Design of a Cognitive Sensorimotor Loop (CSL) for Haptic Human-Machine Interaction (HMI)”, written by Pablo de Miguel Morales, an Electronics Engineering student at the Universidad Politécnica de Madrid (UPM, Madrid, Spain), during an Erasmus+ exchange program at the Beuth Hochschule für Technik (BHT, Berlin, Germany). The tutor of this project is Dr. Prof. Hild. The project was developed inside the Neurorobotics Research Laboratory (NRL) in close collaboration with Benjamin Panreck, a member of the NRL, and with another exchange student from the UPM, Pablo Gabriel Lezcano. For a full understanding of the thesis, the complete document should be read, together with the accompanying videos and the VHDL design. In the growing field of automation, a large amount of effort is dedicated to improving, adapting and designing motor controllers for a wide variety of applications. In the specific field of robotics and other machinery designed to interact with humans or their environment, new needs and technological solutions keep being discovered, since this remains a relatively new and unexplored scenario. The project consisted of three main parts: two VHDL-based systems and one short experiment on haptic perception. Both VHDL systems are based on the Cognitive Sensorimotor Loop (CSL), a control loop designed by the NRL and mainly developed by Dr. Prof. Hild. The main characteristic of the CSL is that it does not use any external sensor to measure the speed or position of the motor, but uses the motor itself: the motor always generates a voltage proportional to its angular speed, so no calibration is needed. This method is energy efficient and simplifies control loops in complex systems.
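The sensorless principle behind the CSL, using the motor's back-EMF (proportional to angular speed) as the only feedback signal, can be illustrated with a toy simulation. All dynamics and constants below are illustrative assumptions, not the thesis implementation:

```python
def simulate_csl(steps=200, dt=0.01, k_emf=0.2, gain=50.0,
                 k_torque=0.5, inertia=0.5, external_torque=1.0):
    """Toy sensorless (CSL-style) loop: in each cycle the driver is
    conceptually switched off, the back-EMF (proportional to angular
    speed) is read, and the next drive voltage opposes the measured
    motion. Returns the speed trajectory. Constants are illustrative."""
    omega, speeds = 0.0, []
    for _ in range(steps):
        back_emf = k_emf * omega          # measurement phase: motor as sensor
        drive = -gain * back_emf          # drive phase: oppose the motion
        torque = k_torque * drive + external_torque
        omega += (torque / inertia) * dt  # integrate simple motor dynamics
        speeds.append(omega)
    return speeds
```

Here a constant external torque plays the role of an object pushing on the mechanism; with feedback the loop settles at a small residual speed instead of running away, which is the kind of compliant, contact-keeping behavior the SIT system described below exploits.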
The first system, named CSL Stay In Touch (SIT), consists of a single-DC-motor system controlled by an FPGA board (Zynq ZYBO 7000) whose aim is to keep contact, in both directions, with any external object that touches its Sensing Platform. Apart from the main behavior, three features (Search Mode, Inertia Mode and Return Mode) have been designed to enhance the haptic interaction experience. Additionally, a VGA screen is also controlled by the FPGA board for monitoring the whole system. This system has been completely developed, tested and improved, and its timing and power-consumption properties have been analyzed. The second system, named CSL Fingerlike Mechanism (FM), consists of a fingerlike mechanical system controlled by two DC motors (each controlling one part of the finger). Its behavior is similar to the first system but within a more complex structure. This system was optional, not part of the original objectives of the thesis, and could not be properly finished and tested due to lack of time. The haptic perception experiment was conducted to gain insight into the complexity of human haptic perception in order to apply this knowledge in technological applications. The experiment tested the subjects' ability to recognize different objects and shapes while blindfolded and with their ears covered. Two groups were formed: one had full haptic perception, while the other had to explore the environment with a plastic piece attached to the finger, creating a haptic handicap. The conclusion of the thesis was that a haptic system based only on a CSL is not enough to retrieve valuable information from the environment and that other sensors are needed (temperature, pressure, etc.), but that a CSL-based system is very useful for controlling the force the system applies when interacting with haptically sensitive surfaces such as skin or touch screens.