969 results for metric access methods
Abstract:
This study evaluated histopathologically different methods of experimental induction of periapical periodontitis. The radiographic and microbiological evaluations were performed in a previous investigation. Fifty-seven root canals from dogs' teeth were assigned to 4 groups. In GI (n=14) and GII (n=14), the root canals were exposed to the oral environment for 180 days; in GIII (n=14) and GIV (n=15), the root canals were exposed for 7 days, after which the access cavities were restored and remained sealed for 53 days. The root apices of GI and GIII were perforated, whilst those of GII and GIV remained intact. After induction of periapical periodontitis, the dogs were euthanized. Serial sections were obtained and stained with hematoxylin and eosin. Data from the histopathological evaluation were submitted to Kruskal-Wallis and Dunn's tests at a 5% significance level. The inflammatory periapical reaction and resorption of mineralized tissues were less intense in GII than in the other groups (p<0.05). There was no histopathological difference among the experimentally induced periapical lesions in the teeth with coronal sealing. On the other hand, when coronal sealing was not performed, greater intensity of induced periapical periodontitis was observed in the teeth with apical perforation.
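The statistical step described above can be sketched in a few lines. This is a hedged illustration with invented ordinal inflammation scores for the four groups, not the study's data; when the omnibus test is significant, Dunn's post hoc test would follow (available, e.g., in the scikit-posthocs package).

```python
# Hedged sketch: Kruskal-Wallis omnibus test on hypothetical ordinal
# inflammation scores for four groups (GI-GIV). Scores are invented
# for illustration, not the study's histopathological data.
from scipy.stats import kruskal

gi   = [3, 3, 2, 3, 2, 3, 3]   # hypothetical scores: open canal, perforated apex
gii  = [1, 1, 2, 1, 1, 1, 2]   # open canal, intact apex (least intense in the study)
giii = [2, 3, 2, 3, 3, 2, 3]   # sealed canal, perforated apex
giv  = [3, 2, 3, 3, 2, 3, 2]   # sealed canal, intact apex

h, p = kruskal(gi, gii, giii, giv)
print(f"H = {h:.2f}, p = {p:.4f}")
if p < 0.05:
    print("At least one group differs; follow with Dunn's post hoc test")
```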
Abstract:
Chemometric (statistical) methods are employed to classify a set of neolignan-derived compounds with biological activity against Paracoccidioides brasiliensis. The AM1 (Austin Model 1) method was used to calculate a set of molecular descriptors (properties) for the compounds under study. The descriptors were then analyzed using the following pattern recognition methods: Principal Component Analysis (PCA), Hierarchical Cluster Analysis (HCA), and K-Nearest Neighbors (KNN). The PCA and HCA methods proved quite efficient at classifying the studied compounds into two groups (active and inactive). Three molecular descriptors were responsible for the separation between active and inactive compounds: the energy of the highest occupied molecular orbital (EHOMO), the bond order between atoms C1'-R7 (L14), and the bond order between atoms C5'-R6 (L22). Since the variables responsible for the separation between active and inactive compounds are electronic descriptors, it can be concluded that electronic effects may play an important role in the interaction between the biological receptor and neolignan-derived compounds with activity against Paracoccidioides brasiliensis.
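The pattern-recognition step can be sketched with scikit-learn. The descriptor matrix below is invented for illustration (hypothetical EHOMO and bond-order values for six compounds); the study's descriptors came from AM1 calculations, and HCA/KNN would proceed analogously (e.g., via scipy.cluster.hierarchy or KNeighborsClassifier).

```python
# Hedged sketch of the PCA step on a small, hypothetical matrix of the
# three electronic descriptors named in the study (EHOMO, bond orders
# L14 and L22). All values are invented for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# rows = compounds, columns = [EHOMO (eV), L14, L22]
X = np.array([
    [-8.9, 0.98, 1.02],   # hypothetically "active" compounds
    [-8.8, 0.97, 1.01],
    [-8.7, 0.99, 1.03],
    [-9.6, 0.80, 0.85],   # hypothetically "inactive" compounds
    [-9.5, 0.82, 0.84],
    [-9.7, 0.79, 0.86],
])

# Autoscale (mean 0, variance 1), the usual chemometric preprocessing
Xs = StandardScaler().fit_transform(X)
scores = PCA(n_components=2).fit(Xs).transform(Xs)
print(scores[:, 0])  # PC1 should separate the two clusters
```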
Abstract:
Metric features and modular and laminar distributions of intrinsic projections of area 17 were studied in Cebus apella. Anterogradely and retrogradely labeled cell appendages were obtained using both saturated pellets and iontophoretic injections of biocytin into the operculum. Laminar and modular distributions of the labeled processes were analyzed using Nissl counterstaining, cytochrome oxidase histochemistry, and/or NADPH-diaphorase histochemistry. We distinguished three labeled cell types: pyramidal, star pyramidal, and stellate cells located in the supragranular cortical layers (principally in layers IIIa, IIIbα, IIIbβ and IIIc). Three distinct axon terminal morphologies were found (types Ia, Ib and II), located in the granular and supragranular layers. Both complete and partial segregation of group I axon terminals relative to the limits of the blobs of V1 were found. The results are compatible with recent evidence of incomplete segregation of visual information flow in V1 of Old and New World primates.
Abstract:
The Common Reflection Surface stacking method (CRS stacking) simulates zero-offset (ZO) sections from multi-coverage data. For 2D media, the CRS stacking operator depends on three parameters: the emergence angle of the zero-offset central ray (β0), the radius of curvature of the normal-incidence-point wave (RNIP), and the radius of curvature of the normal wave (RN). The crucial problem in implementing the CRS stacking method is determining, from the seismic data, the three optimal parameters associated with each sampling point of the ZO section to be simulated. In the present work, a new processing sequence was developed for simulating ZO sections by means of the CRS stacking method. In this new algorithm, the three optimal parameters that define the CRS stacking operator are determined in three steps: in the first step, two parameters (β°0 and R°NIP) are estimated through a two-dimensional global search in the multi-coverage data. In the second step, the estimated value of β°0 is used to determine the third parameter (R°N) through a one-dimensional global search in the ZO section resulting from the first step. In both steps, the global searches are carried out with the Simulated Annealing (SA) optimization method. In the third step, the three final parameters (β0, RNIP and RN) are determined through a three-dimensional local search in the multi-coverage data using the Variable Metric (VM) optimization method, taking the parameter triplet (β°0, R°NIP, R°N) estimated in the two previous steps as the initial approximation. In order to correctly simulate events with conflicting dips, this new algorithm provides for the determination of two parameter triplets at sampling points of the ZO section where events intersect.
In other words, at points of the ZO section where two seismic events cross, two CRS parameter triplets are determined and used jointly in the simulation of the conflicting-dip events. To evaluate the accuracy and efficiency of the new algorithm, it was applied to synthetic data from two models: one with continuous interfaces and another with a discontinuous interface. The simulated ZO sections have a high signal-to-noise ratio and show a clear definition of the reflected and diffracted events. Comparison of the simulated ZO sections with their counterparts obtained by forward modeling shows a correct simulation of reflections and diffractions. Moreover, comparison of the values of the three optimized parameters with their corresponding exact values computed by forward modeling also reveals a high degree of accuracy. Using the hyperbolic traveltime approximation, but under the condition RNIP = RN, a new algorithm was developed for the simulation of ZO sections containing predominantly diffracted wavefields. Similarly to the CRS stacking algorithm, this algorithm, called Common Diffraction Surface (CDS) stacking, also uses the SA and VM optimization methods to determine the optimal parameter pair (β0, RNIP) that defines the best CDS stacking operator. In the first step, the SA optimization method is used to determine the initial parameters β°0 and R°NIP using a large-aperture stacking operator. In the second step, using the estimated values of β°0 and R°NIP, the estimate of the parameter RNIP is improved by applying the VM algorithm to the ZO section resulting from the first step. In the third step, the best values of β°0 and R°NIP are determined by applying the VM algorithm to the multi-coverage data. It is worth noting that the apparent repetition of processes has the effect of progressively attenuating the reflected events.
The application of the CDS stacking algorithm to synthetic data containing reflected and diffracted wavefields produces, as its main result, a simulated ZO section with clearly defined diffracted events. As a direct application of this result to seismic data interpretation, post-stack depth migration of the simulated ZO section produces a section with the correct location of the diffraction points associated with the model's discontinuities.
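The two-stage search strategy (a global Simulated Annealing pass followed by local refinement with a variable-metric, i.e. quasi-Newton/BFGS-family, method) can be sketched on a toy objective. The function below is a stand-in for the negative coherence (semblance) measured on the multi-coverage data, with invented "exact" parameter values; it is not the real CRS operator.

```python
# Hedged sketch of the SA -> variable-metric search described above,
# on a toy objective with a few local minima. TRUE_BETA0 / TRUE_RNIP
# are hypothetical "exact" parameters, not study values.
import numpy as np
from scipy.optimize import dual_annealing, minimize

TRUE_BETA0, TRUE_RNIP = 0.3, 1500.0

def neg_coherence(p):
    beta0, rnip = p
    # toy misfit with oscillatory local minima, standing in for -semblance
    return ((beta0 - TRUE_BETA0) ** 2
            + ((rnip - TRUE_RNIP) / 1000.0) ** 2
            + 0.05 * np.sin(20 * beta0) ** 2)

# Stage 1: global search (Simulated Annealing)
sa = dual_annealing(neg_coherence,
                    bounds=[(-1.0, 1.0), (100.0, 5000.0)],
                    seed=0, maxiter=200)

# Stage 2: local refinement (variable-metric / BFGS family)
vm = minimize(neg_coherence, sa.x, method="BFGS")
print(vm.x)  # should land close to (0.3, 1500.0)
```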
Abstract:
We used psychophysical tests to evaluate spatial vision in 15 subjects with a clinical history of chronic alcoholism by measuring luminance contrast sensitivity and color discrimination. The subjects first underwent a clinical interview and an ophthalmological exam. They then performed psychophysical tests measuring spatial contrast thresholds, using sine-wave gratings of different spatial frequencies and contrasts, and chromatic discrimination thresholds, using the Mollon-Reffin test. For the analysis, subjects were divided into three groups according to age and compared with age-matched controls. Ten subjects had some degree of color vision loss, which was quite severe in seven cases. All subjects had normal luminance contrast sensitivity. The results suggest that color vision changes related to chronic alcoholism can occur in the absence of impairment of spatial luminance contrast sensitivity and are thus an important aspect to be considered in the clinical evaluation of this condition.
Abstract:
This mixed methods concurrent triangulation design study was predicated upon two models that advocate a connection between teaching presence and perceived learning: the Community of Inquiry Model of Online Learning developed by Garrison, Anderson, and Archer (2000), and the Online Interaction Learning Model of Benbunan-Fich, Hiltz, and Harasim (2005). The objective was to learn how teaching presence impacted students' perceptions of learning and sense of community in intensive online distance education courses developed and taught by instructors at a regional comprehensive university. In the quantitative phase, online surveys collected relevant data from participating students (N = 397) and selected instructional faculty (N = 32) during the second week of a three-week Winter Term. Student information included demographics such as age, gender, employment status, and distance from campus; perceptions of teaching presence; sense of community; perceived learning; course length; and course type. The student data showed positive relationships between teaching presence, perceived learning, and sense of community. The instructor data showed similar positive relationships, with no significant differences when the student and instructor data were compared. The qualitative phase consisted of interviews with 12 instructors who had completed the online survey and replied to all of the open-response questions. The two phases were integrated using matrix generation, and the analysis allowed for conclusions regarding teaching presence, perceived learning, and sense of community. The findings were equivocal with regard to satisfaction with course length and the relative importance of the teaching presence components. A model was provided depicting relationships between and among teaching presence components, perceived learning, and sense of community in intensive online courses.
Abstract:
The decreasing number of women graduating in the Science, Technology, Engineering and Mathematics (STEM) fields continues to be a major concern. Despite national support in the form of grants provided by the National Science Foundation and the National Center for Information and Technology, and legislation such as the Deficit Reduction Act of 2005 that encourages women to enter the STEM fields, the number of women actually graduating in these fields is surprisingly low. This research study focuses on a robotics competition and its ability to engage female adolescents in STEM curricula. Data have been collected to help explain why young women are reticent to take technology or engineering courses in high school and college. Factors that have been described include attitudes, parental support, social aspects, peer pressure, and lack of role models. Often these courses were thought to have masculine and “nerdy” overtones; they usually had majority-male enrollments and appeared to be very competitive. With more female adolescents engaging in this type of competitive atmosphere, this study gathered information to discover what about the competition appealed to these young women. Focus groups were used to gather information from adolescent females participating in the First Lego League (FLL) and CEENBoT competitions. What enticed them to participate in a curriculum that, as the data demonstrated, many of their peers avoided? FLL and CEENBoT are robotics programs based on curricula taught in afterschool programs in non-formal environments, culminating in a very large robotics competition. My research questions included: What factors encouraged participants to take part in the robotics competition? What was the original enticement of the FLL and CEENBoT programs? What will make participants want to come back, and what are the participants' plans for the future?
My research mirrored previous findings: lack of role models, the need for parental support, social stigma, and peer pressure are still major factors that determine whether adolescent females seek out STEM activities. An interesting finding, and an exception to previous work, was that these female adolescents enjoyed the challenge of the competition. The informal learning environments encouraged an atmosphere of social engagement and cooperative learning. Many of the volunteers who led the afterschool programs were women (role models), and a majority of parents showed support by accommodating an afterschool schedule. The young women engaged in the competition noted that it was a friendly competition, but they were all there to win. All who participated in the competition had a similar learning environment: competitive but cooperative. Further research is needed to determine whether it is the learning environment that lures adolescent females to the program and entices them to continue in the STEM fields, or the competitive aspect of the culminating activity. Advisors: James King and Allen Steckelberg
Abstract:
Purpose: There is no consensus on the optimal method to measure delivered dialysis dose in patients with acute kidney injury (AKI). The use of direct dialysate-side quantification of dose, in preference to formal blood-based urea kinetic modeling and simplified blood urea nitrogen (BUN) methods, has been recommended for dose assessment in critically ill patients with AKI. We evaluated six different blood-side and dialysate-side methods for dose quantification. Methods: We examined data from 52 critically ill patients with AKI requiring dialysis. All patients were treated with pre-dilution CVVHDF and regional citrate anticoagulation. Delivered dose was calculated using blood-side and dialysate-side kinetics. Filter function was assessed during the entire course of therapy by calculating BUN to dialysis fluid urea nitrogen (FUN) ratios every 12 hours. Results: Median daily treatment time was 1,413 min (1,260-1,440). The median observed effluent volume per treatment was 2,355 mL/h (2,060-2,863) (p<0.001). The urea mass removal rate was 13.0 +/- 7.6 mg/min. Both EKR (r(2)=0.250; p<0.001) and K-D (r(2)=0.409; p<0.001) showed a good correlation with actual solute removal. EKR and K-D presented a decline in their values that was related to the decrease in filter function assessed by the FUN/BUN ratio. Conclusions: Effluent rate (ml/kg/h) can only empirically provide an estimate of the delivered dose in CRRT. For clinical practice, we recommend that the delivered dose be measured and expressed as K-D. EKR also constitutes a good method for dose comparisons over time and across modalities.
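As a hedged arithmetic sketch of dialysate-side quantification, the urea nitrogen mass removal rate and an EKR-style clearance can be computed from effluent volume and concentrations. The concentrations below are invented for illustration, not patient data; only the effluent volume echoes the median reported above.

```python
# Hedged sketch of dialysate-side dose arithmetic. FUN and BUN values
# are hypothetical; only the effluent volume matches the reported median.

effluent_ml_per_h = 2355.0   # median effluent volume in the study
fun_mg_per_dl = 45.0         # hypothetical effluent urea nitrogen (FUN)
bun_mg_per_dl = 60.0         # hypothetical time-averaged blood urea nitrogen

# mass removal rate: flow (dL/min) x concentration (mg/dL)
effluent_dl_per_min = effluent_ml_per_h / 60.0 / 100.0
removal_mg_per_min = effluent_dl_per_min * fun_mg_per_dl
print(f"urea N removal: {removal_mg_per_min:.1f} mg/min")

# EKR-style clearance: removal rate normalized by blood concentration
ekr_ml_per_min = removal_mg_per_min / (bun_mg_per_dl / 100.0)  # mg/min / (mg/mL)
print(f"EKR: {ekr_ml_per_min:.1f} mL/min")

# filter-function surrogate tracked every 12 h in the study
print(f"FUN/BUN ratio: {fun_mg_per_dl / bun_mg_per_dl:.2f}")
```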
Abstract:
This study aimed to verify the influence of transport in open or closed compartments (0 h), followed by two resting periods (1 and 3 h) before the slaughter process, on cortisol levels as an indicator of stress. At the slaughterhouse, blood samples were taken from 86 lambs after transport and before slaughter for plasma cortisol analysis. The method of transport influenced the cortisol concentration (0 h; P < 0.01): the animals transported in the closed compartment had a lower level (28.97 ng ml(-1)) than the animals transported in the open compartment (35.49 ng ml(-1)). After the resting period in the slaughterhouse, there was a decline in the plasma cortisol concentration, with the animals subjected to 3 h of rest presenting a lower average cortisol value (24.14 ng ml(-1); P < 0.05) than animals subjected to 1 h of rest (29.95 ng ml(-1)). It can be inferred that the lambs that remained 3 h in standby before slaughter had more time to recover from the stress of transportation than those that waited just 1 h. Visual access to the external environment during transport is a stressful factor that changes the plasma cortisol level, and the resting period before slaughter was effective in lowering stress, reducing plasma cortisol in the lambs.
Abstract:
Objectives: To analyse the perspectives of clinical research stakeholders concerning post-trial access to study medication. Methods: Questionnaires and informed consent forms were sent by e-mail to 599 ethics committee (EC) members, 290 clinical investigators (HIV/AIDS and diabetes) and 53 sponsors in Brazil. Investigators were also asked to submit the questionnaire to their research patients. Two reminders were sent to participants. Results: The response rate was 21%, 20% and 45% in the EC, investigator and sponsor groups, respectively. 54 patients answered the questionnaire through their doctors. The least informative item in the consent form was how to obtain the study medication after the trial. If a benefit were demonstrated in the study, 60% of research participants and 35% of EC members answered that all patients should continue receiving study medication after the trial; 43% of investigators believed the medication should be given to participants, and 40% that it should go to subjects who participated in and benefited from treatment. For 50% of the sponsors, study medication should be assured to participants who had benefited from treatment. The majority of respondents answered that medication should be provided free by sponsors; investigators and sponsors believed the medication should be provided until available in the public health sector; EC members said that the patient should keep the benefit; patients answered that benefits should be assured for life. Conclusions: Owing to the study's limitations, the results cannot be generalised; however, the data can contribute to the discussion of this complex topic by presenting the views of stakeholders in clinical research in Brazil.
Abstract:
Current scientific applications produce large amounts of data, and the processing, handling and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have employed techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and make predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that this new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
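The predict-then-optimize idea can be sketched minimally: inspect a simple property of the access series online, pick a predictor accordingly, and use the forecast to drive a data-access decision. This is an illustration of the strategy under invented data and thresholds, not the paper's actual classifier or model set.

```python
# Hedged sketch: treat per-window data-access volume as a time series,
# classify it by a simple property (trend), and choose a predictor.
# Series, threshold, and decisions are all hypothetical.
import numpy as np

def predict_next(series):
    """Pick a predictor based on a simple series property (trend vs. stable)."""
    x = np.asarray(series, dtype=float)
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    if abs(slope) > 0.1:                  # trending: extrapolate linearly
        return slope * len(x) + intercept
    return x[-3:].mean()                  # stable: short moving average

# hypothetical MB read per time window by two applications
trending = [10, 12, 14, 16, 18, 20]
stable = [8, 9, 8, 9, 8, 9]

print(predict_next(trending))  # ~22: e.g., schedule a larger prefetch window
print(predict_next(stable))    # ~8.7: e.g., keep current replica placement
```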
Abstract:
Abstract Background: With the development of DNA hybridization microarray technologies, it is now possible to simultaneously assess the expression levels of thousands to tens of thousands of genes. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Due to technical biases, normalization of the intensity levels is a prerequisite to performing further statistical analyses. Therefore, choosing a suitable approach for normalization can be critical and deserves judicious consideration. Results: Here, we considered three commonly used normalization approaches, namely Loess, Splines and Wavelets, and two non-parametric regression methods that had not yet been used for normalization, namely kernel smoothing and Support Vector Regression. The results obtained were compared using artificial microarray data and benchmark studies. The results indicate that Support Vector Regression is the most robust to outliers and that kernel smoothing is the worst normalization technique, while no practical differences were observed between Loess, Splines and Wavelets. Conclusion: In light of our results, Support Vector Regression is favored for microarray normalization owing to its robustness in estimating the normalization curve.
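Regression-based normalization of this kind can be sketched as fitting the intensity-dependent bias of log-ratios (M) against mean log-intensity (A) and subtracting the fit. The sketch below uses simulated data and scikit-learn's SVR with assumed hyperparameters; it illustrates the idea, not the study's exact protocol.

```python
# Hedged sketch of SVR-based normalization: estimate the intensity-
# dependent bias of log-ratios (M) as a function of mean log-intensity
# (A), then subtract it. Data and hyperparameters are invented.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
A = rng.uniform(6, 14, 300)                 # mean log2 intensity per spot
bias = 0.4 * np.sin(A / 2.0)                # synthetic intensity-dependent bias
M = bias + rng.normal(0.0, 0.1, A.size)     # log2 ratios around the bias

svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(A.reshape(-1, 1), M)
M_norm = M - svr.predict(A.reshape(-1, 1))  # normalized log-ratios

print(abs(M.mean()), abs(M_norm.mean()))    # systematic bias shrinks toward zero
```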
Abstract:
Abstract Background: Spotted cDNA microarrays generally employ co-hybridization of fluorescently labeled RNA targets to produce gene expression ratios for subsequent analysis. Direct comparison of two RNA samples on the same microarray provides the highest level of accuracy; however, due to the number of combinatorial pair-wise comparisons, the direct method is impractical for studies including a large number of individual samples (e.g., tumor classification studies). For such studies, indirect comparisons using a common reference standard have been the preferred method. Here we evaluated the precision and accuracy of reconstructed ratios from three indirect methods relative to ratios obtained from direct hybridizations, herein considered the gold standard. Results: We performed hybridizations using a fixed amount of Cy3-labeled reference oligonucleotide (RefOligo) against distinct Cy5-labeled targets from prostate, breast and kidney tumor samples. Reconstructed ratios between all tissue pairs were derived from the ratios between each tissue sample and RefOligo. Reconstructed ratios were compared to (i) ratios obtained in parallel from direct pair-wise hybridizations of tissue samples, and (ii) reconstructed ratios derived from hybridization of each tissue against a reference RNA pool (RefPool). To evaluate the effect of the external references, reconstructed ratios were also calculated directly from the intensity values of single-channel (One-Color) measurements derived from the tissue sample data collected in the RefOligo experiments. We show that the average coefficients of variation of ratios between intra- and inter-slide replicates derived from RefOligo, RefPool and One-Color were similar to one another and 2- to 4-fold higher than those of ratios obtained in direct hybridizations. Correlation coefficients calculated for all three tissue comparisons were also similar.
In addition, the performance of all indirect methods, in terms of their ability to identify genes deemed differentially expressed on the basis of direct hybridizations, as well as their false-positive and false-negative rates, was found to be comparable. Conclusion: RefOligo produces ratios as precise and accurate as ratios reconstructed from an RNA pool, thus representing a reliable alternative in reference-based hybridization experiments. In addition, One-Color measurements alone can reconstruct expression ratios without loss of precision or accuracy. We conclude that both methods are adequate options in large-scale projects, where the amount of a common reference RNA pool is usually restrictive.
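The arithmetic behind ratio reconstruction is simple: in log2 space the common reference cancels, so the indirect tissue-vs-tissue ratio equals the direct one in the noise-free case. A minimal sketch with invented intensities:

```python
# Hedged sketch of reference-based ratio reconstruction. Intensities
# are invented; real data add dye and slide noise, which is why the
# study compared precision and accuracy across designs.
import numpy as np

prostate = np.array([500.0, 1200.0, 80.0])   # hypothetical Cy5 intensities
kidney = np.array([250.0, 300.0, 160.0])
ref_oligo = np.array([400.0, 400.0, 400.0])  # common Cy3 reference

# two indirect hybridizations: each tissue against the reference
m_prostate = np.log2(prostate / ref_oligo)
m_kidney = np.log2(kidney / ref_oligo)

# reconstructed prostate-vs-kidney ratio: the reference cancels out
m_reconstructed = m_prostate - m_kidney

# a direct hybridization of the same pair gives the same log-ratio
m_direct = np.log2(prostate / kidney)
print(np.allclose(m_reconstructed, m_direct))  # True in the noise-free case
```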
Abstract:
Abstract Background: Direct smear examination with Ziehl-Neelsen (ZN) staining for the diagnosis of pulmonary tuberculosis (PTB) is cheap and easy to use, but its low sensitivity is a major drawback, particularly in HIV-seropositive patients. As such, new tools for laboratory diagnosis are urgently needed to improve the case detection rate, especially in regions with a high prevalence of TB and HIV. Objective: To evaluate the performance of two in-house PCR (polymerase chain reaction) assays, a PCR dot-blot methodology (PCR dot-blot) and PCR with agarose gel electrophoresis (PCR-AG), for the diagnosis of pulmonary tuberculosis in HIV-seropositive and HIV-seronegative patients. Methods: A prospective study was conducted (from May 2003 to May 2004) in a TB/HIV reference hospital. Sputum specimens from 277 PTB suspects were tested by acid-fast bacilli (AFB) smear, culture and the in-house PCR assays (PCR dot-blot and PCR-AG), and their performances were evaluated. Positive cultures combined with the clinical definition of pulmonary TB were employed as the gold standard. Results: The overall prevalence of PTB was 46% (128/277); in HIV+ patients, the prevalence was 54.0% (40/74). The sensitivity and specificity of the PCR dot-blot were 74% (CI 95%: 66.1%-81.2%) and 85% (CI 95%: 78.8%-90.3%), and those of the PCR-AG were 43% (CI 95%: 34.5%-51.6%) and 76% (CI 95%: 69.2%-82.8%), respectively. For HIV-seropositive and HIV-seronegative samples, the sensitivities of the PCR dot-blot (72% vs 75%; p = 0.46) and PCR-AG (42% vs 43%; p = 0.54) were similar. Among HIV-seronegative patients and PTB suspects, ROC analysis yielded the following areas: AFB smear (0.837), culture (0.926), PCR dot-blot (0.801) and PCR-AG (0.599). In HIV-seropositive patients, these area values were 0.713, 0.900, 0.789 and 0.595, respectively. Conclusion: The results of this study demonstrate that the in-house PCR dot-blot may be an improvement for ruling out the PTB diagnosis in PTB suspects assisted at hospitals with a high prevalence of TB/HIV.
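The accuracy figures above are computed from a 2x2 table against the gold standard (culture plus the clinical definition). As a hedged sketch, the counts below are back-calculated approximations from the reported prevalence (128/277) and percentages, not the raw study data:

```python
# Hedged sketch of sensitivity/specificity arithmetic for the PCR
# dot-blot. Counts are approximations reverse-engineered from the
# reported prevalence and percentages, not the study's raw table.
tp, fn = 95, 33    # assay results among the 128 gold-standard PTB cases
tn, fp = 127, 22   # among the 149 suspects without PTB

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.0%}")  # 74%
print(f"specificity = {specificity:.0%}")  # 85%
```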