Abstract:
Background: It is as yet unclear whether there are differences between using electronic key feature problems (KFPs) or electronic case-based multiple choice questions (cbMCQs) for the assessment of clinical decision making. Summary of Work: Fifth-year medical students completed clerkships which ended with a summative exam. Knowledge was assessed per exam by 6-9 KFPs, 9-20 cbMCQs and 9-28 MC questions. Each KFP consisted of a case vignette and three key features (KF) using a "long menu" question format. We sought students' perceptions of the KFPs and cbMCQs in focus groups (n of students = 39). Furthermore, statistical data of 11 exams (n of students = 377) concerning the KFPs and (cb)MCQs were compared. Summary of Results: The analysis of the focus groups resulted in four themes reflecting students' perceptions of KFPs and their comparison with (cb)MCQs: KFPs were perceived as (i) more realistic, (ii) more difficult, and (iii) more motivating for the intense study of clinical reasoning than (cb)MCQs, and (iv) showed overall good acceptance when certain preconditions are taken into account. The statistical analysis revealed no difference in difficulty; however, KFPs showed higher discrimination and reliability (G-coefficient), even when corrected for testing times. Correlation of the different exam parts was intermediate. Conclusions: Students perceived the KFPs as more motivating for the study of clinical reasoning. Statistically, KFPs showed higher discrimination and higher reliability than cbMCQs. Take-home messages: Including KFPs with long-menu questions in summative clerkship exams seems to offer positive educational effects.
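The statistical comparison above rests on standard item-analysis quantities. As a rough sketch (hypothetical score matrix, not the study's data or code), item difficulty and corrected item-total (point-biserial) discrimination can be computed as follows; the G-coefficient would require a full generalizability analysis, omitted here:

```python
import numpy as np

def item_statistics(scores):
    """Item difficulty and corrected item-total (point-biserial) discrimination.

    scores: (n_students, n_items) array of 0/1 item scores.
    """
    scores = np.asarray(scores, dtype=float)
    difficulty = scores.mean(axis=0)  # proportion of students answering correctly
    discrimination = np.empty(scores.shape[1])
    for j in range(scores.shape[1]):
        rest = scores.sum(axis=1) - scores[:, j]  # total score without item j
        discrimination[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return difficulty, discrimination

# Hypothetical scores: 6 students x 4 items
demo = np.array([[1, 1, 0, 1],
                 [1, 0, 0, 1],
                 [0, 1, 1, 0],
                 [1, 1, 1, 1],
                 [0, 0, 0, 1],
                 [1, 1, 0, 0]])
print(item_statistics(demo))
```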
Abstract:
Various airborne aldehydes and ketones (i.e., airborne carbonyls) present in outdoor, indoor, and personal air pose a risk to human health at present environmental concentrations. To date, there is no adequate, simple-to-use sampler for monitoring carbonyls at parts-per-billion concentrations in personal air. The Passive Aldehydes and Ketones Sampler (PAKS), originally developed for this purpose, has been found to be unreliable in a number of relatively recent field studies. The PAKS method uses dansylhydrazine (DNSH) as the derivatization agent to produce aldehyde derivatives that are analyzed by HPLC with fluorescence detection. The reasons for the poor performance of the PAKS are not known, but it is hypothesized that the chemical derivatization conditions and reaction kinetics, combined with a relatively low sampling rate, may play a role. This study evaluated the effect of absorption and emission wavelengths, pH of the DNSH coating solution, extraction solvent, and time post-extraction on the yield and stability of the formaldehyde, acetaldehyde, and acrolein DNSH derivatives. The results suggest the following optimum conditions for the analysis of DNS-hydrazones. The excitation and emission wavelengths for HPLC analysis should be 250 nm and 500 nm, respectively. The optimal pH of the coating solution appears to be pH 2, because it improves the formation of di-derivatized acrolein DNS-hydrazones without affecting the response of the formaldehyde and acetaldehyde derivatives. Acetonitrile is the preferable extraction solvent, while the optimal time to analyze the aldehyde derivatives is 72 hours post-extraction.
Abstract:
The Houston region is home to arguably the largest petrochemical and refining complex in the world. The effluent of this complex includes many potentially hazardous compounds. Study of some of these compounds has led to recognition that a number of known and probable carcinogens are present at elevated levels in ambient air. Two of these, benzene and 1,3-butadiene, have been found at concentrations which may pose a health risk for residents of Houston. Recent popular journalism and publications by local research institutions have increased public interest in Houston's air quality. Much of the literature has been critical of local regulatory agencies' oversight of industrial pollution. A number of citizens in the region have begun to volunteer with air quality advocacy groups in the testing of community air. Inexpensive methods exist for monitoring ambient concentrations of ozone, particulate matter, and airborne toxics. This study is an evaluation of a technique that has been successfully applied to airborne toxics. This technique, solid phase microextraction (SPME), has been used to measure airborne volatile organic hydrocarbons at community-level concentrations. It has yielded accurate and rapid concentration estimates at a relatively low cost per sample. Examples of its application to the measurement of airborne benzene exist in the literature; none have been found for airborne 1,3-butadiene. These compounds were selected to evaluate SPME as a community-deployed technique, to replicate previous application to benzene, to expand application to 1,3-butadiene, and because of the salience of these compounds in this community. This study demonstrates that SPME is a useful technique for quantification of 1,3-butadiene at concentrations observed in Houston. Laboratory background levels precluded recommendation of the technique for benzene. One type of SPME fiber, 85 μm Carboxen/PDMS, was found to be a sensitive sampling device for 1,3-butadiene under temperature and humidity conditions common in Houston. This study indicates that these variables affect instrument response, which suggests the necessity of calibration within specific ranges of these variables. While deployment of this technique was less expensive than other methods of quantifying 1,3-butadiene, the complexity of calibration may exclude an SPME method from broad deployment by community groups.
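The reported dependence of instrument response on temperature and humidity implies condition-specific calibration. A minimal sketch of that idea, with entirely hypothetical calibration data and a simple linear response model (not the study's calibration procedure):

```python
import numpy as np

# Hypothetical calibration runs: concentration (ppb), temperature (C),
# relative humidity (%), and measured detector response (arbitrary units).
conc = np.array([1.0, 2.0, 5.0, 1.0, 2.0, 5.0, 1.0, 2.0, 5.0])
temp = np.array([20., 20., 20., 30., 30., 30., 35., 35., 35.])
rh   = np.array([40., 40., 40., 60., 60., 60., 80., 80., 80.])
resp = np.array([10.1, 19.8, 50.2, 11.4, 22.0, 54.9, 12.2, 23.5, 58.8])

# Least-squares fit: response ~ b0 + b1*conc + b2*temp + b3*rh
X = np.column_stack([np.ones_like(conc), conc, temp, rh])
beta, *_ = np.linalg.lstsq(X, resp, rcond=None)

def estimate_conc(response, temperature, humidity, b=beta):
    """Invert the fitted model to estimate concentration from a field
    reading taken at known temperature and humidity."""
    return (response - b[0] - b[2] * temperature - b[3] * humidity) / b[1]

print(estimate_conc(30.0, 32.0, 70.0))
```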
Abstract:
This investigation compares two different methodologies for calculating the national cost of epilepsy: the provider-based survey method (PBSM) and the patient-based medical charts and billing method (PBMC&BM). The PBSM uses the National Hospital Discharge Survey (NHDS), the National Hospital Ambulatory Medical Care Survey (NHAMCS) and the National Ambulatory Medical Care Survey (NAMCS) as the sources of utilization. The PBMC&BM uses patient data, charts and billings, to determine utilization rates for specific components of hospital, physician and drug prescriptions. The 1995 hospital and physician cost of epilepsy is estimated to be $722 million using the PBSM and $1,058 million using the PBMC&BM. The difference of $336 million results from a $136 million difference in utilization and a $200 million difference in unit cost. Utilization: the utilization difference of $136 million is composed of an inpatient variation of $129 million ($100 million hospital and $29 million physician) and an ambulatory variation of $7 million. The $100 million hospital variance is attributed to the inclusion of febrile seizures in the PBSM (−$79 million) and the exclusion of admissions attributable to epilepsy ($179 million). The former suggests that the diagnostic codes used in the NHDS may not properly match the current definition of epilepsy as used in the PBMC&BM. The latter suggests NHDS errors in the attribution of an admission to the principal diagnosis. The $29 million variance in inpatient physician utilization is the result of different per-day-of-care physician visit rates: 1.3 for the PBMC&BM versus 1.0 for the PBSM. The absence of visit frequency measures in the NHDS affects the internal validity of the PBSM estimate and requires the investigator to make conservative assumptions. The remaining ambulatory resource utilization variance is $7 million; of this amount, $22 million is the result of an underestimate of ancillaries in the NHAMCS and NAMCS extrapolations using the patient visit weight. Unit cost: the resource cost variation is $200 million ($22 million inpatient and $178 million ambulatory). The inpatient variation of $22 million is composed of $19 million in hospital per-day rates, due to a higher cost per day in the PBMC&BM, and $3 million in physician visit rates, due to a higher cost per visit in the PBMC&BM. The ambulatory cost variance is $178 million, composed of higher per-physician-visit costs of $97 million and higher per-ancillary costs of $81 million. Both are attributed to the PBMC&BM's precise identification of resource utilization, which permits accurate valuation. Conclusion: both methods have specific limitations. The PBSM's strengths are its sample designs, which lead to nationally representative estimates and permit statistical point and confidence-interval estimation for the nation for certain variables under investigation. However, the findings of this investigation suggest that the internal validity of the derived estimates is questionable and that important additional information required to precisely estimate the cost of an illness is absent. The PBMC&BM is a superior method for identifying the resources utilized in the physician encounter with the patient, permitting more accurate valuation. However, the PBMC&BM does not have the statistical reliability of the PBSM; it relies on synthesized national prevalence estimates to extrapolate a national cost estimate. While precision is important, the ability to generalize to the nation may be limited due to the small number of patients that are followed.
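The reconciliation of the two estimates can be summarized in a single decomposition, using the figures reported above:

```latex
\begin{align*}
\$1{,}058\,\text{M} - \$722\,\text{M} &= \$336\,\text{M}
  && \text{(PBMC\&BM vs.\ PBSM)}\\
\$336\,\text{M} &= \$136\,\text{M} + \$200\,\text{M}
  && \text{(utilization + unit cost)}\\
\$136\,\text{M} &= \underbrace{\$100\,\text{M} + \$29\,\text{M}}_{\text{inpatient: hospital + physician}}
  + \underbrace{\$7\,\text{M}}_{\text{ambulatory}}\\
\$200\,\text{M} &= \underbrace{\$22\,\text{M}}_{\text{inpatient}}
  + \underbrace{\$97\,\text{M} + \$81\,\text{M}}_{\text{ambulatory: visits + ancillaries}}
\end{align*}
```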
Abstract:
This study quantifies CO2 capture by the native totora (Schoenoplectus californicus) flora in the Villa María wetlands, on the Pacific coast of Peru. The representative area occupied by this species was delimited to avoid heterogeneous zones, and the zone was gridded by tracing lines across the whole area, along which random 1 m2 samples of the above-ground and root biomass were taken. The carbon content in the plant structure was determined by the Walkley and Black method, and carbon dioxide capture was estimated using the carbon-to-carbon-dioxide conversion factor. A value of 84.05 t CO2/ha of carbon dioxide captured by totora (above-ground parts + roots) was obtained, confirming that, among other important services to the environment and to humankind, these wetlands play a crucial role in the capture of atmospheric CO2 in the present scenario of global climate change.
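The carbon-to-carbon-dioxide conversion factor referred to above is the stoichiometric ratio 44/12 ≈ 3.67 (molecular mass of CO2 over atomic mass of C). A minimal sketch with hypothetical biomass figures (the study's field measurements are not reproduced here):

```python
# Carbon stock to CO2-equivalent conversion. All inputs are hypothetical.
CO2_PER_C = 44.0 / 12.0  # ~3.67, molecular mass ratio CO2 : C

def co2_captured(dry_biomass_t_per_ha, carbon_fraction):
    """CO2 captured (t CO2/ha) from dry biomass and its carbon fraction."""
    carbon_stock = dry_biomass_t_per_ha * carbon_fraction  # t C/ha
    return carbon_stock * CO2_PER_C

# Example: 50 t/ha dry biomass (shoots + roots) at 45% carbon content
print(co2_captured(50.0, 0.45))  # ~82.5 t CO2/ha, the same order as 84.05
```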
Abstract:
A new microtiter-plate dilution method was applied during the expedition ANTARKTIS-XI/2 of RV Polarstern to determine the distribution of copiotrophic and oligotrophic bacteria in the water columns at polar fronts. Twofold serial dilutions were performed with an eight-channel Electrapette in 96-well plates by mixing 150 µl of seawater with 150 µl of copiotrophic or oligotrophic Trypticase broth, three times per well. After incubation for about 6 months at 2 °C, turbidities were measured with an eight-channel photometer at 405 nm, and combinations of positive test results for three consecutive dilutions were chosen and compared with a Most Probable Number (MPN) table calculated for 8 replicates and twofold serial dilutions. Densities of 12 to 661 cells/ml for copiotrophs and 1 to 39 cells/ml for oligotrophs were found. Colony-forming units on copiotrophic Trypticase agar were between 6 and 847 cells/ml, in the same range as determined with the MPN method.
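For readers unfamiliar with MPN tables, the underlying maximum-likelihood estimate can be reproduced numerically. The sketch below assumes 8 replicate wells per dilution in a twofold series, with illustrative per-well sample volumes (the protocol's exact transfer volumes are not restated here); it solves the MPN score equation by bisection:

```python
import math

def mpn(positive, replicates, volumes_ml):
    """Most Probable Number (cells/ml) by maximum likelihood.

    positive:   positive wells observed at each dilution
    replicates: wells per dilution (8 in the protocol above)
    volumes_ml: seawater volume per well at each dilution, in ml
    Needs at least one positive and one negative well to converge.
    """
    def score(c):  # derivative of the log-likelihood w.r.t. concentration c
        s = 0.0
        for p, n, v in zip(positive, replicates, volumes_ml):
            if p > 0:
                s += p * v * math.exp(-c * v) / (1.0 - math.exp(-c * v))
            s -= (n - p) * v
        return s

    lo, hi = 1e-9, 1e9  # bracket the root of the score equation
    for _ in range(200):
        mid = math.sqrt(lo * hi)  # bisection on a log scale
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Illustrative twofold series: 0.075 ml of seawater per well, halved each step
vols = [0.075 / 2 ** k for k in range(6)]
print(mpn(positive=[8, 8, 5, 2, 0, 0], replicates=[8] * 6, volumes_ml=vols))
```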
Abstract:
The Actively Heated Fiber Optic (AHFO) method is shown to be capable of measuring soil water content several times per hour at 0.25 m spacing along cables multiple kilometers in length. AHFO is based on distributed temperature sensing (DTS) observation of the heating and cooling of a buried fiber-optic cable resulting from an electrical impulse of energy delivered through the steel cable jacket. The results presented were collected from 750 m of cable buried in three 240 m colocated transects at 30, 60, and 90 cm depths in an agricultural field under center-pivot irrigation. The calibration curve relating soil water content to the thermal response of the soil to a heat pulse of 10 W m−1 for 1 min duration was developed in the lab. This calibration was found applicable to the cables at 30 and 60 cm depth, while the cable at 90 cm depth illustrated the challenges that soil heterogeneity presents for this technique. The method was used to map, at high resolution, the variability of soil water content and fluxes induced by the nonuniformity of water application at the surface.
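In the AHFO literature the thermal response is often summarized as the cumulative temperature rise over the heat pulse, T_cum = ∫ΔT dt, which a lab calibration then maps to water content. A sketch of that computation with a synthetic DTS trace and placeholder calibration coefficients (not the paper's lab curve):

```python
import numpy as np

def cumulative_temperature(t_s, dT):
    """T_cum: time-integral of the temperature rise over the heat/cool window."""
    return np.sum(0.5 * (dT[1:] + dT[:-1]) * np.diff(t_s))  # trapezoidal rule

def water_content(t_cum, a=-0.12, b=1.05):
    """Hypothetical calibration theta = a*ln(T_cum) + b; a and b stand in
    for the lab-derived curve and are placeholders, not the paper's values."""
    return a * np.log(t_cum) + b

# Synthetic DTS trace: 1 Hz samples during a 1-min, 10 W/m pulse plus cooling
t = np.arange(0.0, 120.0, 1.0)
dT = np.where(t < 60.0, 2.0 * (1.0 - np.exp(-t / 15.0)),
              2.0 * np.exp(-(t - 60.0) / 20.0))
print(water_content(cumulative_temperature(t, dT)))
```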
Abstract:
In a Finite Element (FE) analysis of elastic solids several items are usually considered, namely, the type and shape of the elements, the number of nodes per element, the node positions, the FE mesh, and the total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. For the improvement criterion different objective functions have been chosen (total potential energy and average quadratic error), while the number of nodes and dofs of the new mesh remain constant and equal to those of the initial FE mesh. In order to find the mesh producing the minimum of the selected objective function, the steepest descent gradient technique has been applied as the optimization algorithm; however, this technique has the drawback of demanding large computational effort. Extensive application of this methodology to different 2-D elasticity problems leads to the conclusion that isometric isostatic meshes (ii-meshes) produce better results than the standard, reasonable initial regular meshes used in practice. This conclusion appears to be independent of the objective function used for comparison. These ii-meshes are obtained by placing FE nodes along the isostatic lines, i.e., curves tangent at each point to the principal direction lines of the elastic problem to be solved, regularly spaced in order to build regular elements. This means ii-meshes are usually obtained by iteration: the elastic analysis is carried out with the initial FE mesh; from the results of this analysis the net of isostatic lines can be drawn and, in a first trial, an ii-mesh can be built. This first ii-mesh can be improved, if necessary, by analyzing the problem again and generating a new, improved ii-mesh after the FE analysis. Typically, two tentative ii-meshes are sufficient to produce good FE results from the elastic analysis. Several examples of this procedure are presented.
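The optimization loop itself is ordinary steepest descent over the coordinates of the movable nodes with fixed connectivity. The sketch below substitutes a simple edge-length-uniformity measure for the paper's objective functions (total potential energy or average quadratic error would require a full FE solver) and uses finite-difference gradients:

```python
import numpy as np

def objective(coords, edges):
    """Placeholder mesh-quality measure: squared spread of edge lengths.
    Stands in for the paper's total potential energy / average quadratic error."""
    lengths = np.linalg.norm(coords[edges[:, 0]] - coords[edges[:, 1]], axis=1)
    return np.sum((lengths - lengths.mean()) ** 2)

def improve_mesh(coords, edges, free, step=0.05, iters=200, h=1e-6):
    """Steepest descent on the positions of the free (interior) nodes."""
    coords = coords.copy()
    for _ in range(iters):
        grad = np.zeros_like(coords)
        for i in free:                      # finite-difference gradient
            for d in range(coords.shape[1]):
                coords[i, d] += h
                f_plus = objective(coords, edges)
                coords[i, d] -= 2 * h
                f_minus = objective(coords, edges)
                coords[i, d] += h
                grad[i, d] = (f_plus - f_minus) / (2 * h)
        coords -= step * grad               # descend; boundary nodes stay fixed
    return coords

# 4 boundary nodes of a unit square plus one off-centre interior node
nodes = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0.7, 0.2]])
edges = np.array([[0, 4], [1, 4], [2, 4], [3, 4]])
print(improve_mesh(nodes, edges, free=[4])[-1])  # interior node drifts to centre
```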
Abstract:
DNA breaks occur during many processes in mammalian cells, including recombination, repair, mutagenesis and apoptosis. Here we report a simple and rapid method for assaying DNA breaks and identifying DNA breaksites. Breaksites are first tagged and amplified by ligation-mediated PCR (LM-PCR), using nested PCR primers to increase the specificity and sensitivity of amplification. Breaksites are then mapped by batch sequencing LM-PCR products. This allows easy identification of multiple breaksites per reaction without tedious fractionation of PCR products by gel electrophoresis or cloning. Breaksite batch mapping requires little starting material and can be used to identify either single- or double-strand breaks.
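Computationally, mapping a breaksite amounts to locating where each sequenced LM-PCR product begins in the reference sequence. A toy sketch with exact seed matching and hypothetical reads (a real pipeline would trim linkers and use an aligner):

```python
import random

def map_breaksites(reference, reads, seed_len=20):
    """Return 0-based breaksite positions: the coordinate in the reference
    at which each sequenced LM-PCR product begins. Exact matching of a
    5'-end seed stands in for real alignment and linker trimming."""
    sites = []
    for read in reads:
        pos = reference.find(read[:seed_len])
        if pos >= 0:
            sites.append(pos)
    return sites

random.seed(0)  # hypothetical 200-bp reference locus
reference = "".join(random.choice("ACGT") for _ in range(200))
reads = [reference[37:90], reference[121:170]]  # products starting at breaks
print(map_breaksites(reference, reads))  # expected: [37, 121]
```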
Abstract:
This note shows the population trend of the magpie (Pica pica) along kilometre 1 (from the coastline, upriver) of the protected landscape of the Millars river mouth (Castelló) for the period 1994-2009 (16 years). The results are based on censuses carried out in the area using the line-transect method, conducted 3 or 4 times per month. The species was first recorded in 1997 and has since shown an upward trend, especially from 2003 onwards and most markedly in the last two years (2008-2009). Its mean abundance has stabilized at 8.3 birds/km. Regarding correlations with meteorology, abundance is lower in colder years than in warm ones. Warm winters could allow higher survival and, in addition, greater availability of resources with which to raise a good number of chicks. The species finds sufficient trophic resources in the area to survive, and an excellent place to escape hunting pressure.
Abstract:
Since the early days of 3D computer vision, techniques to reduce data to a tractable size while preserving the important aspects of the scene have been necessary. Currently, with the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second, this is gaining more relevance. Many applications make use of these sensors and need a preprocessing step to downsample the data in order to either reduce the processing time or improve the data (e.g., reducing noise or enhancing the important features). In this paper, we present a comparison of downsampling techniques based on different principles. Concretely, five downsampling methods are included: a bilinear-based method, a normal-based one, a color-based one, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of the downsampling in a real application, a 3D non-rigid registration is performed with the sampled data. From the experimentation we can conclude that, depending on the purpose of the application, some sampling kernels can drastically improve the results. Bilinear- and GNG-based methods provide homogeneous point clouds, while color-based and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid application, if a color-based sampled point cloud is used, it is possible to properly register two datasets in cases where intensity data are relevant in the model, outperforming the results obtained with a purely homogeneous sampling.
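To make the contrast concrete, the sketch below implements two sampling families in plain NumPy: a homogeneous voxel-grid downsample (in the spirit of the bilinear- and GNG-based outputs) and a feature-weighted random sample that keeps more points where a chosen scalar is large, loosely mimicking the color- and normal-based kernels. The weighting and parameters are illustrative, not the paper's kernels:

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Homogeneous sampling: replace all points in a voxel by their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    out = np.empty((counts.size, 3))
    for d in range(3):
        out[:, d] = np.bincount(inverse, weights=points[:, d]) / counts
    return out

def feature_weighted_sample(points, feature, n_keep, seed=0):
    """Non-homogeneous sampling: keep more points where `feature` is large
    (a stand-in for the color- or normal-based kernels compared above)."""
    rng = np.random.default_rng(seed)
    w = feature / feature.sum()  # feature must be non-negative
    idx = rng.choice(len(points), size=n_keep, replace=False, p=w)
    return points[idx]

pts = np.random.default_rng(1).random((5000, 3))   # toy point cloud
feat = pts[:, 2] + 1e-3                            # denser sampling near z = 1
print(voxel_downsample(pts).shape, feature_weighted_sample(pts, feat, 500).shape)
```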
Abstract:
This leather-bound volume contains a manuscript copy of Charles Morton’s Compendium Physicae copied by Harvard student Obadiah Ayer in 1708. The volume has text and drawings (including one large foldout drawing), and there is an index to the chapters at the end of the volume. Mather Byles (Harvard Class of 1725) also used the book.
Abstract:
Multicellular tumor spheroids (MCTS) are used as organotypic models of normal and solid tumor tissue. Traditional techniques for generating MCTS, such as growth on nonadherent surfaces, in suspension, or on scaffolds, have a number of drawbacks, including the need for manual selection to achieve a homogeneous population and the use of nonphysiological matrix compounds. In this study we describe a mild method for the generation of MCTS, in which individual spheroids form in hanging drops suspended from a microtiter plate. The method has been successfully applied to a broad range of cell lines and shows nearly 100% efficiency (i.e., one spheroid per drop). Using the hepatoma cell line HepG2, the hanging drop method generated well-rounded MCTS with a narrow size distribution (coefficient of variation [CV] 10% to 15%, compared with 40% to 60% for growth on nonadherent surfaces). Structural analysis of spheroids composed of HepG2 cells or of a mammary gland adenocarcinoma cell line, MCF-7, revealed highly organized, three-dimensional, tissue-like structures with an extensive extracellular matrix. The hanging drop method represents an attractive alternative for MCTS production because it is mild, can be applied to a wide variety of cell lines, and can produce spheroids of a homogeneous size without the need for sieving or manual selection. The method has applications in basic studies of physiology and metabolism, tumor biology, toxicology, cellular organization, and the development of bioartificial tissue. (C) 2003 Wiley Periodicals, Inc.
Abstract:
Distortional buckling, unlike the usual lateral-torsional buckling in which the cross-section remains rigid in its own plane, involves distortion of the web of the cross-section. This type of buckling typically occurs in beams with slender webs and stocky flanges. Most published studies assume the web to deform with a cubic shape function. As this assumption may limit the accuracy of the results, a fifth-order polynomial is chosen here for the web displacements. The general line-type finite element model used here has two nodes and a maximum of twelve degrees of freedom per node. The model not only predicts the correct coupled mode but is also capable of handling local buckling of the web.
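A fifth-order web shape function has six coefficients, fixed by six end conditions (for example, displacement, slope, and curvature at each flange-web junction). A sketch of deriving such shape functions numerically, under these generic Hermite-type conditions rather than the element's actual formulation:

```python
import numpy as np

# Fifth-order web displacement w(s) = sum_k a_k s^k, s in [0, 1], with six
# end conditions: value, slope, and curvature at both flange-web junctions.
def condition_matrix():
    rows = []
    for s in (0.0, 1.0):
        rows.append([s ** k for k in range(6)])                               # w(s)
        rows.append([k * s ** (k - 1) if k >= 1 else 0.0 for k in range(6)])  # w'(s)
        rows.append([k * (k - 1) * s ** (k - 2) if k >= 2 else 0.0
                     for k in range(6)])                                      # w''(s)
    return np.array(rows)

# Column i holds the polynomial coefficients of shape function N_i:
# unit value for end condition i, zero for the other five.
shape_coeffs = np.linalg.solve(condition_matrix(), np.eye(6))

s = 0.5  # evaluate the shape function tied to w(0) = 1 at mid-web
print(sum(c * s ** k for k, c in enumerate(shape_coeffs[:, 0])))  # 0.5 by symmetry
```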
Abstract:
This thesis concerns the development and validation of new criteria for the multiaxial fatigue assessment of metallic structural components. In particular, the newly formulated criteria are applicable to metallic components subjected to a wide range of loading configurations: multiaxial loads varying in time, both cyclically and randomly, in the high- and low/medium-cycle fatigue regimes. These criteria constitute a useful tool for evaluating the fatigue strength and fatigue life of metallic structural elements, being simple to implement and requiring rather modest computation times.

The first chapter presents the issues related to multiaxial fatigue, introducing some theoretical aspects useful for describing the fatigue damage mechanism (crack propagation and final fracture) of metallic structural components subjected to time-varying loads. The different approaches available in the literature for the multiaxial fatigue assessment of such components are then presented, with particular attention to the critical plane approach. Finally, the engineering quantities related to the critical plane, used in fatigue design under cyclic multiaxial loading for high and low/medium numbers of loading cycles, are defined.

The second chapter is devoted to the development of a new criterion for evaluating the fatigue strength of metallic structural elements subjected to cyclic multiaxial loading at high numbers of cycles. The criterion is based on the critical plane approach and is formulated in terms of stresses. Its development significantly reworks an earlier formulation proposed by Carpinteri and co-workers in 2011. The first modification concerns the determination of the critical plane orientation: new expressions for the angle linking the orientation of the critical plane to that of the fracture plane are implemented in the algorithm of the criterion. The second modification relates to the definition of the shear stress amplitude, for which a new method, known as the Prismatic Hull (PH) method (by Araújo and co-workers), is implemented in the algorithm. The reliability of the criterion is then verified against numerous experimental data available in the literature.

The third chapter proposes a newly formulated criterion for evaluating the fatigue life of metallic structural elements subjected to cyclic multiaxial loading at low/medium numbers of cycles. The criterion is based on the critical plane approach and is formulated in terms of strains. In particular, the proposed formulation takes its general structure from the high-cycle multiaxial fatigue criterion discussed in the second chapter. Since, in the presence of significant plastic strains (such as those characterizing low/medium-cycle fatigue), the effective Poisson's ratio of the material must be known, three different strategies are employed: the ratio is computed analytically, computed numerically, and taken as a constant value frequently adopted in the literature. The criterion's reliability is then validated against numerous experimental data available in the literature, with numerical results obtained for the different values of the effective Poisson's ratio. Furthermore, in order to account for the significant stress gradients occurring near geometrical discontinuities such as notches, the criterion is also extended to notched structural components. The criterion, reformulated by implementing the control volume concept proposed by Lazzarin and co-workers, is used to estimate the fatigue life of specimens with a severe V-notch, made of grade 5 titanium alloy.

The fourth chapter addresses the development of a new criterion for evaluating the fatigue damage of metallic structural elements subjected to random multiaxial loading at high numbers of cycles. The criterion is based on the critical plane approach and is formulated in the frequency domain. Its development significantly reworks an earlier formulation proposed by Carpinteri and co-workers in 2014; in particular, the modification concerns the determination of the critical plane orientation, with new expressions for the angle linking the orientation of the critical plane to that of the fracture plane implemented in the algorithm of the criterion. Finally, the reliability of the criterion is verified against numerous experimental data available in the literature.
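The Prismatic Hull step mentioned above reduces the shear stress path on the critical plane to a scalar amplitude. A sketch of the usual maximum prismatic hull formulation (largest half-diagonal of the enclosing rectangle over all orientations), applied to a hypothetical out-of-phase load path:

```python
import numpy as np

def prismatic_hull_amplitude(path, n_angles=180):
    """Shear stress amplitude via the maximum prismatic hull idea of
    Araújo and co-workers: over all orientations of a rectangle enclosing
    the load path, take the largest half-diagonal.

    path: (N, 2) history of the shear stress vector on the critical plane.
    """
    best = 0.0
    for theta in np.linspace(0.0, np.pi / 2.0, n_angles, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        u = path @ np.array([c, s])     # projection on rotated axis 1
        v = path @ np.array([-s, c])    # projection on rotated axis 2
        a1 = 0.5 * (u.max() - u.min())  # half-sides of the enclosing rectangle
        a2 = 0.5 * (v.max() - v.min())
        best = max(best, np.hypot(a1, a2))
    return best

# Hypothetical out-of-phase load path: an ellipse in the shear plane (MPa)
t = np.linspace(0.0, 2.0 * np.pi, 400)
path = np.column_stack([100.0 * np.cos(t), 60.0 * np.sin(t)])
print(prismatic_hull_amplitude(path))  # ~116.6 = sqrt(100^2 + 60^2)
```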