955 results for Application level
Abstract:
It is becoming clear that if we are to reduce the rate of medical errors, it will have to be done at the level of the practicing physician. The purpose of this project was to survey physicians in Alabama concerning their perceptions of medical error and to obtain their thoughts on, and preferences for, medical education in the area of medical errors. The information will be used in the development of a physician education program.
Abstract:
Evidence for RNA gain-of-function toxicity has now been provided for an increasing number of human pathologies. Myotonic dystrophies (DM) belong to a class of RNA-dominant diseases that result from RNA repeat expansion toxicity. Specifically, DM of type 1 (DM1) is caused by an expansion of CUG repeats in the 3'UTR of the DMPK protein kinase mRNA, while DM of type 2 (DM2) is linked to an expansion of CCUG repeats in an intron of the ZNF9 transcript (ZNF9 encodes a zinc finger protein). In both pathologies the mutant RNA forms nuclear foci. The mechanisms that underlie the RNA pathogenicity appear to be rather complex and are not yet completely understood. Here, we describe Drosophila models that may help unravel the molecular mechanisms of DM1-associated CUG expansion toxicity. We generated transgenic flies that express inducible repeats of different types (CUG or CAG) and lengths (16, 240 or 480 repeats) and then analyzed transgene localization, RNA expression and toxicity, as assessed by induced lethality and eye neurodegeneration. The only line that expressed a toxic RNA carries a (CTG)240 insertion. Moreover, our analysis shows that its level of expression cannot account for its toxicity. In this line, (CTG)240.4, the expansion is inserted in the first intron of CG9650, a gene encoding a zinc finger protein. Interestingly, CG9650 and (CUG)240.4 expansion RNAs were found in the same nuclear foci. In conclusion, we suggest that the insertion context is the primary determinant of expansion toxicity in Drosophila models. This finding should contribute to the still open debate on the role of the expansions per se in Drosophila and in the human pathogenesis of RNA-dominant diseases.
Abstract:
The isotopic abundance of 85Kr in the atmosphere, currently at the level of 10⁻¹¹, has increased by orders of magnitude since the dawn of the nuclear age. With a half-life of 10.76 years, 85Kr is of great interest as a tracer for environmental samples such as air, groundwater and ice. Atom Trap Trace Analysis (ATTA) is an emerging method for the analysis of rare krypton isotopes at isotopic abundances as low as 10⁻¹⁴ using krypton gas samples of a few microliters. Both the reliability and the reproducibility of the method are examined in the present study by an inter-comparison among different instruments. The 85Kr/Kr ratios of 12 samples, in the range of 10⁻¹³ to 10⁻¹⁰, are measured independently in three laboratories: a low-level counting laboratory in Bern, Switzerland, and two ATTA laboratories, one in Hefei, China, and another in Argonne, USA. The results are in agreement at the 5% precision level.
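As a brief illustration of why the 10.76-year half-life makes 85Kr useful as a tracer (not part of the study itself), the sketch below converts a measured 85Kr/Kr ratio into an apparent isolation age under the simplifying assumption of a closed system and a constant atmospheric input ratio; the specific ratio values are hypothetical.

```python
import math

KR85_HALF_LIFE_YR = 10.76                      # 85Kr half-life from the abstract
DECAY_CONST = math.log(2) / KR85_HALF_LIFE_YR  # decay constant (1/yr)

def apparent_age(ratio_sample: float, ratio_atmosphere: float) -> float:
    """Apparent isolation age (years) of a sample from its 85Kr/Kr ratio,
    assuming a closed system and a constant atmospheric input ratio."""
    return -math.log(ratio_sample / ratio_atmosphere) / DECAY_CONST

# Hypothetical values for illustration: atmospheric ratio ~1e-11 and a
# groundwater sample measured at 2.5e-12 (within the 1e-13 to 1e-10 range studied).
print(f"Apparent age: {apparent_age(2.5e-12, 1.0e-11):.1f} years")  # ~21.5 years
```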
Abstract:
Although gastrointestinal stromal tumor (GIST) is effectively treated with imatinib, there are a number of clinical challenges in the optimal treatment of these patients. The plasma steady-state trough level of imatinib has been proposed to correlate with clinical outcome. Plasma imatinib level may be affected by a number of patient characteristics. Additionally, the ideal plasma trough concentration of imatinib is likely to vary based on the KIT genotype of the individual patient (genotype determines imatinib binding affinity). Patients' genotype or plasma imatinib level may influence the type and duration of response that is appreciable by clinical evaluation. The objectives of this study were to determine the effects of genotype on the type of response appreciable by current imaging criteria, to determine the distribution of plasma imatinib levels in patients with GIST, to determine factors that correlate with plasma imatinib level, to determine the incremental effects of imatinib dose escalation, and to explore the median plasma levels and outcomes of patients with various KIT mutations. We therefore obtained KIT mutation information and analyzed CT response by size and density measurement of GISTs at baseline and within the first four months of imatinib treatment. In 126 patients with metastatic/unresectable disease, the KIT genotype of the patients' tumors was significantly associated with distinct response characteristics measurable by CT. Furthermore, hepatic and peritoneal metastases differed in their response characteristics. A subgroup of patients with KIT exon 9 mutation who received higher doses of imatinib and achieved higher trough imatinib levels experienced progression-free survival similar to that of KIT exon 11 patients. We also found that imatinib plasma levels were higher in patients who had elevated aspartate aminotransferase, were women, were older, or were being treated concomitantly with CYP450 substrate drugs. As expected, CYP450 inducers correlated with lower plasma imatinib levels in GIST patients. Renal elimination of imatinib accounts for <10% and was therefore not included in the analysis, although it may affect other covariates. Interestingly, there was a trend toward low imatinib levels and inferior progression-free survival in patients who had undergone complete gastrectomy. In conclusion, imatinib plasma levels are influenced by a number of patient characteristics. The optimal imatinib plasma level for individual patients is not known but is an area of intense investigation. Our study confirms that patients with KIT exon 9 mutations benefit from high-dose imatinib and higher trough imatinib levels.
Abstract:
The paper deals with batch scheduling problems in process industries, where final products arise from several successive chemical or physical transformations of raw materials using multi-purpose equipment. In batch production mode, the total requirements of intermediate and final products are partitioned into batches. The production start of a batch at a given level requires the availability of all input products. We consider the problem of scheduling the production of given batches such that the makespan is minimized. Constraints like minimum and maximum time lags between successive production levels, sequence-dependent facility setup times, finite intermediate storages, production breaks, and time-varying manpower contribute to the complexity of this problem. We propose a new solution approach using models and methods of resource-constrained project scheduling, which (approximately) solves problems of industrial size within a reasonable amount of time.
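To make the project-scheduling framing concrete, here is a minimal, hypothetical sketch (not the authors' model): batches with precedence constraints ("all input products available") compete for a single multi-purpose facility of unit capacity and are scheduled by a greedy serial scheme; the batch names and durations are invented for illustration.

```python
# Toy serial schedule-generation scheme for batches with precedence constraints
# and one unit-capacity facility; the makespan is the latest finish time.
from typing import Dict, List, Tuple

durations: Dict[str, int] = {"raw_prep": 2, "intermediate_A": 3, "intermediate_B": 2, "final": 4}
predecessors: Dict[str, List[str]] = {
    "raw_prep": [],
    "intermediate_A": ["raw_prep"],
    "intermediate_B": ["raw_prep"],
    "final": ["intermediate_A", "intermediate_B"],
}

def serial_schedule(order: List[str]) -> Tuple[Dict[str, int], int]:
    finish: Dict[str, int] = {}
    facility_free = 0
    for batch in order:                                  # order must respect precedence
        inputs_ready = max((finish[p] for p in predecessors[batch]), default=0)
        start = max(inputs_ready, facility_free)         # single facility: no overlap
        finish[batch] = start + durations[batch]
        facility_free = finish[batch]
    return finish, max(finish.values())

finish_times, makespan = serial_schedule(["raw_prep", "intermediate_A", "intermediate_B", "final"])
print(finish_times, "makespan:", makespan)
```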
Abstract:
The regulation of nanomaterials is being discussed at various levels. This article offers a historical account of governmental activities concerning the safety of nanomaterials at the United Nations (UN) level since 2006, with a focus on the UN Strategic Approach to International Chemicals Management (SAICM). The outcomes of the SAICM process were a nano-specific resolution and the addition of new activities on nanotechnologies and manufactured nanomaterials to the SAICM's Global Plan of Action. The article discusses the implications of these decisions for multilateral environmental agreements. In addition, it examines the consequences of the regulation of nanotechnology activities for trade governance, in particular the relationship of the SAICM to the legally binding World Trade Organization (WTO) agreements (notably the General Agreement on Tariffs and Trade and the Agreement on Technical Barriers to Trade). The article concludes that the SAICM decisions on manufactured nanomaterials are compatible with WTO law.
Abstract:
In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of the individual observation. This departure from random assignment of individual units may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Applying traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared to methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993).1 Multilevel models, also known as random effects or random components models, can be used to account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires the determination of sample sizes at each level. This study investigates the design and analysis of various sampling strategies for a 3-level repeated measures design and their effect on the parameter estimates when the outcome variable of interest follows a Poisson distribution.

Results of the study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher-level error variance is large, the estimates may be significantly biased; in this case, bias correction techniques such as bootstrapping should be considered as an alternative procedure. For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large.

1 Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data." Biometrics, 49, 989–996.
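As a hypothetical illustration of the data structure being studied (not the dissertation's code), the sketch below generates counts for a 3-level design: normally distributed random intercepts at levels 2 and 3 on the log-rate scale and repeated Poisson counts at level 1. The sample sizes, intercept and variance components are invented; fitting by PQL or MQL would be done in specialized multilevel software and is not shown.

```python
# Generate a 3-level Poisson data set: level-3 groups, level-2 clusters nested
# within them, and repeated level-1 counts per cluster.
import numpy as np

rng = np.random.default_rng(42)

def simulate_counts(n_level3=20, n_level2=20, n_level1=5,
                    beta0=0.5, var_l3=0.10, var_l2=0.10):
    rows = []
    for g in range(n_level3):
        u3 = rng.normal(0.0, np.sqrt(var_l3))        # level-3 random intercept
        for c in range(n_level2):
            u2 = rng.normal(0.0, np.sqrt(var_l2))    # level-2 random intercept
            mu = np.exp(beta0 + u3 + u2)             # Poisson mean on the log scale
            counts = rng.poisson(mu, size=n_level1)  # level-1 repeated counts
            rows.extend((g, c, t, y) for t, y in enumerate(counts))
    return np.array(rows)

data = simulate_counts()
print(data.shape)  # (20 * 20 * 5, 4): group, cluster, occasion, count
```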
Abstract:
Taking the transfer of cultivated plants within the correspondence networks of Albrecht von Haller and the Economic Society as an example, a multi-level network analysis is proposed. In a multi-level procedure, the chronological dynamics, the social structure, the spatial distribution and the functional networking are analyzed in turn. These four levels of network analysis do not compete with each other but are mutually supporting. The aim is a deeper understanding of how these networks contributed to the international transfer of knowledge in the 18th century.
Abstract:
The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs often labeled nested, hierarchical, or multilevel, and are characterized by the randomization of intact social units or groups, rather than individuals. The application of random effects models to group-randomized trials requires the specification of the fixed and random components of the model. The underlying assumption is usually that these random components are normally distributed. This research is intended to determine whether the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated.

In this study, simulated data are used to examine the Type I error rate, power, bias and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect follows a distribution with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g., number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with the random effects specified in a RANDOM statement and restricted maximum likelihood (REML) estimation. The results from the non-normally distributed data are compared to the results obtained from the analysis of data with similar design characteristics but normally distributed random effects.

The results suggest that violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, the Type I error rate, or power. Negative biases were detected when estimating the sample ICC and increased dramatically in magnitude as the true ICC increased. These biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e., 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
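A hypothetical sketch of the kind of simulation described (not the study's SAS code): group-level random effects are drawn from a skewed chi-square distribution instead of a normal one, individual outcomes are generated within groups, and the ICC is recovered with the standard one-way ANOVA estimator. All sample sizes and the target ICC below are invented for illustration.

```python
# Simulate clustered data with a skewed group-level random effect and estimate
# the intraclass correlation coefficient (ICC) with the one-way ANOVA estimator.
import numpy as np

rng = np.random.default_rng(0)

def simulate_icc(n_groups=10, n_per_group=50, icc=0.05, skewed=True):
    sigma_b2 = icc / (1 - icc)                 # between-group variance (within-group variance = 1)
    if skewed:
        # chi-square(1) group effects, centered and rescaled to the target variance
        b = (rng.chisquare(1, n_groups) - 1.0) * np.sqrt(sigma_b2 / 2.0)
    else:
        b = rng.normal(0.0, np.sqrt(sigma_b2), n_groups)
    y = b[:, None] + rng.normal(0.0, 1.0, (n_groups, n_per_group))
    msb = n_per_group * y.mean(axis=1).var(ddof=1)   # between-group mean square
    msw = y.var(axis=1, ddof=1).mean()               # within-group mean square
    return (msb - msw) / (msb + (n_per_group - 1) * msw)

# Average estimated ICC over 200 replications versus the true value of 0.05
print(np.mean([simulate_icc() for _ in range(200)]))
```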
Abstract:
Extensive experience with the analysis of human prophase chromosomes and studies of the complexity of prophase GTG-banding patterns have suggested that at least some prophase chromosomal segments can be accurately identified and characterized independently of the morphology of the chromosome as a whole. In this dissertation, the feasibility of identifying and analyzing specified prophase chromosome segments was therefore investigated as an alternative to prophase chromosome analysis based on whole-chromosome recognition. Through the use of prophase idiograms at the 850-band stage (Francke, 1981) and a comparison system based on the calculation of cross-correlation coefficients between idiogram profiles, we have demonstrated that it is possible to divide the 24 human prophase idiograms into a set of 94 unique band sequences. Each unique band sequence has a banding pattern that is recognizable and distinct from any other non-homologous chromosome portion.

Using chromosomes 11p and 16 through 22 to demonstrate unique band sequence integrity at the chromosome level, we found that variation in prophase chromosome banding patterns can be compensated for and that a set of unique band sequences very similar to those at the idiogram level can be identified on actual chromosomes.

The use of a unique band sequence approach in prophase chromosome analysis is expected to increase efficiency and sensitivity through more effective use of the available banding information. This approach is discussed both at the routine level, for use by cytogeneticists, and at the image processing level, as part of a semi-automated approach to prophase chromosome analysis.
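To illustrate the comparison idea in its simplest form (this is not the dissertation's software, and the profiles are invented), a band sequence and a chromosome can each be represented as a 1-D staining-intensity profile, and the correlation computed at every alignment offset to locate the best match:

```python
# Slide a band-sequence profile along a chromosome profile and report the
# offset with the highest Pearson correlation coefficient.
import numpy as np

def best_match(band_profile: np.ndarray, chrom_profile: np.ndarray):
    w = len(band_profile)
    scores = [np.corrcoef(band_profile, chrom_profile[i:i + w])[0, 1]
              for i in range(len(chrom_profile) - w + 1)]
    best = int(np.argmax(scores))
    return best, scores[best]

# Hypothetical toy profiles: the band pattern is embedded at offset 3 with noise.
rng = np.random.default_rng(1)
band = np.array([0.2, 0.9, 0.1, 0.8, 0.3])
chrom = np.concatenate([np.full(3, 0.5), band + rng.normal(0, 0.05, 5), np.full(4, 0.5)])
print(best_match(band, chrom))  # expected: offset 3, correlation close to 1
```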
Abstract:
The closed Tangra Yumco Basin underwent the strongest Quaternary lake-level changes so far recorded on the Tibetan Plateau. It was hitherto unknown what effect this had on local Holocene vegetation development. A 3.6-m sediment core from a recessional lake terrace at 4,700 m a.s.l., 160 m above the present lake level of Tangra Yumco, was studied to reconstruct Holocene flooding phases (sedimentology and ostracod analyses), vegetation dynamics and human influence (palynology, charcoal and coprophilous fungi analyses). Peat at the base of the profile proves that the lake level was below 4,700 m a.s.l. during the Pleistocene/Holocene transition. A deep-lake phase started after 11 cal ka BP, but the ostracod record indicates the level was not higher than ~4,720 m a.s.l. (180 m above present) and decreased gradually after the early Holocene maximum. Additional sediment ages from the basin suggest recession of Tangra Yumco from the coring site after 2.6 cal ka BP, with a shallow local lake persisting at the site until ~1 cal ka BP. The final peat formation indicates drier conditions thereafter. Persistence of Artemisia steppe during the Holocene lake high-stand resembles palynological records from western Tibet that indicate early Holocene aridity, in spite of high lake levels that may have resulted from meltwater input. Yet the pollen assemblages indicate humidity closer to that of present potential forest areas near Lhasa, with 500-600 mm annual precipitation. Thus, early to mid-Holocene humidity was sufficient to sustain at least juniper forest, but Artemisia dominance persisted as a consequence of a combination of environmental disturbances, such as (1) strong early Holocene climate fluctuations, (2) inundation of habitats suitable for forest, (3) extensive water surfaces that served as barriers to terrestrial diaspore transport from refuge areas, (4) strong erosion that denuded the non-flooded upper slopes and (5) increasing human influence since the late glacial.
Abstract:
The Convention on the Protection and Promotion of the Diversity of Cultural Expressions, adopted under the auspices of the United Nations Educational, Scientific and Cultural Organization (UNESCO) in 2005, entered into force on 18 March 2007 after a remarkably swift ratification process. The Convention is the culmination of multiple-track efforts spread over many years with the objective of providing a binding instrument for the protection and promotion of cultural diversity at the international level. These efforts, admirable as they may be, are however not isolated undertakings of goodwill, but a reaction to economic globalisation, whose advancement has been significantly furthered by the emergence of enforceable multilateral trade rules. These very rules, whose bearer is the World Trade Organization (WTO), have been perceived as the antipode to "culture" and have prompted the formulation of counteracting norms intended to sufficiently "protect" and "promote" it. Against this backdrop of institutional tension and fragmentation, the present chapter traces the emergence of the concept of cultural diversity on the international policy- and law-making scene and the legal dimensions given to it by the new UNESCO Convention. It critically analyses the Convention's provisions, in particular the rights and obligations of the States Parties, and asks whether the UNESCO Convention indeed provides a sufficient and appropriate basis for the protection and promotion of a thriving and diverse cultural environment.
Abstract:
Orbital blunt trauma is common, and the diagnosis of a fracture should be made by computed tomographic (CT) scan. However, this exposes patients to ionising radiation. Our objective was to identify clinical predictors of orbital fracture, in particular the presence of a black eye, to minimise unnecessary exposure to radiation. A 10-year retrospective study was made of the medical records of all patients with minor head trauma who presented with one or two black eyes to our emergency department between May 2000 and April 2010. Each of the patients had a CT scan, was over 16 years old, and had a Glasgow Coma Score (GCS) of 13-15. The primary outcome was whether the black eye was a valuable predictor of a fracture; accompanying clinical signs were considered as a secondary outcome. A total of 1676 patients (mean (SD) age 51 (22) years) with minor head trauma and either one or two black eyes were included. In 1144 of them the CT scan showed a fracture of the maxillofacial skeleton, giving an incidence of 68.3% among patients in whom a black eye was the obvious symptom. Specificity for facial fractures was particularly high for other clinical signs, such as diminished skin sensation (specificity 96.4%), diplopia or oculomotility disorders (89.3%), fracture steps (99.8%), epistaxis (95.5%), subconjunctival haemorrhage (90.4%), and emphysema (99.6%). Sensitivity for the same signs ranged from 10.8% to 22.2%. The most striking finding was that 68.3% of all patients with a black eye had an underlying fracture. We therefore conclude that a CT scan should be recommended for every patient with minor head injury who presents with a black eye.
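For readers unfamiliar with the diagnostic-accuracy arithmetic, the short sketch below reproduces the 68.3% figure from the reported counts and shows the standard 2x2 definitions; the individual cell counts used for the example sign are hypothetical, since the abstract reports only sensitivity and specificity.

```python
# Check the fracture rate quoted in the abstract and illustrate how sensitivity
# and specificity are computed from a 2x2 table of sign vs. CT-confirmed fracture.
n_patients, n_fracture = 1676, 1144
print(f"Fracture rate among black-eye patients: {n_fracture / n_patients:.1%}")  # ~68.3%

def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)          # sign present among those WITH a fracture

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)          # sign absent among those WITHOUT a fracture

# Hypothetical cell counts for one accompanying sign: positive in 200 of the
# 1144 fracture patients and in 19 of the 532 patients without a fracture.
tp, fn = 200, n_fracture - 200
fp = 19
tn = (n_patients - n_fracture) - fp
print(f"sensitivity = {sensitivity(tp, fn):.1%}, specificity = {specificity(tn, fp):.1%}")
```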