925 results for Need of automated grading
Abstract:
An experiment was conducted using 95 Continental crossbred steers. The cattle were sorted by ultrasound 160 days before slaughter into a low backfat group (Low BF) and a higher backfat group (High BF). Half of the Low BF and half of the High BF were implanted, whereas the other halves were not. Data from the experiment were used in two hypothetical markets. One market was a high yield beef program (HY) that did not allow the use of implants. The second market was a commodity beef program (CM) that allowed the use of implants. The cattle were priced as an unsorted group (ALL) and two sorted groups (Low BF and High BF) within the HY (non-implanted) and CM (implanted) markets. The CM program had a base price of $1.05/lb hot carcass weight (HCW) with a $0.15/lb HCW discount for quality grade (QG) Select and a $0.20/lb HCW discount for yield grade (YG) 4. The HY program used a base price of $1.07/lb HCW with premiums ($/lb HCW) paid for YG ≤ 0.9 (0.15), 1.0 - 1.4 (0.10), and 1.5 - 1.9 (0.03). The carcasses were discounted ($/lb HCW) for YG 2.5 - 2.9 (0.03), 3.0 - 3.9 (0.15), and ≥ 4.0 (0.35). This data set provides good evidence that the end point at which to sell a group of cattle depends on the particular market. Sorting had an economic advantage over ALL in the HY Low BF and the CM High BF groups. The HY High BF cattle should have been sold sooner due to the discounts received for increased YG. The increased YG was directly affected by an increase in BF. Furthermore, the CM Low BF group should have been fed longer to increase the number of carcasses grading Choice.
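The two pricing grids described in this abstract can be expressed as a short calculation. The sketch below is a minimal illustration, assuming the premium and discount bands exactly as listed; the function names, the example carcass, and the handling of yield grades that fall between the listed bands (e.g. YG 2.0 - 2.4) are assumptions, not details taken from the study.

```python
# Minimal sketch of the two hypothetical pricing grids described above.
# Function names and the handling of yield grades between the listed
# premium/discount bands (e.g. YG 2.0-2.4, priced at base here) are
# assumptions; the abstract does not state how such carcasses were priced.

def cm_price_per_lb(quality_grade: str, yield_grade: float) -> float:
    """Commodity (CM) program: $1.05/lb HCW base, Select and YG 4 discounts."""
    price = 1.05
    if quality_grade == "Select":
        price -= 0.15
    if yield_grade >= 4.0:
        price -= 0.20
    return price

def hy_price_per_lb(yield_grade: float) -> float:
    """High-yield (HY) program: $1.07/lb HCW base with YG premiums/discounts."""
    price = 1.07
    if yield_grade <= 0.9:
        price += 0.15
    elif yield_grade <= 1.4:
        price += 0.10
    elif yield_grade <= 1.9:
        price += 0.03
    elif 2.5 <= yield_grade <= 2.9:
        price -= 0.03
    elif 3.0 <= yield_grade <= 3.9:
        price -= 0.15
    elif yield_grade >= 4.0:
        price -= 0.35
    return price

# Example: an 800 lb HCW Select carcass with YG 3.2 under each program.
hcw = 800
print(cm_price_per_lb("Select", 3.2) * hcw)  # CM carcass value
print(hy_price_per_lb(3.2) * hcw)            # HY carcass value
```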
Abstract:
A three-year study was conducted to integrate pasturing systems with drylot feeding systems. Each year 84 fall-born and 28 spring-born calves of similar genotypes were used. Fall-born calves were started on test in May, and spring-born calves were started in October. Seven treatments were imposed: 1) fall-born calves directly into the feedlot (28 steers); 2 and 3) fall-born calves put on pasture with or without an ionophore and moved to the feedlot at the end of July (14 steers in each treatment); 4 and 5) fall-born calves put on pasture with or without an ionophore and moved to the feedlot at the end of October (14 steers in each treatment); and 6 and 7) spring-born calves put on pasture with or without an ionophore and moved to the feedlot at the end of October (14 steers in each treatment). Cattle on pasture receiving an ionophore gained faster (P=.009), but lost this advantage in drylot (P>.10). Overall, cattle started directly in the feedlot had higher gains (P<.001). Cattle receiving an ionophore on pasture had lower KPH than those that did not receive an ionophore (P<.01). Treatment influenced yield grade (P<.001), although all treatments were YG 2. The percentage of cattle grading Prime and Choice was 75% or higher for all treatment groups. The results show that using an ionophore improved pasture gains and that pasture treatments did not adversely influence yield and quality grades.
Abstract:
A study was conducted to evaluate early weaning of beef calves at 60-70 days of age on feedlot performance and carcass characteristics. One hundred twenty steer calves sired by either Simmental or Angus sires were weaned at an average age of 67 (early weaned, EW) or 147 (late weaned, LW) days. Calves were allotted to 16 feedlot pens by weaning treatment and sire breed at approximately 750-800 lb. EW calves were heavier (P < .05) in initial feedlot weight. There were no differences due to weaning age on daily gain, dry matter intake, feed efficiency or slaughter weights. Simmental steers required more days on feed than Angus steers (P < .05). Early-weaned calves had a higher percent intramuscular fat (5.7 vs. 5.1%), higher average marbling scores (Small78 vs. Small20, P < .05), a higher percentage of cattle grading average USDA Choice and higher (38% vs. 14%, P < .05) and a higher percentage of USDA Prime (10% vs. 0%, P < .05). These data confirm observations in previous studies that early weaning and placing calves on a higher grain diet improves marbling at slaughter. In this study, the effect was shown in calves weaned at an average of 67 days.
Abstract:
Assessing the ecological requirements of species coexisting within a community is an essential requisite for developing sound conservation action. A particularly interesting question is what mechanisms govern the stable coexistence of cryptic species within a community, i.e. species that are almost impossible to distinguish. Resource partitioning theory predicts that cryptic species, like other sympatric taxa, will occupy distinct ecological niches. This prediction is widely inferred from eco-morphological studies. A new cryptic long-eared bat species, Plecotus macrobullaris, has been recently discovered in the complex of two other species present in the European Alps, with even evidence of a few mixed colonies. This discovery poses challenges to bat ecologists concerned with planning conservation measures beyond roost protection. We therefore tested whether foraging habitat segregation occurred among the three cryptic Plecotus bat species in Switzerland by radiotracking 24 breeding female bats (8 of each species). We compared habitat features at locations visited by a bat versus random locations within individual home ranges, applying mixed effects logistic regression. Distinct, species-specific habitat preferences were revealed. P. auritus foraged mostly within traditional orchards in roost vicinity, with a marked preference for habitat heterogeneity. P. austriacus foraged up to 4.7 km from the roost, selecting mostly fruit tree plantations, hedges and tree lines. P. macrobullaris preferred patchy deciduous and mixed forests with high vertical heterogeneity in a grassland-dominated matrix. These species-specific habitat preferences should inform future conservation programmes. They highlight the possible need for distinct conservation measures for species that look very much alike.
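The habitat analysis described here compares visited locations with random locations inside each home range. The sketch below is a simplified, hypothetical illustration of such a use-versus-availability analysis: it fits a plain logistic regression on simulated data and omits the per-bat random effect that the authors' mixed effects model includes; all variable names and coefficients are invented.

```python
# Simplified use-versus-availability sketch: visited (1) vs. random (0)
# locations regressed on habitat features. Simulated data, hypothetical
# variable names, and no per-individual random effect (unlike the study's
# mixed effects logistic regression).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
dist_roost = rng.uniform(0, 5, n)        # distance to roost in km (invented)
orchard = rng.integers(0, 2, n)          # 1 = traditional orchard habitat (invented)
linpred = 0.5 + 0.8 * orchard - 0.6 * dist_roost
p_used = 1 / (1 + np.exp(-linpred))
used = rng.binomial(1, p_used)           # 1 = visited location, 0 = random location

df = pd.DataFrame({"used": used, "orchard": orchard, "dist_roost": dist_roost})
model = smf.logit("used ~ orchard + dist_roost", data=df).fit(disp=False)
print(model.params)
```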
Abstract:
Internet of Things based systems are anticipated to gain widespread use in industrial applications. Standardization efforts, like 6LoWPAN and the Constrained Application Protocol (CoAP), have made the integration of wireless sensor nodes possible using Internet technology and web-like access to data (RESTful service access). While there are still some open issues, the interoperability problem in the lower layers can now be considered solved from an enterprise software vendor's point of view. One possible next step towards integration of real-world objects into enterprise systems and solving the corresponding interoperability problems at higher levels is to use semantic web technologies. We introduce an abstraction of real-world objects, called Semantic Physical Business Entities (SPBE), using Linked Data principles. We show that this abstraction fits nicely into enterprise systems, as SPBEs allow a business-object-centric view on real-world objects, instead of a purely device-centric view. We outline the interdependencies between how services in an enterprise system are currently used and how this can be done in a semantic, real-world-aware enterprise system, arguing for the need for semantic services and semantic knowledge repositories. We introduce a lightweight query language, which we use to perform a quantitative analysis of our approach to demonstrate its feasibility.
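As a rough illustration of the Linked Data idea behind SPBEs, the sketch below describes a single real-world object as RDF triples with rdflib and links it to a device-level CoAP resource. The namespace, property names and sensor URL are invented for illustration; the paper's actual SPBE vocabulary and query language are not given in the abstract.

```python
# Minimal sketch of describing a real-world object ("pallet 42") as Linked
# Data, in the spirit of the SPBE abstraction. The namespace, property names
# and the CoAP resource URL are illustrative assumptions, not the vocabulary
# used in the paper.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/spbe#")

g = Graph()
pallet = URIRef("http://example.org/spbe/pallet/42")

g.add((pallet, RDF.type, EX.SemanticPhysicalBusinessEntity))
g.add((pallet, RDFS.label, Literal("Pallet 42")))
# Link the business object to the device-level resource that observes it,
# giving the enterprise system a business-object view rather than a device view.
g.add((pallet, EX.observedBy, URIRef("coap://sensor-7.example.org/temperature")))
g.add((pallet, EX.lastTemperature, Literal(4.2)))

print(g.serialize(format="turtle"))
```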
Abstract:
In this paper we address the issue of who is most likely to participate in further training, for what reasons and at what stage of the life course. Special emphasis is given to the impact of labour-market policies designed to encourage further education and to a person's individual or cohort-specific opportunities to participate in further education. We apply a Cox proportional hazard model to data from the West German Life History Study, separately for women and men, within and outside the firm. Younger cohorts not only show higher proportions of participation in further education and training at early stages of the life course, they also continue to participate in higher numbers during later stages of the life course. General labour-force participation reduces, and tenure with the same firm increases, the propensity to participate in further education and training. Contrary to expectations, labour-market segmentation in Germany has been enhanced rather than reduced by further education and training policies, since in the firm-specific labour-market segment, i.e. skilled jobs in large firms, and in the public sector both women and men had a higher probability of participation. Particularly favourable conditions for participation in further education outside the firm prevailed during the first years of the labour promotion act (Arbeitsförderungsgesetz) between 1969 and 1974, but women did not benefit to the same extent as men. Training policies therefore need continuous assessment based on a goal-achievement evaluation to avoid unintended effects.
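To make the modelling step concrete, the sketch below fits a Cox proportional hazard model with the lifelines library on simulated data, with time until first participation in further training as the duration and two illustrative covariates. The variable names, coefficients and data are hypothetical and do not reproduce the Life History Study analysis.

```python
# Minimal Cox proportional hazards sketch on simulated data: time until
# first participation in further training, with two invented covariates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
tenure = rng.uniform(0, 10, n)             # years with the same firm (invented)
public_sector = rng.integers(0, 2, n)      # 1 = public-sector job (invented)
# Assume the hazard of entering training rises with tenure and public-sector employment.
rate = 0.05 * np.exp(0.15 * tenure + 0.5 * public_sector)
time_to_training = rng.exponential(1.0 / rate)
censoring_time = rng.uniform(0, 30, n)     # end of the observation window
duration = np.minimum(time_to_training, censoring_time)
event = (time_to_training <= censoring_time).astype(int)

df = pd.DataFrame({"duration": duration, "event": event,
                   "tenure": tenure, "public_sector": public_sector})
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()
```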
Abstract:
BACKGROUND Despite the increasing interest in medical education in the German-speaking countries, there is currently no information available on the challenges which medical educators face. To address this problem, we carried out a web-based survey among the members of the Association for Medical Education (Gesellschaft für medizinische Ausbildung, GMA). METHODS A comprehensive survey was carried out on the need for further qualifications, expertise and the general conditions of medical educators in Germany. As part of this study, the educators were asked to list the three main challenges which they faced and which required urgent improvement. The results were analysed by means of qualitative content analysis. RESULTS The questionnaire was completed by 147 of the 373 members on the GMA mailing list (response rate: 39%). The educators named a total of 346 challenges and emphasised the following areas: limited academic recognition for engagement in teaching (53.5% of educators), insufficient institutional (31.5%) and financial support (28.4%), a curriculum in need of reform (22.8%), insufficient time for teaching assignments (18.9%), inadequate teacher competence in teaching methods (18.1%), restricted faculty development programmes (18.1%), limited networking within the institution (11.0%), lack of teaching staff (10.2%), varying preconditions of students (8.7%), insufficient recognition and promotion of medical educational research (5.5%), extensive assessment requirements (4.7%), and the lack of role models within medical education (3.2%). CONCLUSION The medical educators found the biggest challenges which they faced to be limited academic recognition and insufficient institutional and financial support. Consequently, improvements should be implemented to address these issues.
Abstract:
BACKGROUND: Early detection of colorectal cancer through timely follow-up of positive Fecal Occult Blood Tests (FOBTs) remains a challenge. In our previous work, we found 40% of positive FOBT results eligible for colonoscopy had no documented response by a treating clinician at two weeks despite procedures for electronic result notification. We determined if technical and/or workflow-related aspects of automated communication in the electronic health record could lead to the lack of response. METHODS: Using both qualitative and quantitative methods, we evaluated positive FOBT communication in the electronic health record of a large, urban facility between May 2008 and March 2009. We identified the source of test result communication breakdown, and developed an intervention to fix the problem. Explicit medical record reviews measured timely follow-up (defined as response within 30 days of positive FOBT) pre- and post-intervention. RESULTS: Data from 11 interviews and tracking information from 490 FOBT alerts revealed that the software intended to alert primary care practitioners (PCPs) of positive FOBT results was not configured correctly and over a third of positive FOBTs were not transmitted to PCPs. Upon correction of the technical problem, lack of timely follow-up decreased immediately from 29.9% to 5.4% (p<0.01) and was sustained at month 4 following the intervention. CONCLUSION: Electronic communication of positive FOBT results should be monitored to avoid limiting colorectal cancer screening benefits. Robust quality assurance and oversight systems are needed to achieve this. Our methods may be useful for others seeking to improve follow-up of FOBTs in their systems.
Abstract:
Background: Emergency devices for pelvic ring stabilization include circumferential sheets, pelvic binders, and c-clamps. Our knowledge of the outcome of these techniques is currently based on limited information. Methods: Using the dataset of the German Pelvic Trauma Registry, demographic and injury-associated characteristics as well as the outcome of pelvic fracture patients after sheet, binder, and c-clamp treatment were compared. Outcome parameters included transfusion requirement of packed red blood cells, length of hospital stay, mortality, and incidence of lethal pelvic bleeding. Results: Two hundred seven of 6137 (3.4%) patients documented in the German Pelvic Trauma Registry between April 30th 2004 and January 19th 2012 were treated by sheets, binders, or c-clamps. In most cases, c-clamps (69%) were used, followed by sheets (16%) and binders (15%). The median age was significantly lower in patients treated with binders than in patients treated with sheets or c-clamps (26 vs. 47 vs. 42 years, p = 0.01). Sheet wrapping was associated with a significantly higher incidence of lethal pelvic bleeding compared to binder or c-clamp stabilization (23% vs. 4% vs. 8%). No significant differences between the study groups were found in sex, fracture type, blood haemoglobin concentration, arterial blood pressure, Injury Severity Score, the incidence of additional pelvic packing and arterial embolization, need for red blood cell transfusion, length of hospitalisation, and mortality. Conclusions: The data suggest that emergency stabilization of the pelvic ring by binders and c-clamps is associated with a lower incidence of lethal pelvic bleeding compared to sheet wrapping.
Abstract:
The treatment of peri-prosthetic joint infection (PJI) of the ankle is not standardised. It is not clear whether an algorithm developed for hip and knee PJI can be used in the management of PJI of the ankle. We evaluated the outcome, at two or more years post-operatively, in 34 patients with PJI of the ankle, identified from a cohort of 511 patients who had undergone total ankle replacement. Their median age was 62.1 years (53.3 to 68.2), and 20 patients were women. Infection was exogenous in 28 (82.4%) and haematogenous in six (17.6%); 19 (55.9%) were acute infections and 15 (44.1%) chronic. Staphylococci were the cause of 24 infections (70.6%). Surgery with retention of one or both components was undertaken in 21 patients (61.8%), both components were replaced in ten (29.4%), and arthrodesis was undertaken in three (8.8%). An infection-free outcome with satisfactory function of the ankle was obtained in 23 patients (67.6%). The best rate of cure followed the exchange of both components (9/10, 90%). In the 21 patients in whom one or both components were retained, four had a relapse of the same infecting organism and three had an infection with another organism; hence the rate of cure was 66.7% (14 of 21). In these 21 patients, we compared the treatment given with an algorithm developed for the treatment of PJI of the knee and hip. In 17 patients (80.9%), treatment did not follow the algorithm. Most (11 of 17) had only one criterion against retention of one or both components. In all, ten of 11 patients with severe soft-tissue compromise as a single criterion had relapse-free survival. We propose that the treatment concept for PJI of the ankle requires adaptation of the grading of the quality of the soft tissues. Cite this article: Bone Joint J 2014;96-B:772-7.
Abstract:
BACKGROUND AND PURPOSE Reproducible segmentation of brain tumors on magnetic resonance images is an important clinical need. This study was designed to evaluate the reliability of a novel fully automated segmentation tool for brain tumor image analysis in comparison to manually defined tumor segmentations. METHODS We prospectively evaluated preoperative MR images from 25 glioblastoma patients. Two independent expert raters performed manual segmentations. Automatic segmentations were performed using the Brain Tumor Image Analysis software (BraTumIA). In order to study the different tumor compartments, the complete tumor volume TV (enhancing part plus non-enhancing part plus necrotic core of the tumor), the TV+ (TV plus edema) and the contrast-enhancing tumor volume CETV were identified. We quantified the overlap between manual and automated segmentation by calculating diameter measurements as well as the Dice coefficients, the positive predictive values, sensitivity, relative volume error and absolute volume error. RESULTS Comparison of automated versus manual extraction of 2-dimensional diameter measurements showed no significant difference (p = 0.29). Comparison of automated versus manual volumetric segmentations showed significant differences for TV+ and TV (p<0.05) but no significant differences for CETV (p>0.05) with regard to the Dice overlap coefficients. Spearman's rank correlation coefficients (ρ) of TV+, TV and CETV showed highly significant correlations between automatic and manual segmentations. Tumor localization did not influence the accuracy of segmentation. CONCLUSIONS In summary, we demonstrated that BraTumIA supports radiologists and clinicians by providing accurate measures of cross-sectional diameter-based tumor extensions. The automated volume measurements were comparable to manual tumor delineation for CETV tumor volumes, and outperformed inter-rater variability for overlap and sensitivity.
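The overlap and volume-error measures named in the abstract can be computed directly from two binary segmentation masks. The sketch below is a generic numpy illustration of the Dice coefficient, positive predictive value, sensitivity and volume errors; it is not the BraTumIA implementation, and the toy masks are randomly generated.

```python
# Generic numpy sketch of the overlap and volume-error metrics named above,
# computed on two binary segmentation masks (True = tumor voxel). This is an
# illustration, not the BraTumIA implementation.
import numpy as np

def overlap_metrics(auto: np.ndarray, manual: np.ndarray) -> dict:
    auto = auto.astype(bool)
    manual = manual.astype(bool)
    tp = np.logical_and(auto, manual).sum()          # voxels labeled tumor by both
    dice = 2 * tp / (auto.sum() + manual.sum())
    ppv = tp / auto.sum()                            # positive predictive value
    sensitivity = tp / manual.sum()
    abs_vol_err = abs(int(auto.sum()) - int(manual.sum()))            # in voxels
    rel_vol_err = (int(auto.sum()) - int(manual.sum())) / manual.sum()
    return {"dice": dice, "ppv": ppv, "sensitivity": sensitivity,
            "abs_vol_err_voxels": abs_vol_err, "rel_vol_err": rel_vol_err}

# Toy 3D masks standing in for automated and manual segmentations.
rng = np.random.default_rng(0)
manual_mask = rng.random((20, 20, 20)) > 0.7
auto_mask = np.logical_and(manual_mask, rng.random((20, 20, 20)) > 0.1)
print(overlap_metrics(auto_mask, manual_mask))
```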
Abstract:
CZE-based assays for carbohydrate-deficient transferrin (CDT) in which serum is mixed with an Fe(III) ion-containing solution prior to analysis are effective approaches for the determination of CDT in patient samples. Sera of patients with advanced disease, however, are prone to interferences comigrating with transferrin (Tf) that prevent the proper determination of CDT by CZE in these samples. The need for a simple and economical approach to immunoextract Tf from human serum prompted us to investigate the use of a laboratory-made anti-Tf spin column containing polyclonal rabbit anti-human Tf antibodies linked to Sepharose 4 Fast Flow beads. This article reports the manufacture of the extraction column and its characterization with sera having normal and elevated CDT levels. The developed procedure was applied to a number of relevant hepatology and dialysis patient samples and could thereby be shown to represent an effective method for extraction and concentration of all Tf isoforms. Furthermore, lipemic sera were delipidated using a mixture of diisopropyl ether and butanol prior to immunoextraction. CDT could be unambiguously determined in all pretreated samples.
Abstract:
BACKGROUND Cardiac events (CEs) are among the most serious late effects following childhood cancer treatment. To establish accurate risk estimates for the occurrence of CEs, it is essential that they are graded in a valid and consistent manner, especially for international studies. We therefore developed a data-extraction form and a set of flowcharts to grade CEs and tested the validity and consistency of this approach in a series of patients. METHODS The Common Terminology Criteria for Adverse Events versions 3.0 and 4.0 were used to define the CEs. Forty patients were randomly selected from a cohort of 72 subjects with known CEs that had been graded by a physician for an earlier study. To establish whether the new method was valid for appropriate grading, a non-physician graded the CEs using the new method. To evaluate consistency of the grading, the same charts were graded again by two other non-physicians, one who received a brief introduction and one who received extensive training on the new method. We calculated weighted Kappa statistics to quantify inter-observer agreement. RESULTS The inter-observer agreement was 0.92 (95% CI 0.80-1.00) for validity, and 0.88 (0.79-0.98) and 0.99 (0.96-1.00) for consistency with the outcome assessors who had the brief introduction and the extensive training, respectively. CONCLUSIONS The newly developed standardized method to grade CEs using data from medical records has shown excellent validity and consistency. The study showed that the method can be correctly applied by researchers without a medical background, provided that they receive adequate training.
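As an illustration of the agreement statistic used here, the sketch below computes a weighted kappa between two raters grading the same events on an ordinal scale with scikit-learn. The grade sequences are invented, and linear weights are assumed, since the abstract does not state which weighting scheme was used.

```python
# Weighted kappa sketch for agreement between two raters grading the same
# cardiac events on an ordinal CTCAE-like scale (grades 1-4). The grade
# sequences are invented and linear weights are an assumption.
from sklearn.metrics import cohen_kappa_score

physician_grades = [2, 3, 1, 4, 2, 3, 3, 1, 2, 4]
nonphysician_grades = [2, 3, 1, 3, 2, 3, 4, 1, 2, 4]

kappa = cohen_kappa_score(physician_grades, nonphysician_grades, weights="linear")
print(f"Weighted kappa: {kappa:.2f}")
```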
Abstract:
OBJECTIVES To assess the diagnostic value of panoramic views (2D) of patients with impacted maxillary canines by a group of trained orthodontists and oral surgeons, and to quantify the subjective need and reasons for further three-dimensional (3D) imaging. MATERIALS AND METHODS The study comprised 60 patients with panoramic radiographs (2D) and cone beam computed tomography (CBCT) scans (3D), and a total of 72 impacted canines. Data from a standardized questionnaire were compared within (intragroup) and between (intergroup) a group of orthodontists and oral surgeons to assess possible correlations and differences. Furthermore, the questionnaire data were compared with the findings from the CBCT scans to estimate the correlation within and between the two specialties. Finally, the need and reasons for further 3D imaging were analysed for both groups. RESULTS When comparing questionnaire data with the analysis of the respective CBCT scans, orthodontists showed probability (Pr) values ranging from 0.443 to 0.943. Oral surgeons exhibited Pr values from 0.191 to 0.946. Statistically significant differences were found for the labiopalatal location of the impacted maxillary canine (P = 0.04), indicating a higher correlation in the orthodontist group. The most frequent reason mentioned for the need for further 3D analysis was the labiopalatal location of the impacted canines. Oral surgeons were more in favour of performing further 3D imaging (P = 0.04). CONCLUSIONS Orthodontists were more likely to diagnose the exact labiopalatal position of impacted maxillary canines when using panoramic views only. Generally, oral surgeons more often indicated the need for further 3D imaging.
Abstract:
The scientific literature of laboratory animal research is replete with papers reporting poor reproducibility of results as well as failure to translate results to clinical trials in humans. This may stem in part from poor experimental design and conduct of animal experiments. Despite widespread recognition of these problems and implementation of guidelines to attenuate them, a review of the literature suggests that experimental design and conduct of laboratory animal research are still in need of refinement. This paper will review and discuss possible sources of bias, highlight advantages and limitations of strategies proposed to alleviate them, and provide a conceptual framework for improving the reproducibility of laboratory animal research.