Abstract:
A new species of Sanguinicola Plehn, 1905 is described from the marine teleosts Notolabrus parilus (Richardson) and N. tetricus (Richardson) (Perciformes: Labridae) from Western Australian and Tasmanian waters. This host distribution is strikingly anomalous; however, the present material fulfils the morphological criteria of Sanguinicola. S. maritimus n. sp. differs from previously described species in having the combination of a body 1,432-1,701 μm long, an oesophagus 18.3-21.7% of the body length, a testis occupying 42.8-52.3% of the body length, an oviducal seminal receptacle and Mehlis' gland present, ovoid eggs, and vitelline follicles that extend anteriorly past the nerve commissure, laterally past the lateral nerve cords and posteriorly to the anterior margin of the cirrus-sac. S. maritimus also lacks a protrusible anterior proboscis. It further differs in the combination of host and geographical location, being the first Sanguinicola species from a marine teleost and the first from Australian waters.
Abstract:
Purpose: The effectiveness of synchronous carboplatin, etoposide, and radiation therapy in improving survival was evaluated by comparison of a matched set of historic control subjects with patients treated in a prospective Phase II study that used synchronous chemotherapy and radiation and adjuvant chemotherapy. Patients and Methods: Patients were included in the analysis if they had disease localized to the primary site and nodes, and they were required to have at least one of the following high-risk features: recurrence after initial therapy, involved nodes, primary size greater than 1 cm, or gross residual disease after surgery. All patients who received chemotherapy were treated in a standardized fashion as part of a Phase II study (Trans-Tasman Radiation Oncology Group TROG 96:07) from 1997 to 2001. Radiation was delivered to the primary site and nodes to a dose of 50 Gy in 25 fractions over 5 weeks, and synchronous carboplatin (AUC 4.5) and etoposide, 80 mg/m² i.v. on Days 1 to 3, were given in Weeks 1, 4, 7, and 10. The historic group, drawn from a single institution's experience from 1988 to 1996, was treated with surgery and radiation alone; patients were included if they fulfilled the eligibility criteria of TROG 96:07. Patients with occult cutaneous disease were not included for the purpose of this analysis. Because of imbalances in the prognostic variables between the two treatment groups, comparisons were made by application of Cox's proportional hazards modeling. Overall survival, disease-specific survival, locoregional control, and distant control were used as endpoints for the study. Results: Of the 102 patients who had high-risk Stage I and II disease, 40 were treated with chemotherapy (TROG 96:07) and 62 were treated without chemotherapy (historic control subjects). When Cox's proportional hazards modeling was applied, the only significant factors for overall survival were recurrent disease, age, and the presence of residual disease.
For disease-specific survival, recurrent disease was the only significant factor. Primary site on the lower limb had an adverse effect on locoregional control. For distant control, the only significant factor was residual disease. Conclusions: The multivariate analysis suggests chemotherapy has no effect on survival, but because of the wide confidence limits a chemotherapy effect cannot be excluded. A study of this size is inadequately powered to detect small improvements in survival, and a larger randomized study remains the only way to truly confirm whether chemotherapy improves the results in high-risk Merkel cell carcinoma (MCC). © 2006 Elsevier Inc.
Abstract:
Objective: To compare the total plasma cortisol values obtained from three widely used immunoassays and a high pressure liquid chromatography (HPLC) technique on samples obtained from patients with sepsis. Design and setting: Observational interventional study in the general intensive care unit of a metropolitan hospital. Patients and participants: Patients admitted to the intensive care unit with a diagnosis of sepsis and fulfilling criteria of systemic inflammatory response syndrome. Interventions: Standard short synacthen test performed with 250 μg cosyntropin. Measurements and results: Two of the three immunoassays returned results significantly higher than those obtained by HPLC: Immulite by 95% (95% CI 31-188%) and TDx by 79% (21-165%). The limits of agreement for all three immunoassays with HPLC ranged from -62% to 770%. In addition, when the patients were classified into responders and non-responders to ACTH by standard criteria, all assays concurred in only 44% of patients. Conclusions: Immunoassay estimation of total plasma cortisol in septic patients shows wide assay-related variation that may have a significant impact on the diagnosis of relative adrenal insufficiency.
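The percentage biases and limits of agreement quoted above are characteristic of a Bland-Altman-style method comparison carried out on a log (ratio) scale. As a hedged illustration only, with invented paired values rather than the study's data, the calculation can be sketched as:

```python
from math import exp, log
from statistics import mean, stdev

# Hypothetical paired total-cortisol results (nmol/l) from an immunoassay and HPLC;
# these numbers are invented for illustration, not taken from the study
immunoassay = [420.0, 515.0, 380.0, 640.0, 298.0]
hplc = [250.0, 310.0, 205.0, 390.0, 160.0]

# Ratio-based (log-scale) analysis, appropriate when the disagreement
# between methods is proportional to the measured value
log_ratios = [log(a, ) if False else log(a / b) for a, b in zip(immunoassay, hplc)]
bias_log = mean(log_ratios)
sd_log = stdev(log_ratios)

# Mean bias and 95% limits of agreement, expressed as % difference vs HPLC
mean_pct_bias = (exp(bias_log) - 1) * 100
loa_low = (exp(bias_log - 1.96 * sd_log) - 1) * 100
loa_high = (exp(bias_log + 1.96 * sd_log) - 1) * 100
print(f"bias {mean_pct_bias:.0f}%, 95% LoA {loa_low:.0f}% to {loa_high:.0f}%")
```

With real data, limits this wide relative to the bias are what drive the clinical concern raised in the conclusion.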
Abstract:
Benchmarking of the performance of states, provinces, or districts in a decentralised health system is important for fostering of accountability, monitoring of progress, identification of determinants of success and failure, and creation of a culture of evidence. The Mexican Ministry of Health has, since 2001, used a benchmarking approach based on the World Health Organization (WHO) concept of effective coverage of an intervention, which is defined as the proportion of the potential health gain deliverable by the health system that is actually delivered. Using data collection systems, including state representative examination surveys, vital registration, and hospital discharge registries, we have monitored the delivery of 14 interventions for 2005-06. Overall effective coverage ranges from 54.0% in Chiapas, a poor state, to 65.1% in the Federal District. Effective coverage for maternal and child health interventions is substantially higher than that for interventions that target other health problems. Effective coverage for the lowest wealth quintile is 52% compared with 61% for the highest quintile. Effective coverage is closely related to public-health spending per head across states; this relation is stronger for interventions that are not related to maternal and child health than for those that are. Considerable variation also exists in effective coverage at similar amounts of spending. We discuss the implications of these issues for the further development of the Mexican health-information system. Benchmarking of performance by measuring effective coverage encourages decision-makers to focus on quality service provision, not only service availability. The effective coverage calculation is an important device for health-system stewardship.
In adopting this approach, other countries should select interventions to be measured on the basis of the criteria of affordability, effect on population health, effect on health inequalities, and capacity to measure the effects of the intervention. The national institutions undertaking this benchmarking must have the mandate, skills, resources, and independence to succeed.
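Since effective coverage is defined above as a ratio of delivered to potential health gain, a crude aggregate across interventions can be sketched as follows. The intervention names and numbers are entirely invented for illustration; they are not the Mexican data:

```python
# Hypothetical per-intervention figures: potential health gain in the population
# and the portion of that gain actually delivered by the health system
interventions = {
    "childhood_vaccination": {"potential_gain": 100.0, "delivered_gain": 90.0},
    "skilled_birth_attendance": {"potential_gain": 80.0, "delivered_gain": 68.0},
    "hypertension_treatment": {"potential_gain": 120.0, "delivered_gain": 48.0},
}

def effective_coverage(potential: float, delivered: float) -> float:
    """Effective coverage = delivered health gain / potential health gain."""
    return delivered / potential

# Overall effective coverage: aggregate gains, so each intervention is
# implicitly weighted by the size of its potential health gain
total_potential = sum(d["potential_gain"] for d in interventions.values())
total_delivered = sum(d["delivered_gain"] for d in interventions.values())
overall = total_delivered / total_potential

for name, d in interventions.items():
    print(name, round(effective_coverage(d["potential_gain"], d["delivered_gain"]), 2))
print("overall:", round(overall, 2))
```

The gain-weighted aggregation shown here is one plausible choice; the actual WHO/Mexican methodology involves intervention-specific health-gain estimation beyond this sketch.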
Abstract:
This thesis presents an examination of the factors which influence the performance of eddy-current machines and the way in which they affect the optimality of those machines. After a brief introduction to the types of eddy-current machine considered, the applications to which these machines are put are examined. A list of parameters by which to assess their performance is obtained by considering the machine as part of a system; in this way an idea of what constitutes an optimal machine is obtained. The third chapter then identifies the factors which affect the performance and makes a quantitative evaluation of their effect. Here the various alternative configurations and components are compared with regard to their influence on the mechanical, electromagnetic, and thermal performance criteria of the machine. Chapter four contains a brief review of the methods of controlling eddy-current machines by electronic means using thyristors or transistors as the final control element. Where necessary, the results of previous workers in the field of electrical machines have been extended or adapted to increase the usefulness of this thesis.
Abstract:
Enterprise Risk Management (ERM) and Knowledge Management (KM) both encompass top-down and bottom-up approaches to developing and embedding risk knowledge concepts and processes in strategy, policies, risk appetite definition, the decision-making process and business processes. The capacity to transfer risk knowledge affects all stakeholders, and understanding the risk knowledge about the enterprise's value is a key requirement for identifying protection strategies for business sustainability. Various factors affect this capacity for transfer and understanding. Previous work has established that there is a difference between the influence of KM variables on Risk Control and on the perceived value of ERM. Communication among groups appears as a significant variable in improving Risk Control but only as a weak factor in improving the perceived value of ERM. However, the ERM mandate requires for its implementation a clear understanding of risk management (RM) policies, actions and results, and the use of the integral view of RM as a governance and compliance program to support the value-driven management of the organization. Furthermore, ERM implementation demands better capabilities for unifying the criteria of risk analysis and aligning policies and protection guidelines across the organization. These capabilities can be affected by risk knowledge sharing between the RM group and the Board of Directors and other executives in the organization. This research presents an exploratory analysis of risk knowledge transfer variables used in risk management practice. A survey of risk management executives from 65 firms in various industries was undertaken and 108 responses were analyzed. Potential relationships among the variables are investigated using descriptive statistics and multivariate statistical models.
The level of understanding of risk management policies and reports by the board is related to the quality of the flow of communication in the firm and the perceived level of integration of the risk policy in the business processes.
Abstract:
Two key issues defined the focus of this research into manufacturing plasmid DNA for use in human gene therapy. First, the processing of E. coli bacterial cells to effect the separation of therapeutic plasmid DNA from cellular debris and adventitious material. Second, the affinity purification of the plasmid DNA in a simple one-stage process. The need arises from concerns recently voiced by the FDA about the scalability and reproducibility of current manufacturing processes in meeting the quality criteria of purity, potency, efficacy, and safety for a recombinant drug substance for use in humans. To develop a preliminary purification procedure, an EFD cross-flow micro-filtration module was assessed for its ability to effect the 20-fold concentration, 6-fold diafiltration, and final clarification of the plasmid DNA from the cell lysate derived from a 1 litre E. coli bacterial cell culture. Historically, cross-flow filtration modules employed in procedures for harvesting cells from bacterial cultures have failed to reach the standards set by existing continuous centrifuge technologies, frequently suffering rapid blinding of the membrane with bacterial cells that substantially reduces the permeate flux. The EFD module, containing six helically wound tubular membranes that promote centrifugal instabilities known as Dean vortices, was challenged with distilled water at Dean numbers between 187Dn and 818Dn and transmembrane pressures (TMP) of 0 to 5 psi. The data demonstrated that the fluid dynamics significantly influenced the permeation rate, displaying a maximum at 227Dn (312 lmh) and a minimum at 818Dn (130 lmh) for a transmembrane pressure of 1 psi. Numerical studies indicated that the initial increase and subsequent decrease resulted from a competition between the centrifugal and viscous forces that create the Dean vortices.
At Dean numbers between 187Dn and 227Dn, the forces combine constructively to increase the apparent strength and influence of the Dean vortices. However, as the Dean number increases above 227Dn, the centrifugal force dominates the viscous forces, compressing the Dean vortices into the membrane walls and reducing their influence on the radial transmembrane pressure, i.e. the permeate flux is reduced. When investigating the action of the Dean vortices in controlling the fouling rate of E. coli bacterial cells, it was demonstrated that the optimum cross-flow rate at which to effect the concentration of a bacterial cell culture was 579Dn at 3 psi TMP, processing in excess of 400 lmh for 20 minutes (i.e., concentrating a 1 L culture to 50 ml in 10 minutes at an average of 450 lmh). The data demonstrated a conflict between the Dean number at which the shear rate could control the cell fouling and the Dean number at which the optimum flux enhancement was found. Hence, the internal geometry of the EFD module was shown to be sub-optimal for this application. At 579Dn and 3 psi TMP, the 6-fold diafiltration occupied 3.6 minutes of process time, processing at an average flux of 400 lmh. Again at 579Dn and 3 psi TMP, the clarification of the plasmid from the resulting freeze-thaw cell lysate was achieved at 120 lmh, passing 83% (2.5 mg) of the plasmid DNA (6.3 ng μl⁻¹), 10.8 mg of genomic DNA (~23,000 bp, 36 ng μl⁻¹), and 7.2 mg of cellular proteins (5-100 kDa, 21.4 ng μl⁻¹) into the post-EFD process stream. Hence the EFD module was shown to be effective, achieving the desired objectives in approximately 25 minutes. On the basis of its ability to intercalate into low-molecular-weight dsDNA present in dilute cell lysates, and to be electrophoresed through agarose, the fluorophore PicoGreen was selected for the development of a suitable dsDNA assay.
It was assessed for its accuracy and reliability in determining the concentration and identity of DNA present in samples that were electrophoresed through agarose gels. The signal emitted by intercalated PicoGreen was shown to be constant and linear, and the mobility of the PicoGreen-DNA complex was not affected by the intercalation. Concerning the secondary purification procedure, various anion-exchange membranes were assessed for their ability to capture plasmid DNA from the post-EFD process stream. For a commercially available Sartorius Sartobind Q15 membrane, the reduction in the equilibrium binding capacity for ctDNA in buffers of increasing ionic strength demonstrated that DNA was being adsorbed by electrostatic interactions only. However, problems with fluid distribution across the membrane demonstrated that the membrane housing was the predominant cause of the erratic breakthrough curves. Consequently, this would need to be rectified before such a membrane could be integrated into the current system, or indeed be scaled beyond laboratory scale. However, when challenged with the process material, the data showed that considerable quantities of protein (1150 μg) were adsorbed preferentially to the plasmid DNA (44 μg). This was also shown for derivatised Pall Gelman UltraBind US450 membranes that had been functionalised with poly-L-lysine and polyethyleneimine ligands of varying molecular weight. Hence the anion-exchange membranes were shown to be ineffective in capturing plasmid DNA from the process stream. Finally, work was performed to integrate a sequence-specific DNA-binding protein into a single-stage DNA chromatography step, isolating plasmid DNA from E. coli cells whilst minimising the contamination from genomic DNA and cellular protein.
Preliminary work demonstrated that the fusion protein was capable of isolating pUC19 DNA into which the recognition sequence for the fusion protein had been inserted (pTS DNA) in the presence of the conditioned process material. Although the pTS recognition sequence differs from native pUC19 sequences by only 2 bp, the fusion protein was shown to act as a highly selective affinity ligand for pTS DNA alone. Subsequently, the process was scaled up 25-fold and positioned directly after the EFD system. In conclusion, the integration of the EFD micro-filtration system and the zinc-finger affinity purification technique resulted in approximately 1 mg of plasmid DNA being purified from 1 L of E. coli culture in a simple two-stage process, with the complete removal of genomic DNA and 96.7% of cellular protein in less than 1 hour of process time.
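The Dean number quoted throughout this abstract characterises the strength of the secondary (Dean vortex) flow in a curved tube. As a hedged sketch of the standard definition, with illustrative geometry and flow values rather than the actual dimensions of the EFD module:

```python
import math

def dean_number(flow_velocity, tube_diameter, coil_radius,
                density=998.0, viscosity=1.0e-3):
    """Dean number De = Re * sqrt(d / (2R)) for flow in a helically coiled
    tube, where Re is the straight-tube Reynolds number, d the tube
    diameter and R the radius of curvature of the coil.
    Defaults are for water at roughly 20 degrees C."""
    reynolds = density * flow_velocity * tube_diameter / viscosity
    return reynolds * math.sqrt(tube_diameter / (2.0 * coil_radius))

# Illustrative values only: 1 m/s water flow in a 3 mm tube coiled at 15 mm radius
de = dean_number(flow_velocity=1.0, tube_diameter=3.0e-3, coil_radius=15.0e-3)
print(f"De = {de:.0f}")
```

Raising the cross-flow rate raises Re, and hence De, linearly; the competition between centrifugal and viscous forces described above is what makes flux a non-monotonic function of De rather than of flow rate alone.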
Abstract:
Bone is the second most widely transplanted tissue after blood. Synthetic alternatives are needed that can reduce the need for transplants and regenerate bone by acting as active temporary templates for bone growth. Bioactive glasses are one of the most promising bone replacement/regeneration materials because they bond to existing bone, are degradable and stimulate new bone growth by the action of their dissolution products on cells. Sol-gel-derived bioactive glasses can be foamed to produce interconnected macropores suitable for tissue ingrowth, in particular cell migration, cell penetration and vascularization. The scaffolds fulfil many of the criteria of an ideal synthetic bone graft, but are not suitable for all bone defect sites because they are brittle. One strategy for improving the toughness of the scaffolds without losing their other beneficial properties is to synthesize inorganic/organic hybrids. These hybrids have polymers introduced into the sol-gel process so that the organic and inorganic components interact at the molecular level, providing control over mechanical properties and degradation rates. However, a full understanding of how each feature or property of the glass and hybrid scaffolds affects cellular response is needed to optimize the materials and ensure long-term success and clinical products. This review focuses on the techniques that have been developed for characterizing the hierarchical structures of sol-gel glasses and hybrids, from atomic-scale amorphous networks, through the covalent bonding between components in hybrids and nanoporosity, to quantifying the open macroporous networks of the scaffolds. Methods for non-destructive in situ monitoring of degradation and bioactivity mechanisms of the materials are also included. © 2012 The Royal Society.
Abstract:
The work utilising a new material for contact lenses has fallen into three parts. Physiological considerations: since the cornea is devoid of blood vessels, its oxygen is derived from the atmosphere. Early hydrophilic gel contact lenses interrupted the flow of oxygen, and corneal insult resulted. Three techniques of fenestration were tried to overcome this problem. High-speed drilling with 0.1 mm diameter twist drills was found to be mechanically successful, but under clinical conditions mucous blockage of the fenestrations occurred. An investigation was made into the amount of oxygen arriving at the corneal interface, related to gel lens thickness. The results indicated an improvement in corneal oxygen as lens thickness was reduced; the mechanism is thought to be a form of mechanical pump. A series of clinical studies confirmed the experimental work, the use of thin lenses removing the symptoms of corneal hypoxia. Design: the parameters of lens back curvature, lens thickness and lens diameter have been isolated and related to three criteria of vision: (a) visual acuity, (b) visual stability and (c) induced astigmatism. From the results achieved, a revised and basically successful design of lens has been developed. Comparative study: the developed form of lens was compared with traditional lenses in a controlled survey. Twelve factors were assessed over a twenty-week period of wear using a total of eighty-four patients. The results of this study indicate that whilst the expected changes were noted with the traditional lens wearers, gel lens wearers showed no discernible change in any of the factors measured, with the exception of one parameter. In addition to a description of the completed work, further investigations are suggested which, it is hoped, would further improve the optical performance of gel lenses.
Abstract:
Purpose - Measurements obtained from the right and left eye of a subject are often correlated whereas many statistical tests assume observations in a sample are independent. Hence, data collected from both eyes cannot be combined without taking this correlation into account. Current practice is reviewed with reference to articles published in three optometry journals, viz., Ophthalmic and Physiological Optics (OPO), Optometry and Vision Science (OVS), Clinical and Experimental Optometry (CEO) during the period 2009–2012. Recent findings - Of the 230 articles reviewed, 148/230 (64%) obtained data from one eye and 82/230 (36%) from both eyes. Of the 148 one-eye articles, the right eye, left eye, a randomly selected eye, the better eye, the worse or diseased eye, or the dominant eye were all used as selection criteria. Of the 82 two-eye articles, the analysis utilized data from: (1) one eye only rejecting data from the adjacent eye, (2) both eyes separately, (3) both eyes taking into account the correlation between eyes, or (4) both eyes using one eye as a treated or diseased eye, the other acting as a control. In a proportion of studies, data were combined from both eyes without correction. Summary - It is suggested that: (1) investigators should consider whether it is advantageous to collect data from both eyes, (2) if one eye is studied and both are eligible, then it should be chosen at random, and (3) two-eye data can be analysed incorporating eyes as a ‘within subjects’ factor.
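Recommendation (3) above, incorporating eyes as a 'within subjects' factor, can be illustrated in its simplest form: analyse one value (or difference) per subject rather than pooling 2n eyes as if they were independent observations. This sketch uses invented measurements, not data from the reviewed articles:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical measurements from the right and left eyes of the same subjects;
# fellow-eye values are correlated, so the 2n readings are not independent
right = [15.2, 17.8, 14.1, 19.5, 16.3, 18.0]
left = [14.8, 17.1, 14.6, 18.9, 16.0, 17.2]

# Incorrect approach: pooling both eyes as 12 independent observations
# inflates the effective sample size and understates standard errors
pooled_n = len(right) + len(left)

# Simplest within-subject approach: one difference per subject,
# i.e. a paired analysis with n = number of subjects
diffs = [r - l for r, l in zip(right, left)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / sqrt(n))  # paired t-statistic, df = n - 1

print(f"subjects: {n}, pooled eyes: {pooled_n}, paired t = {t_stat:.2f}")
```

More general designs (e.g. unequal numbers of eyes per subject) would call for a mixed-effects model with subject as a random effect, which is beyond this sketch.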
Abstract:
In 1998 the Accounting Standards Board (ASB) published FRS 13, ‘Derivatives and other Financial Instruments: Disclosures’. This laid down the requirements for disclosures of an entity’s policies, objectives and strategies in using financial instruments, their impact on its risk, performance and financial condition, and details of how risks are managed. FRS 13 became effective in March 1999, and this paper uses the 1999 annual reports of UK banks to evaluate the usefulness of disclosures from a user’s perspective. Usefulness is measured in terms of the criteria of materiality, relevance, reliability, comparability and understandability as defined in the ASB’s Statement of Principles (ASB, 1999). Our findings suggest that the narrative disclosures are generic in nature, the numerical data incomplete and not always comparable, and that it is difficult for the user to combine both narrative and numerical information in order to assess the banks’ risk profile. Our overall conclusion is therefore that current UK financial reporting practices are of limited help to users wishing to assess the scale of an institution’s financial risk exposure.
Abstract:
The question of forming an aim-oriented description of the object domain of a decision support process is outlined. Two main problems of estimating and evaluating data and knowledge uncertainty in decision support systems, the direct and the inverse, are formulated. Three conditions serving as formalized criteria for the aim-oriented construction of the input, internal and output spaces of a decision support system are proposed. Definitions of apparent and hidden data uncertainties on a measuring scale are given.
Abstract:
Based on a robust analysis of the existing literature on performance appraisal (PA), this paper makes a case for an integrated framework of effectiveness of performance appraisal (EPA). To achieve this, it draws on the expanded view of measurement criteria of EPA, i.e. purposefulness, fairness and accuracy, and identifies their relationships with ratee reactions. The analysis reveals that the expanded view of purposefulness includes more theoretical anchors for the purposes of PA and relates to various aspects of human resource functions, e.g. feedback and goal orientation. The expansion in the PA fairness criterion suggests certain newly established nomological networks, which were ignored in the past, e.g. the relationship between distributive fairness and organization-referenced outcomes. Further, refinements in PA accuracy reveal a more comprehensive categorization of rating biases. Coherence among measurement criteria has resulted in a ratee reactions-based integrated framework, which should be useful for both researchers and practitioners.
Abstract:
Membership in well-structured teams, which show clarity in team and individual goals, meet regularly, and recognize diverse skills of their members, is known to reduce stress. This study examined how membership of well-structured teams was associated with lower levels of strain, when testing a work stressors-to-strains relationship model across the three levels of team structure, namely well-structured, poorly structured (do not fulfill all the criteria of well-structured teams) and no team. The work stressors tested, were quantitative overload and hostile environment, whereas strains were measured through job satisfaction and intention to leave job. This investigation was carried out on a random sample of 65,142 respondents in acute/specialist National Health Service hospitals across the UK. Using multivariate analysis of variance, statistically significant differences between means across the three groups of team structure, with mostly moderate effect sizes, were found for the study variables. Those in well-structured teams have the highest levels of job satisfaction and the least intention to leave job. Multigroup structural equation modelling confirmed the model's robustness across the three groups of team structure. Work stressors explained 45%, 50% and 65% of the variance of strains for well-structured, poorly structured and no team membership, respectively. An increase of one standard deviation in work stressors, resulted in an increase in 0.67, 0.70 and 0.81 standard deviations in strains for well-structured, poorly structured and no team membership, respectively. This investigation is an eye-opener for hospitals to work towards achieving well-structured teams, as this study shows weaker stressor-to-strain relationships for members of these teams.
Abstract:
The article presents how the balance-sheet recognition and reporting criteria of emission rights can be mapped within the currently effective International Financial Reporting Standards (IFRS). The study focuses on the operator, who falls under the European Union Emissions Trading Scheme because its industrial activity pollutes the Earth's atmosphere with carbon dioxide. The operator appears as the owner of the emission rights. The article examines the processes through which these units may come to be owned, and the possibilities the IFRSs provide for measuring rights obtained from different sources.