988 results for capability evaluation
Abstract:
The purpose of this study was to evaluate the determinism of the AS-Interface network and the three main families of control systems that may use it, namely PLC, PC and RTOS. During the course of this study the PROFIBUS and Ethernet field-level networks were also considered, in order to ensure that they would not introduce unacceptable latencies into the overall control system. This research demonstrated that an incorrectly configured Ethernet network introduces unacceptable latencies of variable duration into the control system; care must therefore be exercised if the determinism of a control system is not to be compromised. This study introduces a new concept of using statistics and process capability metrics, in the form of Cpk values, to specify how suitable a control system is for a given control task. The PLC systems that were tested demonstrated extremely deterministic responses, but when a large number of iterations were introduced into the user program, the mean control system latency was much too great for an AS-I network. Thus the PLC was found to be unsuitable for an AS-I network if a large, complex user program is required. The PC systems that were tested were non-deterministic and had latencies of variable duration. These latencies became extremely exaggerated when a graphing ActiveX control was included in the control application. These PC systems also exhibited a non-normal frequency distribution of control system latencies, and as such are unsuitable for implementation with an AS-I network. The RTOS system that was tested overcame the problems identified with the PLC systems and produced an extremely deterministic response, even when a large number of iterations were introduced into the user program. It is therefore capable of providing a suitably deterministic control system response even when an extremely large, complex user program is required.
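The Cpk metric borrowed here from process capability analysis is straightforward to compute. The following sketch (illustrative only; the latency samples and the 5 ms budget are invented, not the study's data) scores a set of control-system latencies against a hypothetical specification window:

```python
# Illustrative Cpk computation for control-system latencies (invented data).
import statistics

def cpk(samples, lower_spec, upper_spec):
    """Process capability index: distance from the mean to the nearest
    specification limit, in units of three standard deviations."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return min(upper_spec - mu, mu - lower_spec) / (3 * sigma)

# Hypothetical latency samples (ms) against a 5 ms cycle-time budget.
latencies_ms = [1.02, 0.98, 1.05, 1.01, 0.97, 1.03, 0.99, 1.04]
print(f"Cpk = {cpk(latencies_ms, 0.0, 5.0):.2f}")  # >= 1.33 is commonly deemed capable
```

Note that Cpk presupposes an approximately normal distribution, which is why the non-normal latency distributions observed on the PC systems are disqualifying.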
Abstract:
PURPOSE: To determine the local control and complication rates for children with papillary and/or macular retinoblastoma progressing after chemotherapy and undergoing stereotactic radiotherapy (SRT) with a micromultileaf collimator. METHODS AND MATERIALS: Between 2004 and 2008, 11 children (15 eyes) with macular and/or papillary retinoblastoma were treated with SRT. The mean age was 19 months (range, 2-111). Of the 15 eyes, 7, 6, and 2 were classified as International Classification of Intraocular Retinoblastoma Group B, C, and E, respectively. The delivered dose of SRT was 50.4 Gy in 28 fractions using a dedicated micromultileaf collimator linear accelerator. RESULTS: The median follow-up was 20 months (range, 13-39). Local control was achieved in 13 eyes (87%). The actuarial 1- and 2-year local control rates were both 82%. SRT was well tolerated. Late adverse events were reported in 4 patients. Of the 4 patients, 2 had developed focal microangiopathy 20 months after SRT; 1 had developed a transient recurrence of retinal detachment; and 1 had developed bilateral cataracts. No optic neuropathy was observed. CONCLUSIONS: Linear accelerator-based SRT for papillary and/or macular retinoblastoma in children resulted in excellent tumor control rates with acceptable toxicity. Additional research regarding SRT and its intrinsic organ-at-risk sparing capability is justified in the framework of prospective trials.
Abstract:
In order to improve the specificity and sensitivity of techniques for the diagnosis of human anisakidosis, an affinity chromatography method for the purification of species-specific antigens from Anisakis simplex third-stage larvae (L3) has been developed. New Zealand rabbits were immunized with A. simplex or Ascaris suum antigens or inoculated with Toxocara canis embryonated eggs. The specific IgG antibodies were isolated by means of protein A-Sepharose CL-4B bead columns. Anti-A. simplex and anti-A. suum IgG were coupled to CNBr-activated Sepharose 4B. For the purification of the larval A. simplex antigens, these were loaded onto the anti-A. simplex column and the bound antigens eluted. To eliminate the epitopes responsible for cross-reactions, the A. simplex specific proteins were loaded onto the anti-A. suum column. To prove the specificity of the isolated proteins, immunochemical analyses by polyacrylamide gel electrophoresis were carried out. Further, we studied the different ELISA responses to the different A. simplex antigenic preparations used, observing their capability to discriminate among the different antisera raised in rabbits (anti-A. simplex, anti-A. suum, anti-T. canis). The discriminatory capability with the anti-T. canis antisera was good using the larval A. simplex crude extract (CE) antigen. When the larval A. simplex CE antigen was loaded onto CNBr-activated Sepharose 4B coupled to IgG from rabbits immunized with A. simplex CE antigen, its capability to discriminate between A. simplex and A. suum was improved, and increased further in the case of T. canis. The best results were obtained using larval A. simplex CE antigen loaded onto CNBr-activated Sepharose 4B coupled to IgG from rabbits immunized with adult A. suum CE antigen. After comparing different serum dilutions and antigen concentrations, we selected a working serum dilution of 1/400 and an antigen concentration of 1 µg/ml.
Abstract:
As part of the ACuteTox project, aimed at the development of non-animal testing strategies for predicting human acute oral toxicity, aggregating brain cell cultures (AGGR) were examined for their capability to detect organ-specific toxicity. Previous multicenter evaluations of in vitro cytotoxicity showed that some 20% of the tested chemicals exhibited significantly lower in vitro toxicity than expected from in vivo toxicity data. This was presumed to be due to toxicity at supracellular (organ or system) levels. To examine the capability of AGGR to alert for potential organ-specific toxicants, concentration-response studies were carried out in AGGR for 86 chemicals, taking as endpoints the mRNA expression levels of four selected genes. The lowest observed effect concentration (LOEC) determined for each chemical was compared with the IC20 reported for the 3T3/NRU cytotoxicity assay. A LOEC lower than the IC20 by at least a factor of 5 was taken as an alert for organ-specific toxicity. The results showed that the frequency of alerts increased with the level of toxicity observed in AGGR. Among the chemicals flagged as alerts were many compounds known for their organ-specific toxicity. These findings suggest that AGGR are suitable for the detection of organ-specific toxicity and that they could, in conjunction with the 3T3/NRU cytotoxicity assay, improve the predictive capacity of in vitro toxicity testing.
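The alert rule above reduces to a single comparison, shown here as a hedged sketch (the function name and values are illustrative, not from the study):

```python
# Sketch of the alert rule described above (invented example values).
def organ_toxicity_alert(loec, ic20, factor=5.0):
    """Flag a chemical when its LOEC in aggregating brain cell cultures
    is lower than the 3T3/NRU IC20 by at least the given factor."""
    return loec <= ic20 / factor

print(organ_toxicity_alert(loec=0.4, ic20=10.0))  # True: 0.4 <= 10 / 5
```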
Abstract:
Thanks to decades of research, gait analysis has become an efficient tool. However, mainly due to the price of motion capture systems, standard gait laboratories can measure only a few consecutive steps of ground walking. Recently, wearable systems have been proposed to measure human motion without volume limitation. Although accurate, these systems are incompatible with most existing calibration procedures, and several years of research will be necessary for their validation. A new approach, consisting of using a stationary system with a small capture volume for the calibration procedure and then measuring gait with a wearable system, could be very advantageous. It could benefit from the knowledge related to stationary systems, allow long-distance monitoring and provide new descriptive parameters. The aim of this study was to demonstrate the potential of this approach. Thus, a combined system was proposed to measure the 3D lower body joint angles and segmental angular velocities. It was then assessed in terms of reliability with respect to the calibration procedure, repeatability and concurrent validity. The dispersion of the joint angles across calibrations was comparable to that of stationary systems, and good reliability was obtained for the angular velocities. The repeatability results confirmed that mean cycle kinematics of long-distance walks could be used for comparison between subjects, and pointed to the interest of the variability between cycles. Finally, kinematic differences were observed between participants with different ankle conditions. In conclusion, this study demonstrated the potential of a mixed approach for human movement analysis.
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: Twelve software tools were identified, tested and ranked, representing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available for some programs. Moreover, 8 programs offer the ability to add new drug models based on population PK data. Ten computer tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Although two software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including for non-experienced users. Although interest in TDM tools is growing and efforts have been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capability and automated report generation.
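As an illustration of the Bayesian a posteriori step these tools automate, the sketch below estimates an individual clearance by maximum a posteriori (MAP) estimation for a one-compartment IV-bolus model and derives a dose suggestion; all priors, error terms and the measured concentration are invented for the example, not taken from any of the benchmarked programs:

```python
# MAP estimate of individual clearance for a one-compartment IV-bolus model,
# then a simple dose suggestion. All values are hypothetical.
import math
from scipy.optimize import minimize_scalar

dose, t_obs, c_obs = 500.0, 12.0, 6.0   # dose (mg), sampling time (h), level (mg/L)
cl_pop, omega_cl = 4.0, 0.3             # population clearance (L/h), log-scale SD
v, sigma = 30.0, 0.5                    # volume (L), residual error SD (mg/L)

def neg_log_posterior(log_cl):
    cl = math.exp(log_cl)
    c_pred = dose / v * math.exp(-cl / v * t_obs)      # model-predicted level
    log_lik = -0.5 * ((c_obs - c_pred) / sigma) ** 2   # Gaussian residual error
    log_prior = -0.5 * ((log_cl - math.log(cl_pop)) / omega_cl) ** 2
    return -(log_lik + log_prior)

res = minimize_scalar(neg_log_posterior, bounds=(-2.0, 4.0), method="bounded")
cl_map = math.exp(res.x)
target_css = 8.0                                       # target steady state (mg/L)
print(f"MAP clearance: {cl_map:.2f} L/h")
print(f"Suggested infusion rate: {target_css * cl_map:.1f} mg/h")
```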
Abstract:
Quality management concrete allows the contractor to develop the mix design for the portland cement concrete. This research was initiated to gain knowledge about contractor mix designs. An experiment was done to determine the variation in cylinders, beams, and cores that could be used to test the strength of the contractor's mix. In addition, the contractor's cylinder strengths and gradations were analyzed for statistical stability and process capability. This research supports the following conclusions: (1) The mold type used to cast the concrete cylinders had an effect on the compressive strength of the concrete. The 4.5-in. by 9-in. (11.43-cm by 22.86-cm) cylinders had lower strength at a 95% confidence level than the 4-in. by 8-in. (10.16-cm by 20.32-cm) and 6-in. by 12-in. (15.24-cm by 30.48-cm) cylinders. (2) The low vibration consolidation effort had the lowest strength of the three consolidation efforts. In particular, an interaction occurred between the low vibration effort and the 4.5-in. by 9-in. (11.43-cm by 22.86-cm) mold. This interaction produced very low compressive strengths when compared with the other consolidation efforts. (3) An R-squared correlation of 0.64 was found between the 28-day cylinder and 28-day core compressive strengths. (4) The compressive strength results of the process control testing were not in statistical control. The aggregate gradations were mostly in statistical control. The gradation process was capable of meeting specification requirements. However, many of the sieves were off target. (5) The fineness modulus of the aggregate gradations did not correlate well with the strength of the concrete. However, this is not surprising considering that the gradation tests and the strength tests did not represent the same material. In addition, the concrete has many other variables affecting its strength that were not controlled.
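The statistical-control analyses mentioned in conclusion (4) are typically performed with control charts. A minimal sketch of an individuals chart with moving-range limits (the strength values are invented, not the study's data):

```python
# Individuals control chart with moving-range limits (invented strength data).
import statistics

strengths_psi = [4450, 4520, 4380, 4600, 4490, 4550, 4410, 4700, 4480, 4530]
moving_ranges = [abs(a - b) for a, b in zip(strengths_psi[1:], strengths_psi)]
center = statistics.mean(strengths_psi)
mr_bar = statistics.mean(moving_ranges)
ucl = center + 2.66 * mr_bar   # 2.66 = 3 / d2, with d2 = 1.128 for n = 2
lcl = center - 2.66 * mr_bar

out_of_control = [x for x in strengths_psi if not lcl <= x <= ucl]
print(f"Limits: [{lcl:.0f}, {ucl:.0f}] psi; out-of-control points: {out_of_control}")
```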
Abstract:
The Iowa Department of Transportation (DOT) evaluated the PAS I Road Survey System from PAVEDEX, Inc. of Spokane, Washington. This system uses video photography to identify and quantify pavement cracking and patching distresses. Comparisons were made to procedures currently used in the State. Interstate highways, county roads, city streets, and two shoulder sections were evaluated. Variables included travel speeds, surface type and texture, and traffic control conditions. Repeatability and distress identification were excellent on rigid pavements. Differences in distress identification and the effect of surface textures in the flexible test sections limited the repeatability and the correlation of the data with the Iowa DOT method. Cost data indicate that PAVEDEX is capable of providing comparable results with improved accuracy at a reasonable cost, albeit one exceeding that currently experienced by the Iowa DOT. PAVEDEX is capable of providing network-level pavement condition data at highway speeds and, with manual evaluation, analyzing the data to identify 1/8-inch cracks at approximately 2-3 lane-miles per hour. Photo-logging capability is also included in the unit.
Abstract:
The Iowa Method for bridge deck overlays has been very successful in Iowa since its adoption in the 1970s. This method involves removal of deteriorated portions of a bridge deck followed by placement of a layer of dense (Type O) Portland Cement Concrete (PCC). The challenge encountered with this type of bridge deck overlay is that the PCC must be mixed on-site, brought to the placement area and placed with specialized equipment. This adds considerably to the cost and limits contractor selection, because not all contractors have the capability or equipment required. If it were possible for a ready-mix supplier to manufacture and deliver a dense PCC to the grade, then any competent bridge deck contractor would be able to complete the job. However, Type O concrete mixes are very stiff and generally cannot be transported and placed with ready-mix trucks. This is where a “superplasticizer” comes into use. Addition of this admixture provides a substantial increase in the workability of the concrete, to the extent that it can be delivered to the site and placed on the deck directly out of a ready-mix truck. The objective of this research was to determine the feasibility of placing a deck overlay of this type on county bridges within the limits of county budgets and workforce/contractor availability.
Abstract:
The aim of this thesis was to assess and identify the factors affecting the supplier relationship in the cooperation between JOT Automation Group Oyj and its subcontractors, and to construct a supplier evaluation process that improves the company's competitiveness. The work focused on suppliers operating in the general materials and components markets, in the manufacture of production systems for the electronics industry. First, the literature on supplier relationships and their evaluation was reviewed. To support the theory, interviews were conducted and the primary needs and objectives for the evaluation process were charted. The finished process was tested in practice with two case examples. The process took the form of a whole divided into two tools: an audit, which assesses the supplier's capability to meet the requirements set for it, and supplier performance measurement, which continuously tests and compares the actual level of operations against the audit results. The thesis includes a description of, and guidance for, the use of the supplier evaluation process. Using the process reduces the supplier-related risks concerning material availability and procurement. Experience from the case examples showed that the process makes it possible to drill into the important core areas and to develop them in a way that benefits both the supplier and the buying company. The supplier evaluation process develops into a standard practice for maintaining the relationship between the company and its supplier.
Abstract:
Organizations acquire resources, skills and technologies to find the ultimate mix of capabilities that will make them a winner in the competitive market. These are all important factors that need to be taken into account in organizations operating in today's business environment. So far, there have been no significant studies on organizational capabilities in the field of purchasing and supply management (PSM). The literature review shows that PSM capabilities need to be studied more comprehensively. This study attempts to reveal and fill this gap by providing a PSM capability matrix that identifies the key PSM capabilities, approached from two angles: there are three primary PSM capabilities and nine sub-capabilities and, moreover, the individual and organizational PSM capabilities are identified and evaluated. The former refers to the PSM capability matrix of this study, which is based on the strategic and operative PSM capabilities that complement the economic ones, while the latter relates to the evaluation of the PSM capabilities, such as the buyer profiles of individual PSM capabilities and the PSM capability map of the organizational ones. This is a constructive case study. The aim is to define what purchasing and supply management capabilities are and how they can be evaluated. This study presents a PSM capability matrix to identify and evaluate the capabilities, and to define capability gaps by comparing the ideal level of PSM capabilities to the realized ones. The research questions are investigated with two case organizations. This study argues that PSM capabilities can be classified into three primary categories with nine sub-categories and that, thus, a PSM capability matrix with four evaluation categories can be formed. The buyer profiles are moreover identified to reveal the PSM capability gap. The resource-based view (RBV) and the dynamic capabilities view (DCV) are used to define the individual and organizational capabilities. The PSM literature is also used to define the capabilities. The key findings of this study are: i) the PSM capability matrix to identify the PSM capabilities, ii) the evaluation of the capabilities to define PSM capability gaps, and iii) the presentation of the buyer profiles to identify the individual PSM capabilities and to define the organizational PSM capabilities. Dynamic capabilities are also related to the PSM capability gap. If a gap is identified, the organization can renew its PSM capabilities and thus create mutual learning and increase its organizational capabilities. Only then is there potential for dynamic capabilities. Based on this, the purchasing strategy, purchasing policy and procedures should be identified and implemented dynamically.
Abstract:
A company’s capability to map out its cost position compared to other market players is important for competitive decision making. One aspect of cost position is direct product cost, which illustrates the cost efficiency of a company’s product designs. If a company can evaluate and compare its own and other market players’ direct product costs, it can make better decisions in product development and management, manufacturing, sourcing, etc. The main objective of this thesis was to develop a cost evaluation process for competitors’ products. This objective includes a process description and an analysis tool for cost evaluations. Additionally, process implementation is discussed as well. The main result of this thesis was a process description consisting of a sixteen-step process and an Excel-based analysis tool. Since the literature in this field was quite limited, the proposed solution was assembled from many different theoretical concepts. It includes influences from reverse engineering, product cost assessment, benchmarking and cost-based decision making. This proposal will lead to more systematic and standardized cost position analyses and result in better cost transparency in decision making.
Abstract:
Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, the focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize the application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues: Structured derivations is a logic-based approach to teaching mathematics, where formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation, at the same time as they become more confident with formalisms. The Python programming language was originally designed with education in mind, and has a simple syntax compared to many other popular languages. The aim of using it in instruction is to address algorithms and their implementation in a way that allows focus to be put on learning algorithmic thinking and programming instead of on learning a complex syntax. Invariant-based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
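The flavor of invariant-based programming can be conveyed in Python itself by stating the loop invariant explicitly and checking it with assertions; the toy example below is illustrative only, not taken from the thesis:

```python
# A loop whose invariant is written down and checked at run time.
def sum_of_squares(n: int) -> int:
    """Return 0**2 + 1**2 + ... + (n - 1)**2."""
    total, i = 0, 0
    while i < n:
        # Invariant: total == sum of k**2 for k in range(i)
        assert total == sum(k * k for k in range(i))
        total += i * i
        i += 1
    assert total == sum(k * k for k in range(n))  # postcondition
    return total

print(sum_of_squares(10))  # 285
```

In the diagrammatic approach proper, the invariant is the starting point from which the loop body is derived and proved, rather than a runtime check added afterwards.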
Abstract:
The main objective of this thesis is to evaluate the economic and environmental effectiveness of three different renewable energy systems: solar PV, wind energy and biomass energy systems. Financial methods such as the Internal Rate of Return (IRR) and the Modified Internal Rate of Return (MIRR) were used to evaluate economic competitiveness. Seasonal variability in the power generation capability of the different renewable systems was also taken into consideration. In order to evaluate the environmental effectiveness of the different energy systems, default values in the GaBi software were used, with the functional unit defined as 1 kWh. The results show that solar PV systems are difficult to justify on both economic and environmental grounds. Wind energy performs better on both economic and environmental grounds and has the capability to compete with conventional energy systems. Biomass energy systems exhibit intermediate economic and environmental performance. Within each of these systems, results vary.
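As a hedged sketch of the financial methods named above, the snippet below computes IRR (by bisection) and MIRR for a hypothetical renewable-energy cash-flow series; all cash flows and rates are invented for illustration:

```python
# IRR by bisection and MIRR by the standard formula (invented cash flows).

def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.99, hi=1.0, tol=1e-9):
    """Rate where NPV crosses zero (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid          # NPV still positive: the IRR lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

def mirr(flows, finance_rate, reinvest_rate):
    """Positive flows compounded forward, negative flows discounted back."""
    n = len(flows) - 1
    fv_pos = sum(cf * (1 + reinvest_rate) ** (n - t)
                 for t, cf in enumerate(flows) if cf > 0)
    pv_neg = sum(cf / (1 + finance_rate) ** t
                 for t, cf in enumerate(flows) if cf < 0)
    return (fv_pos / -pv_neg) ** (1 / n) - 1

# Year-0 investment followed by eight years of net revenue (hypothetical).
flows = [-100_000] + [18_000] * 8
print(f"IRR  = {irr(flows):.2%}")
print(f"MIRR = {mirr(flows, finance_rate=0.05, reinvest_rate=0.04):.2%}")
```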
Abstract:
The standard squirrel-cage induction machine has nearly reached its maximum efficiency. In order to further increase the energy efficiency of electrical machines, the use of permanent magnets in combination with the robust design and the line-start capability of the induction machine is being extensively investigated. Many experimental designs have been suggested in the literature, but recently these line-start permanent-magnet machines (LSPMMs) have become off-the-shelf products available in a power range up to 7.5 kW. The permanent-magnet flux density is a function of the operating temperature. Consequently, the temperature will affect almost every electrical quantity of the machine, including current, torque, and efficiency. In this paper, the efficiency of an off-the-shelf 4-kW three-phase LSPMM is evaluated as a function of temperature, by both finite-element modeling and practical measurements. In order to obtain the stator, rotor, and permanent-magnet temperatures, lumped thermal modeling is used.
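The temperature dependence of the permanent-magnet flux density noted above is commonly captured with a linear remanence model; in the sketch below the coefficient is a typical reversible value for NdFeB magnets, assumed for illustration rather than taken from the paper:

```python
# Linear remanence model B_r(T) = B_r(20 °C) * (1 + alpha * (T - 20)).
def remanence(br_20: float, temp_c: float, alpha: float = -0.0011) -> float:
    """alpha ~ -0.11 %/K is a typical reversible coefficient for NdFeB."""
    return br_20 * (1 + alpha * (temp_c - 20))

for t in (20, 60, 100):
    print(f"{t:3d} °C: Br = {remanence(1.2, t):.3f} T")  # 1.2 T assumed at 20 °C
```

This roughly 9% drop in remanence between 20 °C and 100 °C is what propagates into the current, torque and efficiency changes the paper quantifies.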