976 results for measurement technology


Relevance: 30.00%

Abstract:

Many public organisations have been under great pressure in recent years to increase the efficiency and transparency of outputs, to rationalise the use of public resources, and to increase the quality of service delivery. In this context, public organisations were encouraged to introduce New Public Management reforms with the goal of improving organisational efficiency and effectiveness. This new public management model is based on measurement of outputs and outcomes, a clear definition of responsibilities, the transparency and accountability of governmental activities, and greater value for citizens. What type of performance measurement systems are used in police services? The literature shows that multidimensional models, such as the Balanced Scorecard, are important in many public organisations, such as municipalities, universities, and hospitals. Police services are characterised by complex, diverse objectives and stakeholders, so the performance measurement of these public services calls for a specific analysis. Based on a nationwide survey of all police chiefs of the Portuguese police force, we find that employee performance measurement is the main form of measurement. We also propose a strategic map for the Portuguese police service.

Relevance: 30.00%

Abstract:

This paper discusses the dependence of photodiode capacitance on incident light and applied voltage for three different devices: a double amorphous silicon pin-pin photodiode, a crystalline pin diode, and a single amorphous silicon pin diode. Double amorphous silicon diodes can be used as (de)multiplexer devices for optical communications. For short-range applications using plastic optical fibres, the WDM (wavelength-division multiplexing) technique can be applied in the visible light range to encode multiple signals. The experimental results consist of measurements of the photodiode capacitance under different conditions of incident light and applied voltage. The relation between the capacitive effects of the double diode and the quality of the internal semiconductor junction will be analysed, and the dynamics of charge accumulation will be measured when the photodiode is illuminated by pulsed monochromatic light.
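
As general background (an addition, not a result of this paper), the bias dependence of a diode's capacitance is often approximated by the depletion relation

    C_{depl}(V) = \frac{\varepsilon A}{W(V)}

where A is the device area, \varepsilon the permittivity and W(V) the depleted width, which widens with reverse bias until the intrinsic layer is fully depleted; trapped and photogenerated charge in the i-layer modifies the effective W and hence the measured capacitance.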

Relevance: 30.00%

Abstract:

Introduction: The Standard Uptake Value (SUV) is a measurement of the uptake in a tumour normalized on the basis of a distribution volume and is used to quantify 18F-Fluorodeoxyglucose (FDG) uptake in tumours, such as primary lung tumours. Several sources of error can affect its accuracy. Normalization can be based on body weight, body surface area (BSA) or lean body mass (LBM). The aim of this study is to compare the influence of three normalization volumes in the calculation of SUV: body weight (SUVW), BSA (SUVBSA) and LBM (SUVLBM), with and without glucose correction, in patients with a known primary lung tumour. The correlation between SUV and weight, height, blood glucose level, injected activity and time between injection and image acquisition is also evaluated. Methods: The sample included 30 subjects (8 female and 22 male) with primary lung tumour and a clinical indication for 18F-FDG Positron Emission Tomography (PET). Images were acquired on a Siemens Biograph scanner according to the department’s protocol. The maximum-pixel SUVW was obtained for each abnormal uptake focus through a semiautomatic VOI with Quantification 3D isocontour (threshold 2.5). The concentration of radioactivity (kBq/ml) was derived from SUVW, and SUVBSA, SUVLBM and the glucose-corrected SUVs were then calculated mathematically. Results: Statistically significant differences were observed between SUVW, SUVBSA and SUVLBM and between SUVWgluc, SUVBSAgluc and SUVLBMgluc (p < 0.05). The blood glucose level showed significant positive correlations with SUVW (r=0.371; p=0.043) and SUVLBM (r=0.389; p=0.034). SUVBSA was independent of variations in the blood glucose level. Conclusion: The measurement of radiopharmaceutical tumour uptake normalized on the basis of different distribution volumes is still variable. Further investigation of this subject is recommended.
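
As an illustrative sketch only (the paper does not state which BSA and LBM formulas were used, so the Du Bois and James variants are assumed here), the compared normalizations can be computed as follows:

    # Hedged sketch of the SUV normalizations compared above (Python).
    # The Du Bois BSA and James LBM formulas are assumptions, not taken from the paper.
    def suv_bw(c_tissue_kbq_ml, injected_mbq, weight_kg):
        # Body-weight SUV: tissue concentration / (injected activity / body mass), 1 g ~ 1 mL
        return c_tissue_kbq_ml / (injected_mbq * 1000.0 / (weight_kg * 1000.0))

    def bsa_du_bois_m2(weight_kg, height_cm):
        # Du Bois body surface area in m^2 (assumed variant)
        return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

    def lbm_james_kg(weight_kg, height_cm, female):
        # James lean body mass in kg (assumed variant)
        if female:
            return 1.07 * weight_kg - 148.0 * (weight_kg / height_cm) ** 2
        return 1.10 * weight_kg - 128.0 * (weight_kg / height_cm) ** 2

    def glucose_corrected(suv, blood_glucose_mg_dl, reference_mg_dl=100.0):
        # Common glucose correction: scale the SUV by blood glucose relative to a reference level
        return suv * blood_glucose_mg_dl / reference_mg_dl

SUVBSA and SUVLBM follow the same pattern as suv_bw, with the body mass in the denominator replaced by the BSA or LBM value.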

Relevance: 30.00%

Abstract:

The General Theory of Relativity predicts that a massive object subjected to acceleration under certain conditions must emit gravitational waves. These waves are highly energetic, but they interact with matter only very weakly and their sources are very distant, so their detection is an extraordinarily difficult task. Consequently, the detection of these waves is considered much more feasible with instruments located in space. The LISA (Laser Interferometer Space Antenna) mission was conceived with this objective; it is a joint NASA-ESA mission with a launch planned for 2020-2025. To reduce the risks involved in the first use of untested technology, together with the high cost of the LISA mission, a precursor mission (LISA Pathfinder) was conceived. This mission will carry very advanced instruments: the LTP (LISA Technology Package), developed by the European Union, which will test the LISA technology, and the Drag-Free flying system, which will be in charge of testing a set of thrusters used to control the attitude and position of the satellite with nanometre precision. In particular, the LTP consists of two test masses separated by 35 centimetres and a laser interferometer that measures the variation of the relative distance between them. In this way, the LTP will measure the performance of the equipment and the possible disturbances that affect the measurement. The noise sources include, among others, the solar wind and solar radiation pressure, electrostatic charging, thermal gradients, voltage fluctuations and internal forces. One of these possible noise sources is the subject of this doctoral thesis project: the presence of magnetic fields inside the LTP, which exert a force on the test masses, together with their estimation and control, taking into account the magnetic characteristics of the experiment and the dynamics of the satellite.
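
For illustration (an expression added here as an assumption, not quoted from the thesis summary), the magnetic force on a test mass of volume V, remanent magnetization M and magnetic susceptibility \chi in a field B is commonly written as

    \mathbf{F} \simeq V \left\langle \left[ \left( \mathbf{M} + \frac{\chi}{\mu_0}\,\mathbf{B} \right) \cdot \nabla \right] \mathbf{B} \right\rangle

so both the field B and its gradient inside the LTP must be estimated and controlled in order to bound this contribution to the noise budget.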

Relevance: 30.00%

Abstract:

PURPOSE: To evaluate the effects of recent advances in magnetic resonance imaging (MRI) radiofrequency (RF) coil and parallel imaging technology on the consistency of brain volume measurements. MATERIALS AND METHODS: In all, 103 whole-brain MRI volumes were acquired on a clinical 3T MRI system equipped with a 12- and a 32-channel head coil, using the T1-weighted protocol employed in the Alzheimer's Disease Neuroimaging Initiative study, with parallel imaging accelerations ranging from 1 to 5. An experienced reader performed qualitative ratings of the images. For quantitative analysis, differences in composite width (CW, a measure of image similarity) and boundary shift integral (BSI, a measure of whole-brain atrophy) were calculated. RESULTS: Intra- and intersession comparisons of CW and BSI measures from scans with equal acceleration demonstrated excellent scan-rescan accuracy, even at the highest acceleration applied. Pairs of scans acquired with different accelerations exhibited poor scan-rescan consistency only when differences in the acceleration factor were maximal. A change in the coil hardware between compared scans was found to bias the BSI measure. CONCLUSION: The most important findings are that accelerated acquisitions appear to be compatible with the assessment of high-quality quantitative information and that, for the highest scan-rescan accuracy in serial scans, the acquisition protocol should be kept as consistent as possible over time. J. Magn. Reson. Imaging 2012;36:1234-1240. ©2012 Wiley Periodicals, Inc.

Relevance: 30.00%

Abstract:

A crucial step in the investigation of patients with coronary artery disease (CAD) is the calculation of the left ventricular ejection fraction (LVEF). It is therefore imperative to estimate the value of LVEF precisely, which can be done with myocardial perfusion scintigraphy. The present study aimed to establish and compare the estimation performance of the quantitative parameters of two reconstruction methods: filtered backprojection (FBP) and ordered-subset expectation maximization (OSEM). METHODS: A beating-heart phantom with known values of end-diastolic volume, end-systolic volume, and LVEF was used. Quantitative gated SPECT/quantitative perfusion SPECT software was used to obtain these quantitative parameters in a semiautomatic mode. The Butterworth filter was used in FBP, with cutoff frequencies between 0.2 and 0.8 cycles per pixel combined with orders of 5, 10, 15, and 20. Sixty-three reconstructions were performed using 2, 4, 6, 8, 10, 12, and 16 OSEM subsets, combined with 2, 4, 6, 8, 10, 12, 16, 32, and 64 iterations. RESULTS: With FBP, the values of the end-diastolic, end-systolic, and stroke volumes rise as the cutoff frequency increases, whereas the value of LVEF diminishes. The same pattern is observed with the OSEM reconstruction. However, OSEM provides a more precise estimation of the quantitative parameters, especially with the combinations of 2 iterations × 10 subsets and 2 iterations × 12 subsets. CONCLUSION: The OSEM reconstruction presents better estimations of the quantitative parameters than does FBP. This study recommends the use of 2 iterations with 10 or 12 subsets for OSEM, and a cutoff frequency of 0.5 cycles per pixel with orders of 5, 10, or 15 for FBP, as giving the best estimations of the left ventricular volumes and ejection fraction in myocardial perfusion scintigraphy.
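
As a simple aside (not part of the study itself), the ejection fraction quantified by the gated SPECT software follows directly from the end-diastolic and end-systolic volumes:

    # Minimal sketch: left ventricular ejection fraction from gated volumes (Python).
    def ejection_fraction_percent(edv_ml, esv_ml):
        # LVEF (%) = stroke volume / end-diastolic volume * 100
        stroke_volume_ml = edv_ml - esv_ml
        return 100.0 * stroke_volume_ml / edv_ml

    # Example: EDV = 120 mL, ESV = 50 mL gives an LVEF of about 58%
    print(ejection_fraction_percent(120.0, 50.0))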

Relevance: 30.00%

Abstract:

A new approach to the local measurement of residual stress in microstructures is described in this paper. The presented technique takes advantage of the combined milling and imaging capabilities of focused ion beam (FIB) equipment to scale down the widely known hole-drilling method. This method consists of drilling a small hole in a solid with inherent residual stresses and measuring the strains/displacements caused by the local stress release that takes place around the hole. In the present case, the displacements caused by the milling are determined by applying digital image correlation (DIC) techniques to high-resolution micrographs taken before and after the milling process. The residual stress value is then obtained by fitting the measured displacements to the analytical solution for the displacement field. The feasibility of this approach has been demonstrated on a micromachined silicon nitride membrane, showing that the method has high potential for applications in the mechanical characterization of micro/nanoelectromechanical systems.
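
A minimal sketch of the fitting step, assuming the simplified case of an equibiaxial residual stress released by a circular through-hole, for which the radial surface displacement is u_r(r) = sigma (1 + nu) a^2 / (E r); the paper fits the full analytical displacement field, so the simplification and the values below are illustrative only:

    # Hedged sketch: least-squares fit of a residual stress value to DIC displacements (Python).
    import numpy as np

    def fit_residual_stress(r, u_r, a, E, nu):
        # u_r = sigma * k(r); k(r) is known from the hole radius a and elastic constants E, nu
        k = (1.0 + nu) * a ** 2 / (E * r)
        sigma, _, _, _ = np.linalg.lstsq(k[:, None], u_r, rcond=None)
        return sigma[0]

    # Synthetic example (SI units); material values are only indicative of silicon nitride
    r = np.linspace(2e-6, 10e-6, 20)                 # radial positions from the hole centre (m)
    E, nu, a, true_sigma = 250e9, 0.23, 1e-6, 200e6  # Young's modulus, Poisson ratio, hole radius, stress
    u = true_sigma * (1.0 + nu) * a ** 2 / (E * r)   # displacements the DIC step would provide
    print(fit_residual_stress(r, u, a, E, nu))       # recovers ~2e8 Pa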

Relevance: 30.00%

Abstract:

Photon migration in a turbid medium has been modeled in many different ways. The motivation for such modeling is based on technology that can be used to probe potentially diagnostic optical properties of biological tissue. Surprisingly, one of the more effective models is also one of the simplest. It is based on statistical properties of a nearest-neighbor lattice random walk. Here we develop a theory allowing one to calculate the number of visits by a photon to a given depth, if it is eventually detected at an absorbing surface. This mimics cw measurements made on biological tissue and is directed towards characterizing the depth reached by photons injected at the surface. Our development of the theory uses formalism based on the theory of a continuous-time random walk (CTRW). Formally exact results are given in the Fourier-Laplace domain, which, in turn, are used to generate approximations for parameters of physical interest.
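
A crude Monte Carlo counterpart of the lattice picture described above (an illustration only; the paper's results are analytical, obtained in the Fourier-Laplace domain with the CTRW formalism):

    # Hedged sketch: nearest-neighbour cubic-lattice walk, counting visits to a given depth
    # for photons that are eventually absorbed (detected) at the surface plane z = 0 (Python).
    import random

    def mean_visits_to_depth(target_depth, n_walkers=20000, max_steps=100000):
        total_visits, detected = 0, 0
        for _ in range(n_walkers):
            z, visits = 1, 0                      # injection: the first step carries the photon to depth 1
            for _ in range(max_steps):
                if random.randrange(3) < 2:       # two of the three axes are lateral moves
                    continue                      # lateral moves leave the depth unchanged
                z += random.choice((-1, 1))
                if z == target_depth:
                    visits += 1
                if z == 0:                        # photon reaches the absorbing surface: detected
                    detected += 1
                    total_visits += visits
                    break
        return total_visits / detected if detected else float("nan")

    print(mean_visits_to_depth(5))                # average number of visits to depth 5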

Relevance: 30.00%

Abstract:

This report documents an extensive field program carried out to identify the relationships between soil engineering properties, as measured by various in situ devices, and the results of machine compaction monitoring using prototype compaction monitoring technology developed by Caterpillar Inc. The primary research tasks for this study were the following: (1) experimental testing and statistical analyses to evaluate machine power in terms of the engineering properties of the compacted soil (e.g., density, strength, stiffness) and (2) recommendations for using the compaction monitoring technology in practice. The compaction monitoring technology includes sensors that monitor the power consumed to move the compaction machine, an on-board computer and display screen, and a GPS system to map the spatial location of the machine. In situ soil density, strength, and stiffness data characterized the soil at various stages of compaction. For each test strip or test area, in situ soil properties were compared directly to machine power values to establish statistical relationships, and statistical models were developed to predict soil density, strength, and stiffness from the machine power values. Field data for multiple test strips were evaluated, and the coefficient of determination (R2) was generally used to assess the quality of the regressions. Strong correlations were observed between averaged machine power and the field measurement data; the relationships are based on the compaction model derived from laboratory data. R2 values were consistently higher for thicker lifts than for thin lifts, indicating that the depth influencing the machine power response exceeds the representative lift thickness encountered under field conditions. The Caterpillar Inc. compaction monitoring technology also identified localized areas of an earthwork project with weak or poorly compacted soil, and the soil properties at these locations were verified using in situ test devices. This report also documents the steps required to implement the compaction monitoring technology evaluated.
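
As an illustrative sketch (the variable names and numbers below are hypothetical, not Caterpillar data), a strip-level regression of an in situ property on averaged machine power, assessed with the coefficient of determination, could look like this:

    # Hedged sketch: ordinary least-squares regression and R^2 for one test strip (Python).
    import numpy as np

    def fit_and_r2(machine_power, soil_property):
        slope, intercept = np.polyfit(machine_power, soil_property, 1)
        predicted = slope * machine_power + intercept
        ss_res = np.sum((soil_property - predicted) ** 2)
        ss_tot = np.sum((soil_property - np.mean(soil_property)) ** 2)
        return slope, intercept, 1.0 - ss_res / ss_tot

    power = np.array([20.0, 25.0, 30.0, 35.0, 40.0])    # averaged machine power (hypothetical units)
    density = np.array([1.65, 1.72, 1.78, 1.83, 1.86])  # in situ dry density, g/cm^3 (made-up values)
    print(fit_and_r2(power, density))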

Relevance: 30.00%

Abstract:

Computed Tomography Angiography (CTA) images are the standard for assessing peripheral artery disease (PAD). This paper presents a Computer Aided Detection (CAD) and Computer Aided Measurement (CAM) system for PAD. The CAD stage detects the arterial network using a 3D region growing method and a fast 3D morphology operation. The CAM stage aims to accurately measure the artery diameters from the detected vessel centerline, compensating for the partial volume effect using Expectation Maximization (EM) and a Markov Random Field (MRF). The system has been evaluated on phantom data and applied to fifteen (15) CTA datasets, where the detection accuracy for stenosis was 88% and the measurement error was within 8%.
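
A minimal sketch of the 3D region-growing idea used in the CAD stage (a generic illustration of the technique, not the authors' implementation):

    # Hedged sketch: 3D region growing from a seed voxel with an intensity window (Python).
    from collections import deque
    import numpy as np

    def region_grow_3d(volume, seed, low, high):
        mask = np.zeros(volume.shape, dtype=bool)
        queue = deque([seed])
        mask[seed] = True
        neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in neighbours:
                nz, ny, nx = z + dz, y + dy, x + dx
                inside = 0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1] and 0 <= nx < volume.shape[2]
                if inside and not mask[nz, ny, nx] and low <= volume[nz, ny, nx] <= high:
                    mask[nz, ny, nx] = True       # voxel joins the growing arterial region
                    queue.append((nz, ny, nx))
        return mask

In the paper this detection step is followed by fast 3D morphology, centerline extraction and the EM/MRF-based diameter measurement.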

Relevance: 30.00%

Abstract:

Due to intense international competition, demanding and sophisticated customers, and diverse, transformative technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex but vital for many organizations to survive in a dynamic, turbulent environment. Thus, the increased interest among decision-makers in finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be utilized for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, as well as for motivating and benchmarking. Earlier research in the field of R&D performance analysis has generally focused either on the activities and relevant factors and dimensions - e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D or phases of the R&D process - prior to the selection of R&D performance measures, or on proposed principles or the actual implementation of the selection or design processes of R&D performance measures or measurement systems. This study aims at integrating the consideration of the essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures, which have been applied in real-world organizations. The earlier models for corporate performance measurement found in the literature are to some extent also adaptable to the development of measurement systems and the selection of measures in R&D activities. However, there are special aspects of measuring R&D performance that make the development of new approaches, especially for R&D performance measure selection, necessary. First, the special characteristics of R&D - such as the long time lag between inputs and outcomes, as well as the overall complexity and difficult coordination of activities - give rise to R&D performance analysis problems, such as the need for more systematic, objective, balanced and multi-dimensional approaches to R&D measure selection, as well as the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Secondly, the above-mentioned characteristics and challenges highlight the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria for measures and choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of the research and development activities of organizations by increasing the understanding of R&D performance analysis, clarifying the main factors related to the selection of R&D measures, and providing novel types of approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support management in their decision-making on R&D with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, the emphasis in most sub-areas of the present research has been on promoting the selection and development process of R&D indicators with the help of different tools and decision support systems; i.e. the research has normative features, providing guidelines through novel types of approaches.
Gathering data and conducting case studies in metal and electronics industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped us to formulate a comprehensive picture of the main challenges of R&D performance analysis in different organizations, which is essential, as recognition of the most important problem areas is a crucial element in the constructive research approach utilized in this study. Multiple practical benefits regarding the defined problem areas could be found in the various constructed approaches presented in this dissertation: 1) the selection of R&D measures became more systematic compared to the empirical analysis, as it was common that no systematic approaches had been utilized in the studied organizations earlier; 2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be utilized more directly in decision-making, because of the thorough consideration of the purpose of measurement, as well as other dimensions of measurement; 3) more balance in the set of R&D measures was desired and was gained through the holistic approaches to the selection processes; and 4) more objectivity was gained through organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to contribute to the present body of knowledge on R&D performance analysis by facilitating the handling of the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects into the developed novel types of approaches, methods and tools for the selection processes of R&D measures, applied in real-world organizations. Throughout the research, the handling of the versatility and challenges in R&D performance analysis, as well as the factors and dimensions influencing R&D performance measure selection, is strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from both the scientific and the practical point of view.

Relevance: 30.00%

Abstract:

Fixed Mobile Convergence is the recent buzz in the field of telecommunication technology, and Unlicensed Mobile Access (UMA) technology is a realistic implementation of Fixed Mobile Convergence. UMA involves communication between different types of networks, and handover is a very important issue in UMA. The study is an analysis of the theoretical handover mechanism and of practical test results. It includes a new proposal for a handover performance test in UMA and provides an overview of basic handover operation in different scenarios in UMA. The practical test involves an experiment on handover performance using network analyzers. The new proposal provides a different approach to an experimental setup for handover performance testing without using network analyzers. The approach was not implemented because of a technical problem with a piece of network equipment in the UMA setup. The analysis of the test results reveals that the handover time between the UMA and the Global System for Mobile Communications (GSM) network is similar to the inter-base-station-controller (BSC) handover time within GSM networks. The new approach is simple and provides measurement at the end-point communicating entities. The study gives a general understanding of handover operation and an analysis of handover performance in UMA, and it specifically provides a new approach useful for further study of handover in different real-world environments and scenarios.

Relevance: 30.00%

Abstract:

Induction motors are widely used in industry, and they are generally considered very reliable. They often have a critical role in industrial processes, and their failure can lead to significant losses as a result of shutdown times. Typical failures of induction motors can be classified into stator, rotor, and bearing failures. One of the causes of bearing damage, and eventually bearing failure, is bearing currents. Bearing currents in induction motors can be divided into two main categories: classical bearing currents and inverter-induced bearing currents. Bearing damage caused by bearing currents results, for instance, from electrical discharges that take place through the lubricant film between the raceways of the inner and outer rings and the rolling elements of a bearing. This phenomenon can be considered similar to electrical discharge machining, where material is removed by a series of rapidly recurring electrical arcing discharges between an electrode and a workpiece. This thesis concentrates on bearing currents with special reference to bearing current detection in induction motors. A bearing current detection method based on radio frequency impulse reception and detection is studied. The thesis describes how a motor can act as a “spark gap” transmitter and discusses a discharge in a bearing as a source of radio frequency impulses. It is shown that a discharge occurring due to bearing currents can be detected at a distance of several metres from the motor. The issues of interference, detection, and location techniques are discussed. The applicability of the method is shown with a series of measurements on a specially constructed test motor and on an unmodified frequency-converter-driven motor. The radio frequency method studied provides a nonintrusive way to detect harmful bearing currents in the drive system. If bearing current mitigation techniques are applied, their effectiveness can be immediately verified with the proposed method. The method also gives a tool to estimate the harmfulness of the bearing currents by making it possible to detect and locate individual discharges inside the bearings of electric motors.

Relevance: 30.00%

Abstract:

The research around performance measurement and management has focused mainly on the design, implementation and use of performance measurement systems. However, there is little evidence about the actual impacts of performance measurement on the different levels of business and operations of organisations, or about the underlying factors that lead to a positive impact of performance measurement. The study thus focuses on this research gap, which can be considered both important and challenging to cover. The first objective of the study was to examine the impacts of performance measurement on different aspects of management, leadership and the quality of working life, after which the factors that facilitate and improve performance and performance measurement at the operative level of an organisation were examined. The second objective was to study how these factors operate in practice. The third objective focused on the construction of a framework for successful operative-level performance measurement and the utilisation of the factors in organisations. The research objectives have been studied through six research papers utilising empirical data from three separate studies, including two sets of interview data and one set of quantitative data. The study applies mainly the hermeneutical research approach. As a contribution of the study, a framework for successful operative-level performance measurement was formed by matching the findings of the current study with performance measurement theory. The study extends the prior research regarding the impacts of performance measurement and the factors that have a positive effect on operative-level performance and performance measurement. The results indicate that under suitable circumstances, performance measurement has positive impacts on different aspects of management, leadership, and the quality of working life. The results also reveal that, for example, the perceptions of the employees and of the management regarding the impacts of performance measurement on leadership style differ considerably. Furthermore, the fragmented literature has been reorganised into six factors that facilitate and improve the performance of the operations and employees and the use of performance measurement at the operative level of an organisation. Regarding the managerial implications of the study, managers who work with performance measurement can utilise the framework, for example, by putting its different phases into practice.

Relevance: 30.00%

Abstract:

The aim of the present study was to demonstrate the wide applicability of the novel photoluminescent labels called upconverting phosphors (UCPs) in proximity-based bioanalytical assays. The exceptional features of the lanthanide-doped inorganic UCP compounds stem from their capability for photon upconversion, resulting in anti-Stokes photoluminescence at visible wavelengths under near-infrared (NIR) excitation. Major limitations of conventional photoluminescent labels are avoided, rendering the UCPs a competitive next-generation label technology. First, the background luminescence is minimized owing to the total elimination of autofluorescence; consequently, improvements in detectability are expected. Second, at the long wavelengths (>600 nm) used for exciting and detecting the UCPs, the transmittance of sample matrixes is significantly greater than at shorter wavelengths. Colored samples are no longer an obstacle to the luminescence measurement, and more flexibility is allowed even in homogeneous assay concepts, where the sample matrix remains present during the entire analysis procedure, including label detection. To transform a UCP particle into a biocompatible label suitable for bioanalytical assays, it must be colloidal in an aqueous environment and covered with biomolecules capable of recognizing the analyte molecule. At the beginning of this study, only UCP bulk material was available, and it was necessary to process the material into submicrometer-sized particles prior to use. Later, the ground UCPs, with their irregular shape, wide size distribution and heterogeneous luminescence properties, were replaced with a smaller, spherical UCP material. The surface functionalization of the UCPs was realized by producing a thin hydrophilic coating. Polymer adsorption on the UCP surface is a simple way to introduce functional groups for bioconjugation purposes, but possible stability issues encouraged us to optimize an alternative silica-encapsulation method, which produces a coating that does not detach under storage or assay conditions. An extremely thin monolayer around the UCPs was pursued because of their intended use as short-distance energy donors, and much attention was paid to controlling the thickness of the coating. The performance of the UCP technology was evaluated in three different homogeneous resonance energy transfer-based bioanalytical assays: a competitive ligand binding assay, a hybridization assay for nucleic acid detection and an enzyme activity assay. To complete the list, a competitive immunoassay has been published previously. Our systematic investigation showed that a nonradiative energy transfer mechanism is indeed involved when a UCP and an acceptor fluorophore are brought into close proximity in aqueous suspension. This process is the basis for the above-mentioned homogeneous assays, in which the distance between the fluorescent species depends on a specific biomolecular binding event. According to the studies, the submicrometer-sized UCP labels allow versatile proximity-based bioanalysis with low detection limits (a low-nanomolar concentration for biotin, 0.01 U for the benzonase enzyme, and 0.35 nM for the target DNA sequence).