906 results for Classical measurement error model


Relevance:

30.00%

Abstract:

This study aims to determine how supply chain performance can be measured in the case company. In 1996 the Supply Chain Council (SCC) developed the Supply Chain Operations Reference (SCOR) model, which also enables performance measurement. The purpose of this study is to apply the SCOR model's performance measurement framework in the case company. The work is a qualitative case study. The theoretical part mainly reviews the literature on supply chains and performance measurement. The construction of the measurement system begins with an introduction of the case company. The SCOR metrics were built in the case company according to the SCC's recommendations so that the results would also be usable for benchmarking. The model contains 10 SCOR metrics as well as a few of Halton's own metrics. As a conclusion, the SCOR model gives a good overview of supply chain performance, but the case company still needs to develop more informative metrics that would provide more detailed information for the company's management.

Relevance:

30.00%

Abstract:

This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error-monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. The groups did not differ in terms of trait or state anxiety. We found an enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. The groups did not differ in terms of the correct-related negativity (CRN), the error positivity (Pe), classical behavioral measures, or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low-resolution electromagnetic tomography (sLORETA), we found greater activation of the insula for errors on the numerical task, as compared with errors on the nonnumerical task, only in the HMA group. The results were interpreted according to the motivational significance theory of the ERN.
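
A minimal sketch of how an ERN amplitude of the kind reported here is typically quantified from response-locked epochs, as an error-minus-correct difference wave averaged over an early post-response window. The sampling rate, analysis window, and simulated data are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

# Hypothetical epoched EEG: trials x samples, response-locked at t = 0,
# sampled at 500 Hz from -200 ms to +600 ms.
fs = 500
t = np.arange(-0.2, 0.6, 1 / fs)

def ern_amplitude(epochs_error, epochs_correct, t, win=(0.0, 0.1)):
    """Mean error-minus-correct difference wave in the ERN window (0-100 ms)."""
    diff = epochs_error.mean(axis=0) - epochs_correct.mean(axis=0)
    mask = (t >= win[0]) & (t < win[1])
    return diff[mask].mean()

# Simulated example: error trials carry an extra negativity after the response.
rng = np.random.default_rng(0)
correct = rng.normal(0, 2, size=(100, t.size))
error = rng.normal(0, 2, size=(40, t.size)) - 5 * np.exp(-((t - 0.05) / 0.03) ** 2)

print(f"ERN estimate: {ern_amplitude(error, correct, t):.2f} µV")
```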

Relevance:

30.00%

Abstract:

For the past 10 years, mini-host models, and in particular the greater wax moth Galleria mellonella, have tended to become a surrogate for murine models of fungal infection, mainly because of cost, ethical constraints, and ease of use. Methods to better assess fungal pathogenesis in G. mellonella therefore need to be developed. In this study, we implemented the detection of Candida albicans cells expressing the Gaussia princeps luciferase in their cell wall in infected larvae of G. mellonella. We demonstrated that the detection and quantification of luminescence in the pulp of infected larvae is a reliable method for drug efficacy and C. albicans virulence assays, as compared with the fungal burden assay. Since the bioluminescent signal correlates linearly with CFU counts (R² = 0.62), and since this method is twice as fast and less labor-intensive than classical fungal burden assays, it could be applied to large-scale studies. We next visualized and followed C. albicans infection in living G. mellonella larvae using a non-toxic, water-soluble coelenterazine formulation and a CCD camera of the kind commonly used for chemiluminescence signal detection. This work allowed us to follow, for the first time, the course of C. albicans infection in G. mellonella over 4 days.

Relevance:

30.00%

Abstract:

Computed tomography (CT) is an imaging technique in which interest has grown rapidly since its introduction in the early 1970s. Today it has become an extensively used modality because of its ability to produce high-quality diagnostic images. However, even though CT brings a clear benefit to patient care, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among these effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. To ensure that the benefit-risk ratio remains in the patient's favour, the delivered dose must lead to the correct diagnosis without producing images of unnecessarily high quality. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when children or young adults are examined, in particular in follow-up studies requiring several CT procedures over the patient's life. Children and young adults are more sensitive to radiation because of their faster metabolism, and harmful consequences are more likely to occur because of their longer life expectancy. The recent introduction of iterative reconstruction algorithms, designed to substantially reduce dose, is certainly a major achievement in CT, but it has also created difficulties in assessing the quality of the images those algorithms produce. The goal of the present work was to propose a strategy for investigating the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic question. The major difficulty lies in having a clinically relevant way to estimate image quality; to ensure the choice of pertinent image quality criteria, this work was performed in close collaboration with radiologists. The work began by characterising image quality in musculoskeletal examinations, focusing in particular on image noise and spatial resolution when iterative reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols while knowing what loss of image quality to expect. The work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on mathematical model observers; our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in more clinical contexts, where the results had to be compared with those of human observers, taking advantage of these models' incorporation of elements of the human visual system. This work confirmed that model observers make it possible to assess image quality using a task-based approach, which in turn establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstruction, model-based ones offer the greatest potential, since the images they produce can still lead to an accurate diagnosis even when acquired at very low dose. Finally, this work clarified the role of the medical physicist in CT imaging: standard metrics remain important for assessing a unit's compliance with legal requirements, but model observers are the way forward for optimising imaging protocols.
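
As an illustration of the anthropomorphic model-observer approach mentioned above, here is a minimal sketch of a channelized Hotelling observer computing a detectability index d' from signal-present and signal-absent regions of interest. The crude radial channels and toy images are assumptions made for the example; practical implementations typically use Gabor or dense-difference-of-Gaussian channel banks:

```python
import numpy as np

def cho_detectability(imgs_signal, imgs_noise, channels):
    """Channelized Hotelling observer detectability index d'.

    imgs_*   : (n_images, n_pixels) flattened ROIs, signal-present / absent
    channels : (n_pixels, n_channels) channel functions
    """
    vs = imgs_signal @ channels          # channel outputs, signal present
    vn = imgs_noise @ channels           # channel outputs, signal absent
    dv = vs.mean(axis=0) - vn.mean(axis=0)
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(S, dv)           # Hotelling template in channel space
    return float(np.sqrt(dv @ w))

# Toy example: 32x32 ROIs, faint Gaussian blob, 4 crude radial channels.
rng = np.random.default_rng(1)
x, y = np.meshgrid(np.arange(32), np.arange(32))
r = np.hypot(x - 15.5, y - 15.5)
signal = 0.8 * np.exp(-(r / 3) ** 2).ravel()
channels = np.stack([((r >= lo) & (r < lo + 4)).ravel().astype(float)
                     for lo in (0, 4, 8, 12)], axis=1)
noise = rng.normal(0, 1, size=(500, 32 * 32))
print(f"d' = {cho_detectability(noise + signal, noise, channels):.2f}")
```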

Relevance:

30.00%

Abstract:

BACKGROUND: Underweight and severe and morbid obesity are associated with highly elevated risks of adverse health outcomes. We estimated trends in mean body-mass index (BMI), which characterises its population distribution, and in the prevalences of a complete set of BMI categories for adults in all countries. METHODS: We analysed, with use of a consistent protocol, population-based studies that had measured height and weight in adults aged 18 years and older. We applied a Bayesian hierarchical model to these data to estimate trends from 1975 to 2014 in mean BMI and in the prevalences of BMI categories (<18·5 kg/m² [underweight], 18·5 kg/m² to <20 kg/m², 20 kg/m² to <25 kg/m², 25 kg/m² to <30 kg/m², 30 kg/m² to <35 kg/m², 35 kg/m² to <40 kg/m², ≥40 kg/m² [morbid obesity]), by sex in 200 countries and territories, organised in 21 regions. We calculated the posterior probability of meeting the target of halting by 2025 the rise in obesity at its 2010 levels, if post-2000 trends continue. FINDINGS: We used 1698 population-based data sources, with more than 19·2 million adult participants (9·9 million men and 9·3 million women) in 186 of the 200 countries for which estimates were made. Global age-standardised mean BMI increased from 21·7 kg/m² (95% credible interval 21·3-22·1) in 1975 to 24·2 kg/m² (24·0-24·4) in 2014 in men, and from 22·1 kg/m² (21·7-22·5) in 1975 to 24·4 kg/m² (24·2-24·6) in 2014 in women. Regional mean BMIs in 2014 for men ranged from 21·4 kg/m² in central Africa and south Asia to 29·2 kg/m² (28·6-29·8) in Polynesia and Micronesia; for women the range was from 21·8 kg/m² (21·4-22·3) in south Asia to 32·2 kg/m² (31·5-32·8) in Polynesia and Micronesia. Over these four decades, age-standardised global prevalence of underweight decreased from 13·8% (10·5-17·4) to 8·8% (7·4-10·3) in men and from 14·6% (11·6-17·9) to 9·7% (8·3-11·1) in women. South Asia had the highest prevalence of underweight in 2014: 23·4% (17·8-29·2) in men and 24·0% (18·9-29·3) in women. Age-standardised prevalence of obesity increased from 3·2% (2·4-4·1) in 1975 to 10·8% (9·7-12·0) in 2014 in men, and from 6·4% (5·1-7·8) to 14·9% (13·6-16·1) in women. 2·3% (2·0-2·7) of the world's men and 5·0% (4·4-5·6) of women were severely obese (ie, BMI ≥35 kg/m²). Globally, prevalence of morbid obesity was 0·64% (0·46-0·86) in men and 1·6% (1·3-1·9) in women. INTERPRETATION: If post-2000 trends continue, the probability of meeting the global obesity target is virtually zero. Rather, if these trends continue, by 2025 global obesity prevalence will reach 18% in men and surpass 21% in women; severe obesity will surpass 6% in men and 9% in women. Nonetheless, underweight remains prevalent in the world's poorest regions, especially in south Asia. FUNDING: Wellcome Trust, Grand Challenges Canada.
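
For reference, BMI is weight divided by height squared, and the study's category bands can be encoded directly. A small sketch; the parenthetical labels follow the abstract (note that severe obesity in the paper denotes BMI ≥35 overall, ie, the last two bands combined):

```python
def bmi_category(weight_kg, height_m):
    """Return BMI (kg/m^2) and its category band as used in the study."""
    bmi = weight_kg / height_m ** 2
    bands = [(18.5, "<18.5 (underweight)"),
             (20.0, "18.5 to <20"),
             (25.0, "20 to <25"),
             (30.0, "25 to <30"),
             (35.0, "30 to <35"),
             (40.0, "35 to <40")]
    for upper, label in bands:
        if bmi < upper:
            return bmi, label
    return bmi, ">=40 (morbid obesity)"

print(bmi_category(70, 1.75))   # (22.857..., '20 to <25')
```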

Relevance:

30.00%

Abstract:

Classical Monte Carlo simulations were carried out in the NPT ensemble at 25°C and 1 atm, aiming to investigate the ability of the TIP4P water model [Jorgensen, Chandrasekhar, Madura, Impey and Klein; J. Chem. Phys., 79 (1983) 926] to reproduce the newest structural picture of liquid water. The results were compared with recent neutron diffraction data [Soper, Bruni and Ricci; J. Chem. Phys., 106 (1997) 247]. The influence of the computational conditions on the thermodynamic and structural results obtained with this model was also analyzed, and the findings were compared with the original ones of Jorgensen et al. [above-cited reference plus Mol. Phys., 56 (1985) 1381]. We note that the thermodynamic results depend on the boundary conditions used, whereas the usual radial distribution functions gOO(r) and gOH(r) do not.
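
A minimal sketch of the volume-move acceptance rule on which classical NPT Metropolis Monte Carlo of this kind relies; the TIP4P energy evaluation and particle displacement moves are omitted, and the numbers in the usage line are hypothetical reduced units:

```python
import numpy as np

def accept_volume_move(dU, V_old, V_new, N, P, beta, rng):
    """Metropolis acceptance for an NPT volume move (uniform steps in V).

    dU : potential-energy change of the trial move
    N  : number of molecules; P : external pressure; beta = 1/kT
    (consistent units assumed throughout)
    """
    arg = -beta * (dU + P * (V_new - V_old)) + N * np.log(V_new / V_old)
    return np.log(rng.random()) < arg

# Usage sketch with hypothetical numbers:
rng = np.random.default_rng(2)
print(accept_volume_move(dU=0.3, V_old=1000.0, V_new=1005.0,
                         N=216, P=0.06, beta=1.2, rng=rng))
```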

Relevance:

30.00%

Abstract:

Using event-related brain potentials, the time course of error detection and correction was studied in healthy human subjects. A feedforward model of error correction was used to predict the timing properties of the error and corrective movements. Analysis of the multichannel recordings focused on (1) the error-related negativity (ERN) seen immediately after errors in response- and stimulus-locked averages and (2) the lateralized readiness potential (LRP) reflecting motor preparation. Comparison of the onset and time course of the ERN and LRP components showed that signs of corrective activity preceded the ERN. Thus, error correction was implemented before, or at least in parallel with, the appearance of the ERN component. Moreover, the amplitude of the ERN component was increased for errors followed by fast corrective movements. The results are compatible with recent views of the ERN component as the output of an evaluative system engaged in monitoring motor conflict.
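
A small sketch of one common way to compare component onsets as done here, using a fractional-peak threshold crossing on averaged waveforms. The 50% criterion and the simulated ERN- and LRP-like deflections are illustrative assumptions, not the study's data:

```python
import numpy as np

def onset_latency(wave, t, frac=0.5):
    """Onset of a negative-going component: first crossing of frac * peak."""
    peak = wave.min()                      # most negative value
    idx = np.argmax(wave <= frac * peak)   # first sample below threshold
    return t[idx]

# Hypothetical averaged waveforms (response-locked, 500 Hz).
t = np.arange(-0.1, 0.4, 0.002)
ern = -4 * np.exp(-((t - 0.06) / 0.025) ** 2)   # ERN-like negativity
lrp = -2 * np.exp(-((t + 0.02) / 0.040) ** 2)   # earlier LRP-like deflection

print(f"LRP onset: {onset_latency(lrp, t)*1000:.0f} ms, "
      f"ERN onset: {onset_latency(ern, t)*1000:.0f} ms")
```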

Relevance:

30.00%

Abstract:

The data analyzed in this work were generated following the methodology developed by Molina et al. (J. Electroanal. Chem., 1979) for the calibration of a potentiometric system measuring hydrogen-ion concentrations resulting from neutralizations, at 25 °C, of acidic or alkaline solutions at constant ionic strength (0.1 mol·l⁻¹) maintained with NaClO4. The observed data deviate markedly from the mathematical model derived from the Nernst equation for pH values ranging from 3 to 11, where pH = -log[H+]. We show that minimizing the sum of the absolute values of the residuals gives estimates that are not influenced by outlying values.
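
A minimal sketch of the least-absolute-deviations fit described, applied to a Nernst-type calibration line E = E0 - S·pH. The data, the deliberate outlier, and the starting values are hypothetical; Nelder-Mead is used because the L1 objective is non-smooth:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical calibration data: measured cell potential E (mV) vs pH,
# with one deliberate outlier; ideal Nernst behaviour E = E0 - S * pH.
pH = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0])
E = 400.0 - 59.16 * pH
E[4] += 25.0                      # outlying measurement

def sad(params):                  # sum of absolute deviations (L1 loss)
    E0, S = params
    return np.abs(E - (E0 - S * pH)).sum()

fit = minimize(sad, x0=[380.0, 55.0], method="Nelder-Mead")
print(f"E0 = {fit.x[0]:.1f} mV, slope = {fit.x[1]:.2f} mV/pH")
```

The L1 fit recovers the Nernstian slope essentially unperturbed by the outlier, which is the robustness property the abstract highlights; an ordinary least-squares fit would be pulled toward the bad point.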

Relevance:

30.00%

Abstract:

The objective of this master's thesis was to develop a model for mobile subscription acquisition cost (SAC) and mobile subscription retention cost (SRC) by applying activity-based cost accounting principles. The thesis was conducted as a case study for a telecommunication company operating on the Finnish telecommunication market. In addition to activity-based cost accounting, other theories were studied and applied in order to establish a theoretical framework for the thesis. The concepts of acquisition and retention were explored in the broader context of customer satisfaction, loyalty, profitability and, eventually, customer relationship management, to understand the background and meaning of the theme of this thesis. The utilization of SAC and SRC information is discussed through the theories of decision making and activity-based management. The present state and future needs of SAC and SRC information usage at the case company, as well as the functions of the company, were also examined by interviewing members of the company personnel. With the help of these theories and methods, the aim was to identify both the theory-based and the practical factors affecting the structure of the model. During the study it was confirmed that the existing SAC and SRC model of the case company should be used as the basis for developing the activity-based model. As a result, the indirect costs of the old model were transformed into activities, while the direct costs continued to be allocated directly to the acquisition of new subscriptions and the retention of old ones. The refined model will enable better management of subscription acquisition, retention, and the related costs through the activity information. The interviews revealed that SAC and SRC information is also used in performance measurement and in operational and strategic planning. SAC and SRC are not fully absorbed costs, and it was concluded that the model serves best as a source of indicative cost information. This thesis does not include calculating costs; instead, the refined model, together with the theory-based and interview findings concerning the utilization of the information it produces, will serve as a framework for possible future development aiming at completing the model.
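
A hedged sketch of the kind of activity-based SAC computation the refined model implies: indirect costs pooled by activity are driven down to subscriptions, while direct costs are allocated straight to acquisition. All activity names, volumes, and figures below are hypothetical:

```python
# Hypothetical indirect cost pools with their cost drivers:
# activity name -> (pool cost, number of new subscriptions it drives)
activities = {
    "campaign_management": (120_000.0, 8_000),
    "order_handling":      (60_000.0,  8_000),
}
direct_cost_per_acquisition = 45.0   # e.g. handset subsidy, dealer commission
new_subscriptions = 8_000

# Allocate each activity pool over its driver volume, then add direct costs.
indirect_per_sub = sum(cost / volume for cost, volume in activities.values())
sac = direct_cost_per_acquisition + indirect_per_sub
print(f"SAC = {sac:.2f} EUR per new subscription")
```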

Relevance:

30.00%

Abstract:

Induction motors are widely used in industry, and they are generally considered very reliable. They often play a critical role in industrial processes, and their failure can lead to significant losses as a result of shutdown times. Typical failures of induction motors can be classified into stator, rotor, and bearing failures. One cause of bearing damage, and eventually bearing failure, is bearing currents. Bearing currents in induction motors can be divided into two main categories: classical bearing currents and inverter-induced bearing currents. Bearing damage caused by bearing currents results, for instance, from electrical discharges that take place through the lubricant film between the raceways of the inner and outer rings and the rolling elements of a bearing. This phenomenon is similar to electrical discharge machining, where material is removed by a series of rapidly recurring electrical arcing discharges between an electrode and a workpiece. This thesis concentrates on bearing currents, with special reference to bearing current detection in induction motors. A detection method based on the reception and detection of radio frequency impulses is studied. The thesis describes how a motor can act as a “spark gap” transmitter and discusses a discharge in a bearing as a source of radio frequency impulses. It is shown that a discharge occurring due to bearing currents can be detected at a distance of several meters from the motor. Interference, detection, and location techniques are discussed. The applicability of the method is demonstrated with a series of measurements on a specially constructed test motor and an unmodified frequency-converter-driven motor. The radio frequency method studied provides a nonintrusive way to detect harmful bearing currents in the drive system. If bearing current mitigation techniques are applied, their effectiveness can be immediately verified with the proposed method. The method also provides a tool for estimating the harmfulness of the bearing currents, by making it possible to detect and locate individual discharges inside the bearings of electric motors.
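
A minimal sketch of impulse detection in the spirit of the radio frequency method studied: bandpass filtering, envelope extraction, and threshold crossing to time-stamp discharge-like events. The frequency band, threshold factor, and the simulated damped-oscillation "discharges" are assumptions made for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_discharges(x, fs, band=(5e6, 20e6), k=6.0):
    """Flag impulsive RF events: bandpass, envelope, threshold at k * median."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, x)))
    thr = k * np.median(env)
    hits = np.flatnonzero((env[1:] > thr) & (env[:-1] <= thr))  # rising edges
    return hits / fs                        # event times in seconds

# Hypothetical record: noise plus two short damped oscillations ("discharges").
fs = 100e6
t = np.arange(0, 1e-3, 1 / fs)
x = np.random.default_rng(3).normal(0, 1, t.size)
for t0 in (2e-4, 7e-4):
    dt = np.clip(t - t0, 0.0, None)
    x += 20 * np.exp(-dt * 5e6) * (t >= t0) * np.sin(2 * np.pi * 10e6 * t)
print(detect_discharges(x, fs))
```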

Relevance:

30.00%

Abstract:

The topological solitons of two classical field theories, the Faddeev-Skyrme model and the Ginzburg-Landau model, are studied numerically and analytically in this work. The aim is to gain information on the existence and properties of these topological solitons, their structure, and their behaviour under relaxation. First, the conditions and mechanisms leading to the possibility of topological solitons are explored from the field-theoretical point of view. This leads one to consider continuous deformations of the solutions of the equations of motion. The results of algebraic topology necessary for the systematic treatment of such deformations are reviewed, and methods of determining the homotopy classes of topological solitons are presented. The Faddeev-Skyrme and Ginzburg-Landau models are presented, some earlier results are reviewed, and the numerical methods used in this work are described. The topological solitons of the Faddeev-Skyrme model, Hopfions, are found to follow the same mechanisms of relaxation in three different domains with three different topological classifications; for two of the domains, the necessary but unusual topological classification is presented. Finite-size topological solitons are not found in the Ginzburg-Landau model, and a scaling argument is used to suggest that there are indeed none unless a certain modification to the model, due to R. S. Ward, is made. In that case, the Hopfions of the Faddeev-Skyrme model are seen to be present for some parameter values. A boundary in the parameter space separating the region where Hopfions exist from the region where they do not is found, and the behaviour of the Hopfion energy on this boundary is studied.
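
For reference, a standard form of the static Faddeev-Skyrme energy and the topological bound its Hopfions obey; normalisation conventions vary across the literature, so this is a sketch of the general structure rather than the exact functional used in the thesis:

```latex
% Static Faddeev-Skyrme energy for a unit vector field n : R^3 -> S^2
E[\mathbf{n}] = \int_{\mathbb{R}^3} \Big[\,
    \partial_i \mathbf{n} \cdot \partial_i \mathbf{n}
    + \kappa \, (\partial_i \mathbf{n} \times \partial_j \mathbf{n})
      \cdot (\partial_i \mathbf{n} \times \partial_j \mathbf{n})
\Big]\, \mathrm{d}^3 x
% Hopfions are classified by the Hopf invariant Q \in \pi_3(S^2) \cong \mathbb{Z},
% and the energy obeys the Vakulenko-Kapitanskii lower bound
E \;\geq\; c\, |Q|^{3/4}
```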

Relevance:

30.00%

Abstract:

This thesis was produced for the Technology Marketing unit at the Nokia Research Center. Technology marketing was a new function at the Nokia Research Center and needed an established framework with the capacity to take multiple aspects of team performance into account. Technology marketing functions existed in other parts of Nokia, yet no single method had been agreed upon for measuring their performance. The purpose of this study was to develop a performance measurement system for Nokia Research Center Technology Marketing. The target was for Nokia Research Center Technology Marketing to have a framework of separate metrics, including benchmarks for starting levels and target values for future planning (numeric values were kept confidential within the company). As a result of this research, the Balanced Scorecard model of Kaplan and Norton was chosen as the performance measurement system for Nokia Research Center Technology Marketing. This research selected the indicators that were utilized in the chosen performance measurement system. Furthermore, the performance measurement system was defined to guide the Head of Marketing in managing the Nokia Research Center Technology Marketing team. During the research process, the team's mission, vision, strategy, and critical success factors were outlined.

Relevance:

30.00%

Abstract:

The Questions Answering (Q&A) model for eLearning is based on collaborative learning through questions posed by students and answered by their peers, in contrast with the classical model in which students ask questions of the teacher only. In this proposal we extend the Q&A model to include the concept of social presence, and a quantitative measure of it is proposed; we also consider the evolution of the resulting Q&A social network after the inclusion of social presence, taking into account the feedback on questions posed by students and answered by peers. The behavior of the social network was simulated using a multi-agent system to compare the proposed social presence model with the classical and Q&A models.
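
A toy sketch of the multi-agent simulation idea: student agents post questions and peers answer with a probability scaled by a social-presence score. The agent count, base probability, and the presence measure itself are hypothetical stand-ins for the thesis's actual model:

```python
import random

random.seed(4)

class Student:
    def __init__(self):
        self.presence = random.random()   # toy social-presence score in [0, 1]

def simulate(students, steps, base_answer_prob=0.2):
    """Fraction of posted questions that receive at least one peer answer."""
    answered = asked = 0
    for _ in range(steps):
        asker = random.choice(students)
        asked += 1
        for peer in students:
            if peer is asker:
                continue
            if random.random() < base_answer_prob * peer.presence:
                answered += 1
                break                      # first peer to answer closes it
    return answered / asked

students = [Student() for _ in range(30)]
print(f"fraction of questions answered by peers: {simulate(students, 500):.2f}")
```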

Relevance:

30.00%

Abstract:

Identification of the order of an autoregressive moving average (ARMA) model by the usual graphical method is subjective. Hence, there is a need to develop a technique that identifies the order without graphical investigation of the series autocorrelations. To avoid subjectivity, this thesis focuses on determining the order of the ARMA model using Reversible Jump Markov Chain Monte Carlo (RJMCMC). RJMCMC selects the model from a set of candidate models on the basis of goodness of fit, the standard errors, and the acceptance frequency. Together with a deep analysis of the classical Box-Jenkins modeling methodology, the integration with MCMC algorithms is examined through parameter estimation and model fitting of ARMA models. This helps to verify how well MCMC algorithms can treat ARMA models, by comparing the results with the graphical method. It was seen that MCMC produced better results than the classical time series approach.
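
For contrast with RJMCMC, a minimal sketch of a classical, non-graphical order-selection baseline: fit a grid of candidate ARMA orders and rank them by an information criterion (AIC here). The simulated ARMA(2,1) series and the order grid are illustrative choices, not the thesis's setup:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_generate_sample

# Simulate an ARMA(2,1) series, then score candidate orders by AIC.
np.random.seed(5)
y = arma_generate_sample(ar=[1, -0.6, 0.2], ma=[1, 0.4], nsample=500)

scores = {}
for p in range(4):
    for q in range(4):
        try:
            scores[(p, q)] = ARIMA(y, order=(p, 0, q)).fit().aic
        except Exception:
            pass                           # some orders may fail to converge

best = min(scores, key=scores.get)
print(f"AIC-selected order: ARMA{best}")
```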

Relevance:

30.00%

Abstract:

This work is devoted to the development of a numerical method for convection-dominated convection-diffusion problems with a reaction term, covering both non-stiff and stiff chemical reactions. The technique is based on unifying Eulerian-Lagrangian schemes (a particle transport method) within the framework of operator splitting. In the computational domain, a particle set is assigned to solve the convection-reaction subproblem along the characteristic curves created by the convective velocity. At each time step, the convection, diffusion, and reaction terms are solved separately, by assuming that each phenomenon occurs in a sequential fashion. Moreover, adaptivity and projection techniques are used, respectively, to add particles in regions of high gradients (steep fronts) and discontinuities, and to transfer the solution from the particle set onto the grid points. The numerical results show that the particle transport method improves the solutions of CDR problems. Nevertheless, the method is more time-consuming than classical techniques such as the method of lines. Apart from this drawback, the particle transport method can be used to simulate problems that involve moving steep or smooth fronts, such as the separation of two or more components in a system.
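
A hedged 1D sketch of the sequential operator-splitting idea described above, for u_t + a u_x = D u_xx - k u: per time step, convection is solved along characteristics (a simple semi-Lagrangian interpolation standing in for the adaptive particle transport step the thesis develops), then diffusion by explicit finite differences, then the linear reaction exactly. All parameters are illustrative:

```python
import numpy as np

a, D, k = 1.0, 0.01, 2.0                  # convection, diffusion, reaction rates
nx, L_dom = 200, 10.0
dx = L_dom / nx
dt = 0.4 * dx ** 2 / D                    # explicit diffusion stability limit
x = np.linspace(0.0, L_dom, nx, endpoint=False)
u = np.exp(-((x - 2.0) / 0.5) ** 2)       # initial pulse with steep fronts

def step(u):
    # 1) convection: trace characteristics back and interpolate (periodic)
    u = np.interp((x - a * dt) % L_dom, x, u, period=L_dom)
    # 2) diffusion: explicit central differences, periodic boundaries
    u = u + D * dt / dx ** 2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1))
    # 3) reaction: linear decay solved exactly over the substep
    return u * np.exp(-k * dt)

for _ in range(200):
    u = step(u)
print(f"mass after 200 steps: {u.sum() * dx:.4f}")
```

Because the convection substep follows characteristics rather than a fixed-grid stencil, it is not bound by the advective CFL condition, which is the main appeal of Eulerian-Lagrangian splitting for convection-dominated problems.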