844 results for curriculum-based measurement
Abstract:
Despite the high degree of automation in the turning industry, a few key problems prevent the complete automation of turning. One of these problems is tool wear. This work focuses on implementing an automatic system for measuring wear, particularly flank wear, using machine vision. The wear measurement system removes the need for manual measurement and minimizes the time spent on measuring tool wear. In addition to measurement, the modelling and prediction of wear are studied. The automatic measurement system was placed inside a lathe, and the system was successfully integrated with external systems. The experiments showed that the measurement system is capable of measuring tool wear in its real operating environment. The measurement system can also withstand disturbances that are common for machine vision systems. Tool wear modelling was studied with several different methods, including neural networks and support vector regression. The experiments showed that the studied models were able to predict the degree of tool wear from the elapsed machining time. The best results were given by neural networks with Bayesian regularization.
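The wear-over-time modelling described above can be illustrated with a minimal regression sketch. The thesis itself used neural networks and support vector regression; a simple least-squares polynomial fit stands in here for the same "predict wear from elapsed time" setup, and all measurements, the wear limit, and the query time are hypothetical:

```python
import numpy as np

# Hypothetical flank-wear measurements (mm) vs machining time (min).
# The thesis models wear with neural networks and support vector
# regression; a least-squares polynomial fit stands in here.
time_min = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)
wear_mm = np.array([0.00, 0.06, 0.10, 0.13, 0.16, 0.20, 0.26])

coeffs = np.polyfit(time_min, wear_mm, deg=2)   # fit wear(t)
model = np.poly1d(coeffs)

# Predict wear at an unseen time and flag when a wear limit is
# reached (VB = 0.3 mm here is an assumed criterion, not the thesis').
t_query = 35.0
predicted = float(model(t_query))
limit = 0.30
print(f"predicted wear at {t_query} min: {predicted:.3f} mm")
print("tool change needed" if predicted >= limit else "tool still OK")
```

In a real system the model inputs would come from the machine-vision measurements rather than a fixed table.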
Abstract:
In Switzerland, the majority of students are oriented towards professional training after compulsory schooling. At this stage, one of the biggest challenges for them is to find an apprenticeship position. Matching supply and demand is a complex process that not only excludes some students from having direct access to professional training but also forces them to make early choices regarding their future sector of employment. So, how does one find an apprenticeship? And what do the students' descriptions of their search for apprenticeships reveal about the institutional determinants of social inequalities at play in the system? Based on 29 interviews conducted in 2014 with 23 apprentices and 6 recruiters in the Canton of Vaud, this article interrogates how the dimensions of educational and social trajectories combine to affect access to apprenticeships and are accentuated by recruiters using a "hidden curriculum" during the recruitment process. A hidden curriculum consists of knowledge and skills not taught by the educational institution but which appear decisive in obtaining an apprenticeship. By analysing the contrasting experiences of students in their search for an apprenticeship, we identify four types of trajectories that explain different types of school-to-apprenticeship transitions. We show how these determinants are reinforced by the "hidden curriculum" of recruitment based on the soft skills of feeling, autonomy, anticipation and reflexivity that are assessed in the context of recruitment interactions. The discussion section debates how the criteria that appear to be used to identify the "right apprentice" tend to (re)produce inequalities between students. This not only depends on their academic results but also on their social and cultural skills, their ability to anticipate their choices and, more widely, their ability to be a subject in their recruitment search. 
"The Subject is neither the individual, nor the self, but the work through which an individual transforms into an actor, meaning an agent able to transform his/her situation instead of reproducing it." (Touraine, 1992, p.476).
Abstract:
A new family of distortion risk measures, GlueVaR, is proposed by Belles-Sampera et al. (2013) to procure a risk assessment lying between those provided by common quantile-based risk measures. GlueVaR risk measures may be expressed as a combination of these standard risk measures. We show here that this relationship may be used to obtain approximations of GlueVaR measures for general skewed distribution functions using the Cornish-Fisher expansion. A subfamily of GlueVaR measures satisfies the tail-subadditivity property. An example of risk measurement based on real insurance claim data is presented, in which the implications of tail-subadditivity in the aggregation of risks are illustrated.
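As the abstract notes, GlueVaR measures can be written as a combination of standard quantile-based risk measures (two Tail Value-at-Risk terms and a Value-at-Risk term at confidence levels alpha < beta). A minimal empirical sketch, with purely illustrative weights and a simulated skewed loss sample rather than the paper's insurance data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical right-skewed loss sample standing in for claim data.
losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

def var(x, level):
    """Empirical Value-at-Risk: the level-quantile of the losses."""
    return float(np.quantile(x, level))

def tvar(x, level):
    """Empirical Tail Value-at-Risk: mean loss at or beyond VaR."""
    q = var(x, level)
    return float(x[x >= q].mean())

# GlueVaR as a linear combination of TVaR at two levels and VaR.
# The weights below are illustrative placeholders summing to 1,
# not values taken from the paper.
alpha, beta = 0.95, 0.995
w1, w2, w3 = 0.3, 0.5, 0.2
glue_var = (w1 * tvar(losses, beta)
            + w2 * tvar(losses, alpha)
            + w3 * var(losses, alpha))
print(f"VaR_95  = {var(losses, alpha):.3f}")
print(f"GlueVaR = {glue_var:.3f}")
```

Because the result is a convex combination, it lies between the mildest and most severe of its component measures, which is the "lying between" property mentioned above.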
Abstract:
Home blood pressure measurement: epidemiology and clinical use. Elevated blood pressure, globally the most significant risk factor predisposing to premature death, cannot be identified or treated without accurate and practical methods of blood pressure measurement. Home blood pressure measurement has gained great popularity among patients. Physicians, however, have not yet fully accepted home measurement, because sufficient evidence of its performance and benefits has been lacking. The aim of this study was to demonstrate that blood pressure measured at home (home blood pressure) is more accurate than blood pressure conventionally measured at the office (office blood pressure), and that it is also effective in clinical use. We studied the use of home blood pressure in the diagnosis and treatment of hypertension. In addition, we examined the association of home blood pressure with hypertensive target-organ damage. The first study population, a representative sample of the Finnish adult population, consisted of 2,120 subjects aged 45-74 years. The participants measured their home blood pressure for one week and took part in a health examination that included, in addition to a clinical examination and an interview, an electrocardiogram and office blood pressure measurement. For 758 participants, carotid intima-media thickness (a marker of atherosclerosis) was also measured, and for 237, arterial pulse wave velocity (a marker of arterial stiffness). In the second study population, which consisted of 98 hypertensive patients, treatment was guided, depending on randomization, either by ambulatory (24-hour) blood pressure or by home blood pressure. Office blood pressure was significantly higher than home blood pressure (mean systolic/diastolic difference 8/3 mmHg), and agreement between the two methods in the diagnosis of hypertension was at best moderate (75%).
Of the 593 subjects with elevated office blood pressure, 38% had normal blood pressure at home, i.e. so-called white-coat hypertension. Hypertension can thus be overdiagnosed in one in three patients in a screening situation. White-coat hypertension was associated with mildly elevated blood pressure, low body mass index and non-smoking, but not with psychiatric morbidity. White-coat hypertension nevertheless does not appear to be an entirely harmless phenomenon and may predict future hypertension, as the cardiovascular risk-factor profile of those affected fell between the profiles of normotensive and truly hypertensive subjects. Home blood pressure had a stronger association than office blood pressure with hypertensive target-organ damage (intima-media thickness, pulse wave velocity, and electrocardiographic left ventricular enlargement). Home blood pressure was effective in guiding the treatment of hypertension: drug treatment guided by home blood pressure led to blood pressure control as good as treatment guided by ambulatory blood pressure, which has been considered the "gold standard" of blood pressure measurement. On the basis of this and previous studies, home blood pressure measurement is a clear improvement over conventional office measurement. Home measurement is a practical, accurate and widely available method that may even become the first-line option in the diagnosis and treatment of hypertension. Blood pressure measurement practice needs to change, since evidence-based medicine suggests that office blood pressure measurement should be used for screening purposes only.
Abstract:
PURPOSE: Despite growing interest in measurement of health care quality and patient experience, the current evidence base largely derives from adult health settings, at least in part because of the absence of appropriately developed measurement tools for adolescents. To rectify this, we set out to develop a conceptual framework and a set of indicators to measure the quality of health care delivered to adolescents in hospital. METHODS: A conceptual framework was developed from the following four elements: (1) a review of the evidence around what young people perceive as "adolescent-friendly" health care; (2) an exploration with adolescent patients of the principles of patient-centered care; (3) a scoping review to identify core clinical practices around working with adolescents; and (4) a scoping review of existing conceptual frameworks. Using criteria for indicator development, we then developed a set of indicators that mapped to this framework. RESULTS: Embedded within the notion of patient- and family-centered care, the conceptual framework for adolescent-friendly health care (quality health care for adolescents) was based on the constructs of experience of care (positive engagement with health care) and evidence-informed care. A set of 14 indicators was developed, half of which related to adolescents' and parents' experience of care and half of which related to aspects of evidence-informed care. CONCLUSIONS: The conceptual framework and indicators of quality health care for adolescents set the stage to develop measures to populate these indicators, the next step in the agenda of improving the quality of health care delivered to adolescents in hospital settings.
Abstract:
BACKGROUND: Clinical guidelines are essential in implementing and maintaining nationwide stage-specific diagnostic and therapeutic standards. In 2011, the first German expert consensus guideline defined the evidence for diagnosis and treatment of early and locally advanced esophagogastric cancers. Here, we compare this guideline with other national guidelines as well as current literature. METHODS: The German S3-guideline used an approved development process with de novo literature research, international guideline adaptation, or good clinical practice. Other recent evidence-based national guidelines and current references were compared with German recommendations. RESULTS: In the German S3 and other Western guidelines, adenocarcinomas of the esophagogastric junction (AEG) are classified according to formerly defined AEG I-III subgroups due to the high surgical impact. To stage local disease, computed tomography of the chest and abdomen and endosonography are reinforced. In contrast, laparoscopy is optional for staging. Mucosal cancers (T1a) should be endoscopically resected "en-bloc" to allow complete histological evaluation of lateral and basal margins. For locally advanced cancers of the stomach or esophagogastric junction (≥T3N+), preferred treatment is preoperative and postoperative chemotherapy. Preoperative radiochemotherapy is an evidence-based alternative for large AEG type I-II tumors (≥T3N+). Additionally, some experts recommend treating T2 tumors with a similar approach, mainly because pretherapeutic staging is often considered to be unreliable. CONCLUSIONS: The German S3 guideline represents an up-to-date European position with regard to diagnosis, staging, and treatment recommendations for patients with locally advanced esophagogastric cancer. Effects of perioperative chemotherapy versus chemoradiotherapy are still to be investigated for adenocarcinoma of the cardia and the lower esophagus.
Abstract:
The present study was performed in an attempt to develop an in vitro integrated testing strategy (ITS) to evaluate drug-induced neurotoxicity. A number of endpoints were analyzed using two complementary brain cell culture models and an in vitro blood-brain barrier (BBB) model after single and repeated exposure treatments with selected drugs that covered the major biological, pharmacological and neurotoxicological responses. Furthermore, four drugs (diazepam, cyclosporine A, chlorpromazine and amiodarone) were tested in more depth as representatives of different classes of neurotoxicants, inducing toxicity through different pathways of toxicity. The developed in vitro BBB model allowed detection of toxic effects at the level of the BBB and evaluation of drug transport through the barrier for predicting free brain concentrations of the studied drugs. The measurement of neuronal electrical activity was found to be a sensitive tool to predict the neuroactivity and neurotoxicity of drugs after acute exposure. The histotypic 3D re-aggregating brain cell cultures, containing all brain cell types, were found to be well suited for OMICs analyses after both acute and long-term treatment. The obtained data suggest that an in vitro ITS based on the information obtained from BBB studies, combined with metabolomics, proteomics and neuronal electrical activity measurements performed in stable in vitro neuronal cell culture systems, has high potential to improve current in vitro drug-induced neurotoxicity evaluation.
Abstract:
Wastewater-based epidemiology consists of acquiring relevant information about the lifestyle and health status of a population through the analysis of wastewater samples collected at the influent of a wastewater treatment plant. Whilst a very young discipline, it has experienced an astonishing development since its first application in 2005. The possibility of gathering community-wide information about drug use has been among its major fields of application. The wide resonance of the first results sparked the interest of scientists from various disciplines, and research has since broadened in innumerable directions. Although praised as a revolutionary approach, its added value needed to be critically assessed against the existing indicators used to monitor illicit drug use. The main, and explicit, objective of this research was to evaluate the added value of wastewater-based epidemiology with regard to two particular, although interconnected, dimensions of illicit drug use. The first relates to understanding the added value of the discipline from an epidemiological, or societal, perspective; in other terms, to evaluate if and how it completes our current vision of the extent of illicit drug use at the population level, and whether it can guide the planning of future prevention measures and drug policies. The second dimension is the criminal one, with a particular focus on the networks that develop around the large demand for illicit drugs. The goal here was to assess whether wastewater-based epidemiology, combined with indicators stemming from the epidemiological dimension, could provide additional clues about the structure of drug distribution networks and the size of their market. This research also had an implicit objective, which focused on initiating the path of wastewater-based epidemiology at the Ecole des Sciences Criminelles of the University of Lausanne.
This consisted of gathering the necessary knowledge about the collection, preparation, and analysis of wastewater samples and, most importantly, of understanding how to interpret the acquired data and produce useful information. In the first phase of this research, it was determined that ammonium loads, measured directly in the wastewater stream, could be used to monitor the dynamics of the population served by the wastewater treatment plant. Furthermore, it was shown that, in the long term, population movements did not have a substantial impact on the consumption patterns measured through wastewater analysis. Focussing on methadone, for which precise prescription data were available, it was possible to show that reliable consumption estimates could be obtained via wastewater analysis. This validated the selected sampling strategy, which was then used to monitor the consumption of heroin through the measurement of morphine. The latter, in combination with prescription and sales data, provided estimates of heroin consumption in line with other indicators. These results, combined with epidemiological data, highlighted the good correspondence between measurements and expectations and, furthermore, suggested that the dark figure of heroin users evading harm-reduction programs, who would thus not be measured by conventional indicators, is likely limited. The third part, a collaborative study aiming to investigate geographical differences in drug use extensively, showed wastewater analysis to be a useful complement to existing indicators. In particular, for stigmatised drugs such as cocaine and heroin, it helped decipher the complex picture derived from surveys and crime statistics. Globally, it provided relevant information for better understanding the drug market, from both an epidemiological and a repressive perspective.
The fourth part focused on cannabis and on the potential of combining wastewater and survey data to overcome some of their respective limitations. Using a hierarchical inference model, it was possible to refine current estimates of cannabis prevalence in the metropolitan area of Lausanne. Wastewater results suggested that the actual prevalence is substantially higher than existing figures, supporting the common belief that surveys tend to underestimate cannabis use. Whilst affected by several biases, the information collected through surveys made it possible to overcome some of the limitations linked to the analysis of cannabis markers in wastewater (i.e., stability and limited excretion data). These findings highlighted the importance and utility of combining wastewater-based epidemiology with existing indicators of drug use. Similarly, the fifth part of the research centred on assessing the potential uses of wastewater-based epidemiology from a law enforcement perspective. Through three concrete examples, it was shown that results from wastewater analysis can be used to produce highly relevant intelligence, allowing drug enforcement to assess the structure and operations of drug distribution networks and, ultimately, to guide decisions at the tactical and/or operational level. Finally, the potential of wastewater-based epidemiology to monitor the use of harmful, prohibited and counterfeit pharmaceuticals was illustrated through the analysis of sibutramine, and its urinary metabolite, in wastewater samples. The results of this research highlight that wastewater-based epidemiology is a useful and powerful approach with numerous scopes. Faced with the complexity of measuring a hidden phenomenon like illicit drug use, it is a major addition to the panoply of existing indicators.
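The consumption estimates discussed above rest on a standard back-calculation from metabolite loads in influent wastewater. A minimal sketch follows; every figure (concentration, flow, excretion fraction, population) is a hypothetical placeholder, not data from this research:

```python
# Back-calculation sketch for wastewater-based epidemiology: from a
# metabolite concentration in influent to a per-capita drug-use
# estimate. All numbers are illustrative placeholders.

def consumption_mg_per_day_per_1000(
    conc_ng_per_l,       # metabolite concentration in the 24 h sample
    flow_l_per_day,      # influent flow of the treatment plant
    excretion_fraction,  # share of the dose excreted as this metabolite
    molar_ratio,         # MW(parent drug) / MW(metabolite)
    population,          # population served by the plant
):
    load_mg_per_day = conc_ng_per_l * flow_l_per_day / 1e6  # ng -> mg
    parent_mg = load_mg_per_day * molar_ratio / excretion_fraction
    return parent_mg / (population / 1000.0)

# Example: benzoylecgonine (main cocaine metabolite), invented values.
estimate = consumption_mg_per_day_per_1000(
    conc_ng_per_l=500.0,
    flow_l_per_day=50_000_000.0,
    excretion_fraction=0.29,    # commonly cited value, an assumption here
    molar_ratio=303.4 / 289.3,  # cocaine / benzoylecgonine
    population=200_000,
)
print(f"~{estimate:.0f} mg/day per 1000 inhabitants")
```

The uncertainty of each factor (sampling, flow measurement, excretion data, population size) propagates into the final estimate, which is why the research above devotes so much attention to validating the sampling strategy and the population estimates.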
Abstract:
One of the global targets for non-communicable diseases is to halt, by 2025, the rise in the age-standardised adult prevalence of diabetes at its 2010 levels. We aimed to estimate worldwide trends in diabetes, how likely it is for countries to achieve the global target, and how changes in prevalence, together with population growth and ageing, are affecting the number of adults with diabetes. We pooled data from population-based studies that had collected data on diabetes through measurement of its biomarkers. We used a Bayesian hierarchical model to estimate trends in diabetes prevalence (defined as fasting plasma glucose of 7.0 mmol/L or higher, a history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs) in 200 countries and territories in 21 regions, by sex, from 1980 to 2014. We also calculated the posterior probability of meeting the global diabetes target if post-2000 trends continue. We used data from 751 studies including 4,372,000 adults from 146 of the 200 countries we make estimates for. Global age-standardised diabetes prevalence increased from 4.3% (95% credible interval 2.4-7.0) in 1980 to 9.0% (7.2-11.1) in 2014 in men, and from 5.0% (2.9-7.9) to 7.9% (6.4-9.7) in women. The number of adults with diabetes in the world increased from 108 million in 1980 to 422 million in 2014 (28.5% due to the rise in prevalence, 39.7% due to population growth and ageing, and 31.8% due to the interaction of these two factors). Age-standardised adult diabetes prevalence in 2014 was lowest in northwestern Europe, and highest in Polynesia and Micronesia, at nearly 25%, followed by Melanesia and the Middle East and north Africa. Between 1980 and 2014 there was little change in age-standardised diabetes prevalence in adult women in continental western Europe, although crude prevalence rose because of ageing of the population. By contrast, age-standardised adult prevalence rose by 15 percentage points in men and women in Polynesia and Micronesia. 
In 2014, American Samoa had the highest national prevalence of diabetes (>30% in both sexes), with age-standardised adult prevalence also higher than 25% in some other islands in Polynesia and Micronesia. If post-2000 trends continue, the probability of meeting the global target of halting the rise in the prevalence of diabetes by 2025 at the 2010 level worldwide is lower than 1% for men and 1% for women. Only nine countries for men and 29 countries for women, mostly in western Europe, have a 50% or higher probability of meeting the global target. Since 1980, age-standardised diabetes prevalence in adults has increased, or at best remained unchanged, in every country. Together with population growth and ageing, this rise has led to a near quadrupling of the number of adults with diabetes worldwide. The burden of diabetes, both in terms of prevalence and number of adults affected, has increased faster in low-income and middle-income countries than in high-income countries. Funding: Wellcome Trust.
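The attribution of the rise in case numbers to prevalence, population growth, and their interaction follows the standard decomposition of a change in a product of two factors (cases = prevalence x population). A sketch with invented inputs, not the study's estimates:

```python
# Decomposing a change in case numbers into prevalence, population,
# and interaction components. Inputs below are hypothetical.

def decompose(p1, n1, p2, n2):
    """Split the change in p*n into three additive parts."""
    prevalence_effect = (p2 - p1) * n1
    population_effect = p1 * (n2 - n1)
    interaction = (p2 - p1) * (n2 - n1)
    total = p2 * n2 - p1 * n1
    return prevalence_effect, population_effect, interaction, total

prev_eff, pop_eff, inter, total = decompose(
    p1=0.045, n1=2.4e9,   # hypothetical 1980 prevalence and adult population
    p2=0.085, n2=5.0e9,   # hypothetical 2014 values
)
print(f"prevalence: {prev_eff/total:.1%}, population: {pop_eff/total:.1%}, "
      f"interaction: {inter/total:.1%}")
```

The three parts sum exactly to the total change, which is what allows the abstract to report the 28.5/39.7/31.8% split.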
Abstract:
The most suitable method for estimating size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables, such as size. It takes the form of an integral involving the probability density function (pdf) of the sizes of the individuals. Different approaches to the estimation of the pdf are compared: parametric methods, which assume that the data come from a given family of pdfs, and nonparametric methods, in which the pdf is estimated by some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions have been used to generate simulated samples using parameters estimated from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives accurate estimates of size diversity, whilst parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. Division of the data by the sample geometric mean is proposed as the most suitable standardization method, which has additional advantages: the same size diversity value is obtained whether original sizes or log-transformed data are used, and size measurements of different dimensionality (lengths, areas, volumes or biomasses) may be immediately compared with the simple addition of ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus, kernel estimation, after standardization of the data by division by the sample geometric mean, emerges as the most reliable and generalizable method of size diversity evaluation.
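A minimal sketch of the recommended procedure (geometric-mean standardization, kernel estimation of the pdf, and numerical integration of the continuous Shannon index); the simulated sample, bandwidth rule, and grid settings are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = rng.lognormal(mean=1.0, sigma=0.6, size=500)  # hypothetical sizes

# Standardize by the sample geometric mean, as proposed in the study.
geo_mean = np.exp(np.log(sizes).mean())
x = sizes / geo_mean

# Gaussian kernel density with Silverman's rule-of-thumb bandwidth.
h = 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)
grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, 2000)
pdf = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2).sum(axis=1)
pdf /= len(x) * h * np.sqrt(2 * np.pi)

# Continuous Shannon diversity H = -integral of p(x) ln p(x) dx,
# evaluated by a simple Riemann sum over the grid.
dx = grid[1] - grid[0]
safe = np.clip(pdf, 1e-300, None)   # avoid log(0) where pdf underflows
H = -np.sum(pdf * np.log(safe)) * dx
print(f"size diversity H = {H:.3f}")
```

Because the standardized data are dimensionless, the same H is obtained whether the raw sizes are lengths, areas, or volumes, up to the ln k shift described above.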
Abstract:
The objective of this master's thesis was to develop a model for mobile subscription acquisition cost (SAC) and mobile subscription retention cost (SRC) by applying activity-based cost accounting principles. The thesis was conducted as a case study for a telecommunication company operating in the Finnish telecommunication market. In addition to activity-based cost accounting, other theories were studied and applied to establish a theoretical framework for the thesis. The concepts of acquisition and retention were explored in a broader context together with the concepts of customer satisfaction, loyalty, profitability and, eventually, customer relationship management, to understand the background and meaning of the theme of this thesis. The utilization of SAC and SRC information is discussed through the theories of decision making and activity-based management. The present state and future needs of SAC and SRC information usage at the case company, as well as the functions of the company, were also examined by interviewing members of the company personnel. With the help of these theories and methods, the aim was to identify both the theory-based and the practical factors that affect the structure of the model. During the study it was confirmed that the existing SAC and SRC model of the case company should be used as the basis for developing the activity-based model. As a result, the indirect costs of the old model were transformed into activities, while the direct costs continued to be allocated directly to the acquisition of new subscriptions and the retention of old subscriptions. The refined model will enable better management of subscription acquisition, retention and the related costs through the activity information. During the interviews it was found that SAC and SRC information is also used in performance measurement and in operational and strategic planning.
SAC and SRC are not fully absorbed costs, and it was concluded that the model serves best as a source of indicative cost information. This thesis does not include cost calculations. Instead, the refined model, together with both the theory-based and the interview findings concerning the utilization of the information produced by the model, will serve as a framework for possible future development aiming at completing the model.
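The activity-based structure described above (indirect costs traced to cost objects via activity drivers, direct costs allocated straight to acquisition or retention) can be sketched as follows; all activities, drivers, and figures are invented for illustration and are not the case company's model:

```python
# Illustrative activity-based allocation of indirect costs to the two
# cost objects: subscription acquisition and subscription retention.

activities = {
    # activity: (indirect cost, driver units for acquisition, for retention)
    "order handling":    (40_000.0, 3_000, 1_000),
    "customer service":  (60_000.0, 1_000, 4_000),
    "campaign handling": (25_000.0, 2_000,   500),
}

totals = {"acquisition": 0.0, "retention": 0.0}
for name, (cost, drv_acq, drv_ret) in activities.items():
    rate = cost / (drv_acq + drv_ret)        # cost per driver unit
    totals["acquisition"] += rate * drv_acq
    totals["retention"] += rate * drv_ret

# Direct costs stay directly traced, as in the refined model
# (the cost categories here are hypothetical examples).
totals["acquisition"] += 50_000.0   # e.g. handset subsidies
totals["retention"] += 20_000.0     # e.g. loyalty discounts

new_subs, kept_subs = 3_000, 4_000
print(f"SAC = {totals['acquisition'] / new_subs:.2f} per new subscription")
print(f"SRC = {totals['retention'] / kept_subs:.2f} per retained subscription")
```

The activity rates make visible which activities drive acquisition versus retention cost, which is the managerial benefit the thesis attributes to the refined model.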
Abstract:
Induction motors are widely used in industry and are generally considered very reliable. They often have a critical role in industrial processes, and their failure can lead to significant losses as a result of shutdown times. Typical failures of induction motors can be classified into stator, rotor, and bearing failures. One cause of bearing damage, and eventually bearing failure, is bearing currents. Bearing currents in induction motors can be divided into two main categories: classical bearing currents and inverter-induced bearing currents. Bearing damage caused by bearing currents results, for instance, from electrical discharges that take place through the lubricant film between the raceways of the inner and outer rings and the rolling elements of a bearing. This phenomenon can be considered similar to that of electrical discharge machining, where material is removed by a series of rapidly recurring electrical arcing discharges between an electrode and a workpiece. This thesis concentrates on bearing currents with special reference to bearing current detection in induction motors. A bearing current detection method based on radio frequency impulse reception and detection is studied. The thesis describes how a motor can act as a “spark gap” transmitter and discusses a discharge in a bearing as a source of radio frequency impulses. It is shown that a discharge occurring due to bearing currents can be detected at a distance of several meters from the motor. The issues of interference, detection, and location techniques are discussed. The applicability of the method is demonstrated with a series of measurements on a specially constructed test motor and on an unmodified frequency-converter-driven motor. The radio frequency method studied provides a nonintrusive means of detecting harmful bearing currents in the drive system.
If bearing current mitigation techniques are applied, their effectiveness can be verified immediately with the proposed method. The method also provides a tool for estimating the harmfulness of the bearing currents by making it possible to detect and locate individual discharges inside the bearings of electric motors.
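The impulse-counting step of such a detector can be sketched in a few lines. This is a minimal sketch, not the receiver described in the thesis: the thresholding scheme, sample values, and dead time are all illustrative assumptions.

```python
# Hedged sketch: counting discharge impulses in a sampled RF envelope
# by simple thresholding with a dead time. Threshold and dead time are
# illustrative assumptions, not values from the thesis.

def count_impulses(samples, threshold, dead_samples):
    """Count threshold crossings, ignoring re-triggers inside the
    dead-time window after each detected impulse."""
    count = 0
    i = 0
    while i < len(samples):
        if samples[i] >= threshold:
            count += 1
            i += dead_samples  # hold off to avoid double-counting ringing
        else:
            i += 1
    return count

# Synthetic envelope: two impulses with ringing tails over a noise floor.
env = [0.1, 0.1, 2.5, 1.8, 0.9, 0.2, 0.1, 3.1, 2.0, 0.4, 0.1]
print(count_impulses(env, threshold=1.0, dead_samples=4))  # → 2
```

The dead time plays the same role as a hold-off in hardware trigger circuits: one physical discharge produces a damped oscillation, which should be registered as a single event rather than several.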
Resumo:
This work proposes a fully digital interface circuit for the measurement of inductive sensors using a low-cost microcontroller (µC) and without any intermediate active circuit. Apart from the µC and the sensor, the circuit requires only an external resistor and a reference inductance, so that two RL circuits with a high-pass filter (HPF) topology are formed. The µC excites these RL circuits appropriately in order to measure the discharging time of the voltage across each inductance (i.e., sensing and reference) and then uses these discharging times to estimate the sensor inductance. Experimental tests using a commercial µC show a non-linearity error (NLE) lower than 0.5% FSS (Full-Scale Span) when measuring inductances from 1 mH to 10 mH and from 10 mH to 100 mH.
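The ratiometric estimate implied by the two RL circuits can be sketched as follows. In an RL high-pass circuit the inductor voltage decays as v(t) = V0·exp(−t/(L/R)), so the time to reach a threshold is t = (L/R)·ln(V0/Vth); with a shared resistor and threshold, R, V0 and Vth cancel in the ratio of the two times. The component values below are illustrative assumptions, not the paper's measurement setup.

```python
import math

# Sketch of the ratiometric inductance estimate. R, V0, Vth, and the
# inductance values are illustrative assumptions.

def discharge_time(L, R, v0, vth):
    """Time for the inductor voltage to fall from v0 to vth in an RL HPF."""
    return (L / R) * math.log(v0 / vth)

def estimate_inductance(t_sensor, t_ref, L_ref):
    """Ratiometric estimate: R, V0 and Vth cancel out."""
    return L_ref * t_sensor / t_ref

R, V0, VTH = 1_000.0, 3.3, 1.2      # ohms, volts (assumed)
L_REF, L_TRUE = 10e-3, 47e-3        # henries (assumed)
t_ref = discharge_time(L_REF, R, V0, VTH)
t_sen = discharge_time(L_TRUE, R, V0, VTH)
print(estimate_inductance(t_sen, t_ref, L_REF))  # ≈ 0.047
```

The cancellation is what makes a purely time-based measurement attractive on a µC: only two timer captures are needed, and the estimate is insensitive to the exact excitation voltage and comparator threshold.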
Resumo:
Aims: This study was carried out to evaluate the feasibility of two different methods for determining free flap perfusion in cancer patients undergoing major reconstructive surgery. The hypothesis was that low perfusion in the flap is associated with flap complications. Patients and methods: Between August 2002 and June 2008, at the Department of Otorhinolaryngology – Head and Neck Surgery, the Department of Surgery, and the PET Centre, Turku, 30 consecutive patients with 32 free flaps were included in this study. The perfusion of the free microvascular flaps was assessed with positron emission tomography (PET) and radioactive water ([15O]H2O) in 40 radiowater injections in 33 PET studies. Furthermore, 24 free flaps were monitored with continuous tissue oxygen measurement using flexible polarographic catheters for an average of three postoperative days. Results: Of the 17 patients operated on for head and neck (HN) cancer and reconstructed with 18 free flaps, three re-operations were carried out due to poor tissue oxygenation as indicated by ptiO2 monitoring results, and three other patients were reoperated on for postoperative hematomas in the operated area. Blood perfusion assessed with PET (BFPET) was above 2.0 mL/min/100 g in all flaps, and a low flap-to-muscle BFPET ratio appeared to correlate with poor survival of the flap. Survival in this group of HN cancer patients was 9.0 months (median; range 2.4-34.2) after a median follow-up of 11.9 months (range 1.0-61.0 months). Seven HN patients of this group are alive without any sign of recurrence, and one patient has died of other causes. All of the 13 breast reconstruction patients included in the study are alive and free of disease at a median follow-up time of 27.4 months (range 13.9-35.7 months). Re-explorations were carried out in three patients due to data provided by ptiO2 monitoring, and one re-exploration was avoided on the basis of adequate blood perfusion assessed with PET.
Two patients had donor-site morbidity and three patients had partial flap necrosis or fat necrosis. There were no total flap losses. Conclusions: PtiO2 monitoring is a feasible method of free flap monitoring when the flap temperature is monitored and maintained close to the core temperature. When other monitoring methods give controversial results or are unavailable, the [15O]H2O PET technique is feasible for evaluating the perfusion of newly reconstructed free flaps.