890 results for Tchebyshev metrics
Abstract:
This thesis was carried out as a case study of the company YIT in order to identify the most severe risks for the company and to build a method for project portfolio evaluation. The target organization creates new living environments by constructing residential buildings, business premises, infrastructure and entire areas, worth EUR 1.9 billion in 2013. The company has noted that project portfolio management needs more information about the structure of the project portfolio and the possible effects of a market shock situation. The major risks for the company were evaluated by interviewing the executive staff, and at the same time the most appropriate risk metrics were considered. At the moment, sales risk was estimated to have the biggest impact on the company's business. Therefore, a project portfolio evaluation model was created, and three different scenarios for the company's future were drawn up in order to identify the scale of a possible market shock situation. The created model was tested with public and descriptive figures of YIT in a one-year-long market shock, and the impact on different metrics was evaluated. The study was conducted using constructive research methodology. The results indicate that the company has notable sales risk in certain sections of its business portfolio.
Abstract:
In this work, the feasibility of floating-gate technology in analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to get worse because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e. lack of reconfigurability, reuse of IP etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot easily be achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with their practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using "double"-oxide transistors intended for providing devices that operate with higher supply voltages than general-purpose devices. However, in practice the technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips with varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer spiking neural network in which the adaptive properties of the FGT are taken advantage of. A compact realization of Spike Timing Dependent Plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability for analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
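The abstract does not give the parameters of the STDP circuit realized with the FGT synapses, but in software terms the behaviour it implements is a pairwise exponential weight update. The sketch below shows a generic pair-based STDP rule under that assumption; the constants A_PLUS, A_MINUS, TAU_PLUS and TAU_MINUS are purely illustrative and are not values from the thesis.

```python
import math

# A minimal software sketch of pair-based STDP (not the FGT circuit itself):
# the synaptic weight is potentiated when the presynaptic spike precedes the
# postsynaptic spike and depressed otherwise. All constants are illustrative.

A_PLUS, A_MINUS = 0.01, 0.012     # learning-rate amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms (assumed)

def stdp_delta_w(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post before pre -> depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# Example: a pre-spike 5 ms before the post-spike strengthens the synapse.
print(stdp_delta_w(t_pre=10.0, t_post=15.0))
```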
Abstract:
In the past decade, customer loyalty programs have become very popular, and almost every retail chain seems to have one. Through loyalty programs, companies are able to collect information about customer behavior and to use this information in business and marketing management to guide decision making and resource allocation. The benefits for the loyalty program member are often monetary, which has an effect on the profitability of the loyalty program. Not all loyalty program members are equally profitable, as some purchase products for the recommended retail price and some buy only discounted products. If the company spends a similar amount of resources on all members, the customer margin is lower for a customer who buys only discounted products. It is vital for a company to measure the profitability of its members in order to be able to calculate customer value. Several different customer value metrics can be used for this calculation. In recent years, customer lifetime value in particular has received a lot of attention and is seen as superior to other customer value metrics. In this master's thesis, customer lifetime value is applied to the case company's customer loyalty program. The data were collected from the customer loyalty program's database and represent the year 2012 on the Finnish market. The data were not complete enough to take full advantage of customer lifetime value, and as a conclusion it can be stated that a new key performance indicator, customer margin, should be adopted in order to run the business of the customer loyalty program profitably. Through the customer margin, the company would be able to compute customer lifetime value on a regular basis, enabling efficient resource allocation in marketing.
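The abstract does not state which customer lifetime value formulation the case company would use; a minimal sketch of one standard discounted CLV calculation, with purely illustrative retention rate, discount rate and horizon, could look like this:

```python
# A minimal sketch of a standard discounted customer lifetime value (CLV)
# calculation; the retention rate, discount rate and horizon below are
# illustrative assumptions, not figures from the thesis.

def customer_lifetime_value(annual_margin: float,
                            retention_rate: float,
                            discount_rate: float,
                            years: int) -> float:
    """Sum of expected, discounted customer margins over the given horizon."""
    return sum(
        annual_margin * (retention_rate ** t) / ((1.0 + discount_rate) ** t)
        for t in range(1, years + 1)
    )

# Example: EUR 120 yearly margin, 80% retention, 10% discount rate, 5 years.
print(round(customer_lifetime_value(120.0, 0.80, 0.10, 5), 2))
```

The customer margin discussed above would supply the annual_margin input, which is why the thesis proposes it as the new key performance indicator.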
Abstract:
Bioanalytical data from a bioequivalence study were used to develop limited-sampling strategy (LSS) models for estimating the area under the plasma concentration versus time curve (AUC) and the peak plasma concentration (Cmax) of 4-methylaminoantipyrine (MAA), an active metabolite of dipyrone. Twelve healthy adult male volunteers received single 600 mg oral doses of dipyrone in two formulations at a 7-day interval in a randomized, crossover protocol. Plasma concentrations of MAA (N = 336), measured by HPLC, were used to develop LSS models. Linear regression analysis and a "jack-knife" validation procedure revealed that the AUC0-∞ and the Cmax of MAA can be accurately predicted (R² > 0.95, bias < 1.5%, precision between 3.1 and 8.3%) by LSS models based on two sampling times. Validation tests indicate that the most informative 2-point LSS models developed for one formulation provide good estimates (R² > 0.85) of the AUC0-∞ or Cmax for the other formulation. LSS models based on three sampling points (1.5, 4 and 24 h), but using different coefficients for AUC0-∞ and Cmax, predicted the individual values of both parameters for the enrolled volunteers (R² > 0.88, bias = -0.65 and -0.37%, precision = 4.3 and 7.4%) as well as for plasma concentration data sets generated by simulation (R² > 0.88, bias = -1.9 and 8.5%, precision = 5.2 and 8.7%). Bioequivalence assessment of the dipyrone formulations based on the 90% confidence interval of log-transformed AUC0-∞ and Cmax provided similar results when either the best-estimated or the LSS-derived metrics were used.
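As a rough illustration of how a two-point LSS model of this kind is built, the sketch below fits AUC as a linear function of the concentrations at two fixed sampling times by least squares; the sampling times, coefficients and synthetic data are illustrative only and are not the study's values.

```python
import numpy as np

# A minimal sketch of fitting a two-point limited-sampling strategy (LSS)
# model: AUC is regressed on the plasma concentrations measured at two fixed
# sampling times. The numbers below are synthetic, not the study data.

def fit_lss(conc_t1, conc_t2, auc):
    """Least-squares fit of AUC ~ b0 + b1*C(t1) + b2*C(t2)."""
    X = np.column_stack([np.ones_like(conc_t1), conc_t1, conc_t2])
    coeffs, *_ = np.linalg.lstsq(X, auc, rcond=None)
    return coeffs  # [b0, b1, b2]

def predict_lss(coeffs, c1, c2):
    return coeffs[0] + coeffs[1] * c1 + coeffs[2] * c2

# Synthetic example: concentrations at two sampling times vs. observed AUC.
c1 = np.array([10.2, 12.5, 9.8, 11.1])
c2 = np.array([8.1, 9.9, 7.5, 8.8])
auc = np.array([95.0, 118.0, 88.0, 104.0])
b = fit_lss(c1, c2, auc)
print(predict_lss(b, 10.0, 8.0))
```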
Abstract:
The objective of this bachelor's thesis is to examine how performance is measured in a supply chain and how the obtained results can be used to develop operations. The thesis was carried out as a literature review; the presented information and results are based on the literature of the field and on published articles. The thesis presents the measurement targets that are essential for supply chain performance, as well as the most common metrics and ready-made measurement frameworks suitable for measuring them. In addition, the thesis examines what the parties in the supply chain must take into account in projects for planning and implementing measurement, and how the measurement results can be used to improve supply chain performance. The study found that a vast number of metrics and measurement frameworks have been developed for measuring supply chain performance, from which, however, only a few should be selected case by case, with target values set for them and their development monitored regularly. Supply chain performance is worth measuring because it enables information-based decision making and leads to better competitiveness.
Abstract:
The objective of this thesis is to determine how an outsourced sales organization should be measured and steered. The first part of the thesis focuses on knowledge drawn from the literature on sales objectives, steering and measurement. The thesis was carried out as a case study, and information about the case company was gathered primarily through interviews. The current state was analyzed, and finally different solution alternatives were presented. The starting point for steering and measuring sales is the objectives and the sales strategy. When a performance measurement system is developed, it should take into account different perspectives and their needs. The special characteristics of the actors in the outsourced network should be reflected in the objectives and metrics, and one target template does not fit all parties in the network. The measurement system must consider different approaches, and therefore it should cover financial factors, market factors, customers, employees and the future. The performance measurement system and the objectives are an important part of steering, but the other central part of steering consists of intangible motivational factors, such as sales planning and openness, and their development.
Abstract:
Almost every problem of design, planning and management in technical and organizational systems involves several conflicting goals or interests. Nowadays, multicriteria decision models represent a rapidly developing area of operations research. When solving practical optimization problems, it is necessary to take into account various kinds of uncertainty due to lack of data, inadequacy of mathematical models for real processes, calculation errors, etc. In practice, this uncertainty usually leads to undesirable outcomes where the solutions are very sensitive to any changes in the input parameters. An example is investment management. Stability analysis of multicriteria discrete optimization problems investigates how the found solutions behave in response to changes in the initial data (input parameters). This thesis is devoted to the stability analysis of the problem of selecting investment project portfolios, which are optimized by considering different types of risk and the efficiency of the investment projects. The stability analysis is carried out with two approaches: qualitative and quantitative. The qualitative approach describes the behavior of solutions under small perturbations of the initial data. The stability of solutions is defined in terms of the existence of a neighborhood in the initial data space: any perturbed problem from this neighborhood is stable with respect to the set of efficient solutions of the initial problem. The other approach to stability analysis studies quantitative measures such as the stability radius. This approach gives information about the limits of perturbations of the input parameters that do not lead to changes in the set of efficient solutions. In the present thesis several results were obtained, including attainable bounds for the stability radii of Pareto optimal and lexicographically optimal portfolios of the investment problem with Savage's criterion, Wald's criterion and the criterion of extreme optimism. In addition, special classes of the problem for which the stability radii are expressed by closed formulae were identified. The investigations were carried out using different combinations of the Chebyshev, Manhattan and Hölder metrics, which made it possible to treat perturbations of the input parameters in different ways.
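For reference, the three metrics named above are the standard ones; for vectors x, y in R^n they can be written as:

```latex
% Standard definitions of the metrics named in the abstract, for x, y in R^n:
\begin{align*}
  d_\infty(x, y) &= \max_{1 \le i \le n} |x_i - y_i|
    && \text{(Chebyshev)} \\
  d_1(x, y)      &= \sum_{i=1}^{n} |x_i - y_i|
    && \text{(Manhattan)} \\
  d_p(x, y)      &= \Bigl(\sum_{i=1}^{n} |x_i - y_i|^p\Bigr)^{1/p},
    \quad p \ge 1
    && \text{(H\"older, i.e.\ } \ell_p\text{)}
\end{align*}
```

The Chebyshev and Manhattan metrics are the limiting cases p → ∞ and p = 1 of the Hölder (l_p) metric, which is why the stability radii can be studied under different combinations of them.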
Abstract:
In this research, the effectiveness of Naive Bayes and Gaussian Mixture Model classifiers in segmenting exudates in retinal images is studied, and the results are evaluated with metrics commonly used in medical imaging. In addition, a color variation analysis of retinal images is carried out to find out how effectively retinal images can be segmented using only the color information of the pixels.
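A minimal, hypothetical sketch of the kind of pixel-wise colour classification studied here, using Gaussian Naive Bayes on synthetic RGB features and evaluated with sensitivity and specificity, might look as follows; it is not the study's pipeline, data or chosen metrics.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Illustrative pixel-wise classification on colour features; the class means,
# spreads and sample counts below are assumptions for the sake of the sketch.
rng = np.random.default_rng(0)
background = rng.normal(loc=[0.4, 0.2, 0.1], scale=0.05, size=(500, 3))
exudate = rng.normal(loc=[0.9, 0.8, 0.3], scale=0.05, size=(500, 3))
X = np.vstack([background, exudate])
y = np.array([0] * 500 + [1] * 500)

clf = GaussianNB().fit(X, y)
pred = clf.predict(X)

# Sensitivity and specificity, two metrics commonly used in medical imaging.
tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```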
Abstract:
The objective of this thesis is to develop and further generalize a differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest-prototype-vector-based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is here generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After the optimal distance measures for the given data set have been determined together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure, which is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures and their parameters have been found, the resulting distances are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the previous method, the differential evolution classifier. All these DE classifiers demonstrated good results in the classification tasks.
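A much simplified sketch of the core idea, a nearest-prototype classifier whose prototype vectors are optimized by differential evolution, is given below. It uses a single fixed Euclidean distance and SciPy's differential_evolution on a standard dataset, so the pool-of-distances selection, distance-parameter optimization and aggregation described above are omitted.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_iris

# Simplified sketch: DE searches for class prototype vectors that maximise
# the training accuracy of a nearest-prototype rule (single Euclidean
# distance only; the thesis's distance pool and aggregation are not shown).
X, y = load_iris(return_X_y=True)
n_classes, n_feat = len(np.unique(y)), X.shape[1]

def accuracy_loss(flat_prototypes):
    protos = flat_prototypes.reshape(n_classes, n_feat)
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    pred = np.argmin(d, axis=1)        # nearest prototype assignment
    return -np.mean(pred == y)         # DE minimises, so negate accuracy

# One (min, max) bound per prototype coordinate, taken from the data range.
bounds = [(X[:, j].min(), X[:, j].max())
          for _ in range(n_classes) for j in range(n_feat)]
result = differential_evolution(accuracy_loss, bounds, maxiter=50,
                                seed=1, polish=False)
print("training accuracy:", -result.fun)
```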
Abstract:
Background: Recent recommendations aim to improve cardiovascular health (CVH) by encouraging the general population to meet positive and modifiable ideal CVH metrics: not smoking, being physically active, maintaining normal weight, blood pressure, blood glucose and total cholesterol levels, and following a healthy diet. Aims: The aim of the present study was to report the prevalence of ideal CVH in children and young adults and to study the associations of CVH metrics with markers of subclinical atherosclerosis. Participants and methods: The present thesis is part of the Cardiovascular Risk in Young Finns Study (Young Finns Study). Data on the associations of CVH metrics and subclinical atherosclerosis were available from 1,898 Young Finns Study participants. In addition, joint analyses were performed combining data from the International Childhood Cardiovascular Cohort (i3C) Consortium member studies from Australia and the USA. Results: None of the participants met all 7 CVH metrics, and thus none had ideal CVH in childhood; only 1% had ideal CVH as young adults. The number of CVH metrics present in childhood and adulthood predicted lower carotid artery intima-media thickness, improved carotid artery distensibility and a lower risk of coronary artery calcification. Those who improved their CVH status from childhood to adulthood had a risk of subclinical atherosclerosis comparable to that of participants who had always had a high CVH status. Conclusions: Ideal CVH proved to be rare among children and young adults. A higher number of ideal CVH metrics and improvement of CVH status between childhood and adulthood predicted a lower risk of subclinical atherosclerosis.
Abstract:
This work goes through the concept of usability in general and in the healthcare context, especially prenatal healthcare. Different frameworks and guidelines used to measure usability are considered. A collection of metrics is suggested for use at a prenatal unit of one Finnish healthcare district. The metrics consist of a set of 12 general measures and a supplementary System Usability Scale questionnaire including a Fun Toolkit Smileyometer. The metrics are tested in real-life work situations by observing meetings with patients and presenting the questionnaire to the focus group personnel. A total of 6 focus group patient meetings were observed. This work suggests that, in order to get more conclusive data from the metrics, the focus groups need to be more involved and the observation situations need to be more controlled. The revised metrics consist of the 12 general measures.
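For reference, the System Usability Scale questionnaire mentioned above is scored with a fixed rule: ten items rated 1-5, odd items contribute (rating - 1), even items contribute (5 - rating), and the sum is scaled by 2.5 to a 0-100 score. A small sketch, with an illustrative response pattern:

```python
# Standard System Usability Scale (SUS) scoring; the example ratings below
# are purely illustrative, not responses collected in this study.

def sus_score(ratings):
    """ratings: list of ten 1-5 Likert responses, item 1 first."""
    if len(ratings) != 10:
        raise ValueError("SUS requires exactly 10 item ratings")
    total = 0
    for i, r in enumerate(ratings, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a fairly positive response pattern.
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # -> 80.0
```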
Abstract:
Feature extraction is the part of pattern recognition where the sensor data is transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the next stages of the system while preserving the information essential for discriminating the data into different classes. For instance, in image analysis the raw image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used as a means for detecting features that are invariant to certain types of illumination changes. Finally, classification tries to make decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play a main role in the analysis. In the embedded domain, the pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely determined by the decisions made during the implementation phase. The implementation alternatives of LBP based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated into this framework by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented. Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which the LBPs are seen as combinations of n-tuples is also presented.
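As background for readers unfamiliar with the descriptor, the basic 3x3 LBP operator underlying these features can be sketched as follows: each of the 8 neighbours is thresholded against the centre pixel and the resulting bits are packed into an 8-bit code. This is the textbook formulation, not the MIPA4k focal-plane implementation.

```python
import numpy as np

# Basic 3x3 Local Binary Pattern (LBP) code of a grayscale patch.
def lbp_code(patch: np.ndarray) -> int:
    """LBP code of a 3x3 patch; bits taken clockwise from the top-left."""
    assert patch.shape == (3, 3)
    c = patch[1, 1]
    # Neighbour coordinates, clockwise starting from the top-left corner.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, col) in enumerate(coords):
        if patch[r, col] >= c:   # threshold neighbour against the centre
            code |= 1 << bit
    return code

patch = np.array([[10, 20, 30],
                  [40, 25, 15],
                  [ 5, 50, 60]])
print(lbp_code(patch))  # neighbours >= 25 set their corresponding bit
```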
Abstract:
To study the effect of age on the metrics of upper and lower eyelid saccades, eyelid movements of two groups of 30 subjects each were measured using computed image analysis. The subjects were divided on the basis of age into a younger group (20-30 years) and an older group (60-91 years). Eyelid saccade functions were fitted by the damped harmonic oscillator model. Amplitude and peak velocity were used to compare the effect of age on the saccades of the upper and lower eyelid. There was no statistically significant difference in saccade amplitude between groups for the upper eyelid (mean ± SEM; upward, young = 9.18 ± 0.32 mm, older = 8.93 ± 0.31 mm, t = 0.56, P = 0.58; downward, young = 9.11 ± 0.27 mm, older = 8.86 ± 0.32 mm, t = 0.58, P = 0.56). However, there was a clear decline in the peak velocity of the upper eyelid saccades of older subjects (upward, young = 59.06 ± 2.34 mm/s, older = 50.12 ± 1.95 mm/s, t = 2.93, P = 0.005; downward, young = 71.78 ± 1.78 mm/s, older = 60.29 ± 2.62 mm/s, t = 3.63, P = 0.0006). In contrast, for the lower eyelid there was a clear increase of saccade amplitude in the elderly group (upward, young = 2.27 ± 0.09 mm, older = 2.98 ± 0.15 mm, t = 4.33, P < 0.0001; downward, young = 2.21 ± 0.10 mm, older = 2.96 ± 0.17 mm, t = 3.85, P < 0.001). These data suggest that the aging process affects the metrics of the lid saccades in a different manner for each eyelid. In the upper eyelid, the lower tension exerted by a weak aponeurosis is reflected only in the peak velocity of the saccades. In the lower eyelid, age is accompanied by an increase in saccade amplitude, which indicates that the force transmission to the lid is not affected in the elderly.
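As an illustration of the model-fitting step, the sketch below fits synthetic eyelid-position data with an under-damped harmonic oscillator step response, one common parameterisation of the damped harmonic oscillator model; the parameter values and the data are illustrative assumptions, not the study's measurements or its exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative damped-harmonic-oscillator fit of an eyelid saccade:
# an under-damped step response settling at amplitude `amp` (in mm).
def saccade(t, amp, gamma, omega):
    return amp * (1.0 - np.exp(-gamma * t) *
                  (np.cos(omega * t) + (gamma / omega) * np.sin(omega * t)))

t = np.linspace(0.0, 0.5, 100)                       # time in seconds
true = saccade(t, amp=9.0, gamma=18.0, omega=35.0)   # assumed parameters
noisy = true + np.random.default_rng(0).normal(0.0, 0.05, t.size)

params, _ = curve_fit(saccade, t, noisy, p0=[8.0, 10.0, 30.0])
fitted = saccade(t, *params)
peak_velocity = np.max(np.gradient(fitted, t))       # mm/s
print("amplitude (mm):", round(params[0], 2),
      "peak velocity (mm/s):", round(peak_velocity, 1))
```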
Abstract:
The purpose of this research was to define content marketing and to discover how content marketing performance can be measured, especially on YouTube. Further, the aim was to find out what companies are doing to measure content marketing and what kinds of challenges they face in the process. In addition, preferences concerning the measurement were examined. The empirical part was conducted through multiple-case-study and cross-case-analysis methods. The qualitative data were collected from four large companies in the Finnish food and drink industry through semi-structured phone interviews. As a result of this research, a new definition for content marketing was derived. It is suggested that return on objective, in this case brand awareness and engagement, be used as the main metric of content marketing performance on YouTube. The major challenge is the nature of the industry, as companies cannot connect the outcome directly to sales.
Abstract:
This thesis examines how content marketing is used in B2B customer acquisition and how a content marketing performance measurement system is built and utilized in this context. Literature related to performance measurement, branding and buyer behavior is examined in the theoretical part in order to identify the elements influencing the design and usage of content marketing performance measurement. A qualitative case study was chosen in order to gain a deep understanding of the phenomenon studied. The case company is a Finnish software vendor which operates in B2B markets and has practiced content marketing for approximately two years. In-depth interviews were conducted with three employees from the marketing department. According to the findings, the content marketing performance measurement system's infrastructure is based on the target market's decision-making processes, the company's own customer acquisition process, a marketing automation tool and analytics solutions. The main roles of the content marketing performance measurement system are measuring performance, strategy management, and learning and improvement. Content marketing objectives in the context of customer acquisition are enhancing brand awareness, influencing brand attitude and generating leads. Both non-financial and financial outcomes are assessed by single phase-specific metrics, phase-specific overall KPIs and ratings related to a lead's involvement.