984 results for Kähler-Einstein Metrics
Abstract:
The aim of this thesis is to determine how an outsourced sales organization should be measured and steered. The first part of the work draws on the literature concerning sales objectives, steering and measurement. The work was carried out as a case study, and information about the case company was gathered primarily through interviews. The current state was analyzed, and finally various alternative solutions were presented. Objectives and sales strategy form the starting point for steering and measuring sales. When a performance measurement system is developed, it should take into account the different perspectives and their needs. The special characteristics of the actors in the outsourced network should be reflected in the objectives and metrics; a single goal template does not fit all parties in the network. The measurement system must cover different approaches, and it should therefore address financial factors, market factors, customers, employees and the future. The performance measurement system and the objectives are an important part of steering, but the other key element of steering is intangible motivational factors, such as sales planning and openness, and their development.
Abstract:
Almost every problem of design, planning and management in technical and organizational systems involves several conflicting goals or interests. Multicriteria decision models are nowadays a rapidly developing area of operations research. When solving practical optimization problems, it is necessary to take into account various kinds of uncertainty arising from lack of data, inadequacy of mathematical models to real-time processes, calculation errors, etc. In practice, this uncertainty often leads to undesirable outcomes in which the solutions are very sensitive to any changes in the input parameters. Investment management is one example. Stability analysis of multicriteria discrete optimization problems investigates how the solutions found behave in response to changes in the initial data (input parameters). This thesis is devoted to stability analysis in the problem of selecting investment project portfolios, which are optimized by considering different types of risk and the efficiency of the investment projects. The stability analysis is carried out with two approaches: qualitative and quantitative. The qualitative approach describes the behavior of solutions under small perturbations of the initial data. The stability of solutions is defined in terms of the existence of a neighborhood in the initial-data space such that every perturbed problem from this neighborhood preserves the set of efficient solutions of the initial problem. The quantitative approach studies measures such as the stability radius, which gives the limits of perturbations of the input parameters that do not lead to changes in the set of efficient solutions. In the present thesis, several results were obtained, including attainable bounds for the stability radii of Pareto-optimal and lexicographically optimal portfolios of the investment problem with Savage's criterion, Wald's criterion and the criterion of extreme optimism.
In addition, special classes of the problem in which the stability radii can be expressed by explicit formulae were identified. The investigations used different combinations of the Chebyshev, Manhattan and Hölder metrics, which made it possible to monitor perturbations of the input parameters in different ways.
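The qualitative idea behind the stability analysis can be sketched on a toy problem (not the thesis' investment model): portfolios scored on two criteria, both minimized. We compute the Pareto (efficient) set, then perturb one portfolio's scores and watch for the point at which the efficient set changes; the largest safe perturbation plays the role of a stability radius. The data and the perturbation direction below are illustrative only.

```python
def pareto_set(scores):
    """Indices of non-dominated rows when every criterion is minimized."""
    eff = []
    for i, si in enumerate(scores):
        dominated = any(
            all(a <= b for a, b in zip(sj, si)) and
            any(a < b for a, b in zip(sj, si))
            for j, sj in enumerate(scores) if j != i
        )
        if not dominated:
            eff.append(i)
    return eff

# Toy data: portfolio 3 is dominated by portfolio 1.
base = [[1, 4], [2, 2], [4, 1], [3, 3]]

def perturbed(delta):
    """Worsen portfolio 1 by +delta in both criteria (Chebyshev norm = delta)."""
    out = [row[:] for row in base]
    out[1] = [base[1][0] + delta, base[1][1] + delta]
    return out
```

Along this direction the efficient set {0, 1, 2} survives any delta below 1; at delta = 1, portfolio 3 stops being dominated and the efficient set changes, so the stability radius along this direction is 1.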
Abstract:
In this research, the effectiveness of Naive Bayes and Gaussian Mixture Model classifiers in segmenting exudates in retinal images is studied, and the results are evaluated with metrics commonly used in medical imaging. In addition, a color variation analysis of retinal images is carried out to find out how effectively retinal images can be segmented using only the color information of the pixels.
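Per-pixel color classification of this kind can be sketched with a small Gaussian Naive Bayes model. The RGB values below are made up for illustration (real retinal data and the exact features used are not given in the abstract); exudates are assumed to look bright and yellowish against a reddish fundus background.

```python
import math

def fit_gnb(X, y):
    """Per-class, per-channel Gaussian parameters plus class priors."""
    params = {}
    for c in sorted(set(y)):
        Xc = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(col) for col in zip(*Xc)]
        variances = [sum((v - m) ** 2 for v in col) / len(col) + 1e-6
                     for col, m in zip(zip(*Xc), means)]
        params[c] = (means, variances, len(Xc) / len(X))
    return params

def predict(params, pixel):
    def log_posterior(c):
        means, variances, prior = params[c]
        return math.log(prior) + sum(
            -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
            for x, m, v in zip(pixel, means, variances))
    return max(params, key=log_posterior)

# Hypothetical training pixels: bright yellowish exudates, reddish background.
X = [(230, 220, 80), (240, 210, 90), (225, 215, 70),
     (150, 60, 50), (140, 70, 60), (155, 65, 55)]
y = ["exudate", "exudate", "exudate",
     "background", "background", "background"]
model = fit_gnb(X, y)
```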
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest-prototype-vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values of all free parameters of the classifier model during the training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is here generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the optimal distance measure is selected from a predefined pool of alternative measures systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters associated with the selected distance measure. Specifically, a pool of alternative distance measures is first created, and the differential evolution algorithm is then applied to select the optimal distance measure that yields the highest classification accuracy on the current data. After the optimal distance measures for the given data set, together with their optimal parameters, have been determined, all determined distance measures are aggregated to form a single total distance measure.
The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest-prototype-vector principle; a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class vectors, selects the optimal distance measures, and determines the optimal values of the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining high classification accuracy. The results also demonstrate that it is possible to build a classifier that selects the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures and their parameters have been found, the resulting distances are aggregated into a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy. The main outcome of the work is six new generalized versions of the earlier differential evolution classifier. All these DE classifiers demonstrated good results in the classification tasks.
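The aggregation step can be illustrated with the standard OWA and GOWA operators (Yager): the values are sorted in descending order and combined with a weight vector attached to ranks rather than sources, and GOWA adds a power parameter lambda. The thesis' weight generation schemes are not given in the abstract, so the weights below are illustrative.

```python
def owa(values, weights):
    """Ordered weighted average: weights attach to ranks, not to sources."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

def gowa(values, weights, lam):
    """Generalized OWA: reduces to plain OWA when lam == 1."""
    ordered = sorted(values, reverse=True)
    return sum(w * v ** lam for w, v in zip(weights, ordered)) ** (1.0 / lam)
```

For example, aggregating three normalized distances 0.2, 0.8 and 0.5 with rank weights (0.5, 0.3, 0.2) gives 0.5*0.8 + 0.3*0.5 + 0.2*0.2 = 0.59.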
Abstract:
Background: Recent recommendations aim to improve cardiovascular health (CVH) by encouraging the general population to meet positive and modifiable ideal CVH metrics: not smoking, being physically active, maintaining normal weight, blood pressure, blood glucose and total cholesterol levels, and eating a healthy diet. Aims: The aim of the present study was to report the prevalence of ideal CVH in children and young adults and to study the associations of CVH metrics with markers of subclinical atherosclerosis. Participants and methods: The present thesis is part of the Cardiovascular Risk in Young Finns Study (Young Finns Study). Data on the associations of CVH metrics and subclinical atherosclerosis were available for 1,898 Young Finns Study participants. In addition, joint analyses were performed combining data from the International Childhood Cardiovascular Cohort (i3C) Consortium member studies from Australia and the USA. Results: None of the participants met all 7 CVH metrics, and thus had ideal CVH, in childhood, and only 1% had ideal CVH as young adults. A higher number of CVH metrics present in childhood and adulthood predicted lower carotid artery intima-media thickness, better carotid artery distensibility and a lower risk of coronary artery calcification. Those who improved their CVH status from childhood to adulthood had a risk of subclinical atherosclerosis comparable to that of participants who had always had high CVH status. Conclusions: Ideal CVH proved to be rare among children and young adults. A higher number of ideal CVH metrics and improvement in CVH status between childhood and adulthood predicted a lower risk of subclinical atherosclerosis.
Abstract:
This work reviews the concept of usability in general and in the context of healthcare, especially prenatal healthcare. Different frameworks and guidelines used to measure usability are considered. A collection of metrics is suggested for use at a prenatal unit of one Finnish healthcare district. The metrics consist of a set of 12 general measures and a supplementary System Usability Scale questionnaire including a Fun Toolkit Smileyometer. The metrics are tested in real-life work situations by observing meetings with patients and by presenting the questionnaire to the focus-group personnel. A total of 6 focus-group patient meetings were observed. This work suggests that, in order to get more conclusive data from the metrics, the focus groups need to be more involved and the observation situations need to be more controlled. The revised metrics consist of the 12 general measures.
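The System Usability Scale mentioned above has a fixed scoring rule: each of the 10 items is answered on a 1-5 scale, odd items contribute (answer - 1), even items contribute (5 - answer), and the sum is scaled by 2.5 to yield a 0-100 score. A minimal scorer:

```python
def sus_score(answers):
    """Score one completed SUS questionnaire (10 answers, each 1..5)."""
    if len(answers) != 10 or not all(1 <= a <= 5 for a in answers):
        raise ValueError("SUS needs ten answers on a 1-5 scale")
    # Odd-numbered items are positively worded, even-numbered negatively.
    total = sum((a - 1) if i % 2 == 1 else (5 - a)
                for i, a in enumerate(answers, start=1))
    return total * 2.5
```

The all-agree/all-disagree pattern [5, 1, 5, 1, ...] scores 100, and all-neutral answers score 50.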
Abstract:
Feature extraction is the part of pattern recognition where the sensor data is transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the next stages of the system, while preserving the information essential for discriminating the data into different classes. For instance, in image analysis the raw image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used as a means of detecting features that are invariant to certain types of illumination change. Finally, classification makes decisions based on the transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play the main role in the analysis. In the embedded domain, a pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely affected by the decisions made during the implementation phase. The implementation alternatives of LBP based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated into this framework, by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented.
Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which LBPs are seen as combinations of n-tuples is also presented.
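The basic LBP operator at the core of the thesis thresholds the 8 neighbours of a pixel against the centre value and packs the comparison results into one byte. Bit-ordering conventions vary in the literature; the sketch below fixes one arbitrary clockwise ordering.

```python
def lbp8(img, r, c):
    """Basic 8-neighbour Local Binary Pattern code for pixel (r, c).

    `img` is a 2-D list of grayscale values; (r, c) must not lie on the border.
    """
    center = img[r][c]
    # Clockwise from the top-left neighbour; this ordering is one convention.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:   # threshold against the centre
            code |= 1 << bit
    return code
```

A perfectly flat neighbourhood yields the all-ones code 255, since every neighbour ties with the centre.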
Abstract:
To study the effect of age on the metrics of upper and lower eyelid saccades, the eyelid movements of two groups of 30 subjects each were measured using computerized image analysis. The subjects were divided on the basis of age into a younger group (20-30 years) and an older group (60-91 years). Eyelid saccade functions were fitted with the damped harmonic oscillator model. Amplitude and peak velocity were used to compare the effect of age on the saccades of the upper and lower eyelid. There was no statistically significant difference in saccade amplitude between groups for the upper eyelid (mean ± SEM; upward, young = 9.18 ± 0.32 mm, older = 8.93 ± 0.31 mm, t = 0.56, P = 0.58; downward, young = 9.11 ± 0.27 mm, older = 8.86 ± 0.32 mm, t = 0.58, P = 0.56). However, there was a clear decline in the peak velocity of the upper eyelid saccades of older subjects (upward, young = 59.06 ± 2.34 mm/s, older = 50.12 ± 1.95 mm/s, t = 2.93, P = 0.005; downward, young = 71.78 ± 1.78 mm/s, older = 60.29 ± 2.62 mm/s, t = 3.63, P = 0.0006). In contrast, for the lower eyelid there was a clear increase of saccade amplitude in the elderly group (upward, young = 2.27 ± 0.09 mm, older = 2.98 ± 0.15 mm, t = 4.33, P < 0.0001; downward, young = 2.21 ± 0.10 mm, older = 2.96 ± 0.17 mm, t = 3.85, P < 0.001). These data suggest that the aging process affects the metrics of lid saccades differently depending on the eyelid. In the upper eyelid, the lower tension exerted by a weak aponeurosis is reflected only in the peak velocity of the saccades. In the lower eyelid, age is accompanied by an increase in saccade amplitude, which indicates that force transmission to the lid is not affected in the elderly.
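The damped harmonic oscillator fit can be sketched as follows. The abstract does not give the exact parameterization used, so an under-damped step response with hypothetical parameters is assumed here, and peak velocity is taken as the maximum of a numerical derivative of the fitted position curve.

```python
import math

def saccade_position(t, amplitude, wn, zeta):
    """Under-damped step response (assumed form): settles at `amplitude`.

    wn is the natural frequency, zeta the damping ratio (0 < zeta < 1).
    """
    wd = wn * math.sqrt(1 - zeta ** 2)           # damped natural frequency
    envelope = math.exp(-zeta * wn * t)
    return amplitude * (1 - envelope * (math.cos(wd * t) +
                                        zeta * wn / wd * math.sin(wd * t)))

def peak_velocity(amplitude, wn, zeta, dt=1e-4, horizon=1.0):
    """Maximum finite-difference velocity over the movement."""
    ts = [i * dt for i in range(int(horizon / dt))]
    xs = [saccade_position(t, amplitude, wn, zeta) for t in ts]
    return max(abs(b - a) / dt for a, b in zip(xs, xs[1:]))
```

With amplitude held fixed, lowering the natural frequency (a plausible correlate of weaker tissue tension) lowers the peak velocity while leaving the final amplitude unchanged, mirroring the upper-eyelid finding.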
Abstract:
The purpose of this research was to define content marketing and to discover how content marketing performance can be measured, especially on YouTube. Further, the aim was to find out what companies are doing to measure content marketing and what kind of challenges they face in the process. In addition, preferences concerning the measurement were examined. The empirical part was conducted through multiple-case study and cross-case analysis methods. The qualitative data was collected from four large companies in the Finnish food and drink industry through semi-structured phone interviews. As a result of this research, a new definition of content marketing was derived. It is suggested that return on objective, in this case brand awareness and engagement, be used as the main metric of content marketing performance on YouTube. The major challenge is the nature of the industry, as the companies cannot connect the outcome directly to sales.
Abstract:
Invocation: D.A.G.
Abstract:
This thesis examines how content marketing is used in B2B customer acquisition and how a content marketing performance measurement system is built and utilized in this context. Literature on performance measurement, branding and buyer behavior is examined in the theoretical part in order to identify the elements that influence the design and usage of content marketing performance measurement. A qualitative case study was chosen in order to gain a deep understanding of the phenomenon studied. The case company is a Finnish software vendor that operates in B2B markets and has practiced content marketing for approximately two years. In-depth interviews were conducted with three employees from the marketing department. According to the findings, the infrastructure of the content marketing performance measurement system is based on the target market's decision-making processes, the company's own customer acquisition process, a marketing automation tool and analytics solutions. The main roles of the content marketing performance measurement system are measuring performance, strategy management, and learning and improvement. The objectives of content marketing in the context of customer acquisition are enhancing brand awareness, influencing brand attitude and generating leads. Both non-financial and financial outcomes are assessed with individual phase-specific metrics, phase-specific overall KPIs and ratings related to a lead's involvement.
Abstract:
This research studied project performance measurement from the perspective of strategic management. The objective was to find a generic model for project performance measurement that emphasizes strategy and decision making. The research followed the guidelines of the constructive research methodology. As a result, the study suggests a model that measures projects with multiple metrics during and after projects. Measurement after the project is suggested to be linked to the strategic performance measures of the company, and the measurement should be conducted with centralized project portfolio management, e.g. by the project management office of the organization. After the project, the metrics measure the project's actual benefit realization. During the project, the metrics are universal and measure the relation of the accomplished objectives to costs, schedule and internal resource usage. The outcomes of these measures should be forecast using qualitative or stochastic methods. A solid theoretical background for the model was found in the literature covering performance measurement, projects and uncertainty. The study states that the model can be implemented in companies, a statement supported by empirical evidence from a single case study. Gathering empirical evidence about the actual usefulness of the model in companies is left to future evaluative research.
Abstract:
The vast majority of our contemporary society owns a mobile phone, which has resulted in a dramatic rise in the number of networked computers in recent years. Security issues in these computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build software with security in mind. The problem with this is how to define secure software, or how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what types of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics. This thesis focuses on the perspective of software engineers in the software design phase. The focus on the design phase means that code-level semantics and programming-language specifics are not discussed in this work. Organizational policy, management issues and the software development process are also out of scope. The first two research problems were studied using a literature review, while the third was studied using case study research. The target of the case study was a Java-based email server, Apache James, whose changelog and security-issue details were available and whose source code was accessible. The research revealed that there is a consensus in the terminology on software security. Security verification activities are commonly divided into evaluation and assurance. The focus of this work was on assurance, which means verifying one's own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good.
Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of the software, but in practice they were limited to comparing different versions of the same software. Apart from being relative, the metrics were unable to detect security issues or point out problems in the design. Furthermore, interpreting the metrics' results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues and are not suitable for daily engineering workflows. They nevertheless provide a basis for further research, since they point out the areas in which security metrics must improve if verification of security from the design is to be possible.
Abstract:
The objective of the study is to extend the existing hedging literature on commodity price risks by investigating what kind of hedging strategies can be used by companies that use bitumen as a raw material in their production. Five alternative swap hedging strategies in the bitumen markets are empirically tested: a full hedge strategy; simple, conservative and aggressive term structure strategies; and an implied volatility strategy. The effectiveness of the alternative strategies is measured by excess returns compared to a no-hedge strategy. In addition, the downside risk of each strategy is measured with target absolute semi-deviation. The results indicate that none of the tested strategies outperforms the no-hedge strategy in terms of excess returns across all maturities. The best-performing strategy, the aggressive term structure strategy, succeeds in creating positive excess returns only at short maturities. However, risk seems to increase hand in hand with the excess returns, so that the best-performing strategies also receive the highest risk metrics. This implies that a company willing to gain from favorable price movements must be ready to bear greater risk. Thus, no hedging strategy superior to the others is found.
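The two evaluation measures mentioned in the abstract can be sketched as follows. The exact definition of target absolute semi-deviation is not spelled out there, so the common "mean absolute shortfall below the target" form is assumed here, with a zero-return target.

```python
def excess_return(strategy_returns, no_hedge_returns):
    """Mean per-period return difference against the no-hedge benchmark."""
    pairs = list(zip(strategy_returns, no_hedge_returns))
    return sum(s - b for s, b in pairs) / len(pairs)

def target_semi_deviation(returns, target=0.0):
    """Mean absolute shortfall below `target` (one common definition;
    the thesis may use a variant)."""
    return sum(target - r for r in returns if r < target) / len(returns)
```

For returns of (-2%, +1%, -1%, +3%) against a 0% target, only the two negative periods contribute, giving a semi-deviation of (0.02 + 0.01) / 4 = 0.0075.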
Abstract:
During the maintenance phase of web services, one wants to ensure that changes made to a service do not cause error situations and that the service works flawlessly. Acceptance testing of a change can be performed as regression testing by comparing the state of the service before and after the change. In a content-oriented web service, testing focuses on the semantic and visual correctness of the page presented to the end user, as well as on various functional tests. This thesis focuses in particular on the maintenance of web services built with the popular WordPress content management system. A central part of maintaining services built with content management systems is updating the system and its plugins to current versions. These updates not only bring new features for the developers of the service but also patch security vulnerabilities and fix bugs found in earlier versions. In this work, the target company's existing web service maintenance processes were improved based on development needs identified in them. The renewed whole is divided into two parts: monitoring the need for updates and performing the updates. For monitoring update needs, a new tool was developed to make the overall picture easier to grasp. For performing updates, the work concentrated on developing automated regression testing, in which the most important testing method is visual testing based on comparing screenshots captured from the service. Monitoring targets were also defined for the new maintenance processes in order to evaluate the success of the reform and to guide further development.
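The screenshot-comparison idea can be sketched as a pixel-difference check. The company's actual tooling is not named in the abstract, so this is a minimal stand-in operating on grayscale pixel matrices, with a tolerance for rendering noise and a threshold on the fraction of changed pixels.

```python
def changed_pixel_ratio(before, after, tol=8):
    """Fraction of pixels whose grayscale value differs by more than `tol`
    between two same-sized screenshots (2-D lists of 0-255 values)."""
    total = changed = 0
    for row_before, row_after in zip(before, after):
        for pb, pa in zip(row_before, row_after):
            total += 1
            if abs(pb - pa) > tol:
                changed += 1
    return changed / total

def regression_passes(before, after, max_changed_ratio=0.01):
    """Accept the update only if almost nothing changed visually."""
    return changed_pixel_ratio(before, after) <= max_changed_ratio
```

In practice the before-image would be captured prior to the CMS or plugin update and the after-image from the updated service, with a visual diff flagged for manual review whenever the check fails.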