927 results for Customer feature selection
Abstract:
Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity-penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite-sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near-optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
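A minimal sketch of the two-stage procedure described above, in Python. The covering metric, the penalty constant and the representation of model classes as plain lists of rules are simplified stand-ins for the paper's construction, not its actual definitions:

```python
import math

def complexity_penalized_select(classes, data, cover_radius=0.1):
    """Two-stage selection: build an empirical cover of each class on the
    first half of the data, pick each class's empirical risk minimizer over
    its cover on the second half, then choose among candidates by
    empirical risk plus a complexity penalty based on cover size.
    All constants here are illustrative."""
    half = len(data) // 2
    d1, d2 = data[:half], data[half:]

    def emp_risk(f, sample):
        # squared loss averaged over the sample
        return sum((f(x) - y) ** 2 for x, y in sample) / len(sample)

    def empirical_cover(fns, sample, radius):
        # greedy cover in the empirical L2 distance on the first half
        cover = []
        for f in fns:
            if all(math.sqrt(sum((f(x) - g(x)) ** 2 for x, _ in sample)
                             / len(sample)) > radius for g in cover):
                cover.append(f)
        return cover

    best, best_score = None, float("inf")
    for fns in classes:
        cover = empirical_cover(fns, d1, cover_radius)
        # candidate: empirical risk minimizer over the cover (second half)
        cand = min(cover, key=lambda f: emp_risk(f, d2))
        # empirical complexity: grows with the size of the cover
        penalty = math.sqrt(math.log(max(len(cover), 2)) / len(d2))
        score = emp_risk(cand, d2) + penalty
        if score < best_score:
            best, best_score = cand, score
    return best
```

With noiseless data generated by $y = 2x$ and two classes of linear rules, the procedure selects a rule from the class whose cover contains a low-risk candidate.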
Abstract:
Many traits and/or strategies expressed by organisms are quantitative phenotypes. Because populations are of finite size and genomes are subject to mutations, these continuously varying phenotypes are under the joint pressure of mutation, natural selection and random genetic drift. This article derives the stationary distribution for such a phenotype under mutation-selection-drift balance in a class-structured population, allowing for demographically varying class sizes and/or changing environmental conditions. The salient feature of the stationary distribution is that it can be entirely characterized in terms of the average size of the gene pool and Hamilton's inclusive fitness effect. The exploration of the phenotypic space varies exponentially with the cumulative inclusive fitness effect over state space, which determines an adaptive landscape. The peaks of the landscape are those phenotypes that are candidate evolutionarily stable strategies and can be determined by standard phenotypic selection gradient methods (e.g. evolutionary game theory, kin selection theory, adaptive dynamics). The curvature of the stationary distribution provides a measure of the convergence stability of candidate evolutionarily stable strategies, and it is evaluated explicitly for two biological scenarios: first, a coordination game, which illustrates that, for a multipeaked adaptive landscape, stochastically stable strategies can be singled out by letting the size of the gene pool grow large; second, a sex-allocation game for diploids and haplo-diploids, which suggests that the equilibrium sex ratio follows a Beta distribution with parameters depending on the features of the genetic system.
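In symbols, the characterization above can be sketched as follows, with notation chosen here for illustration ($N$ the average size of the gene pool, $S(x)$ Hamilton's inclusive fitness effect at phenotype $x$, $z_0$ an arbitrary reference phenotype); this is a schematic reading of the abstract, not the article's exact formula:

```latex
% Stationary distribution under mutation-selection-drift balance:
% the log-density accumulates the inclusive fitness effect over
% phenotypic space, defining the adaptive landscape.
\pi(z) \;\propto\; \exp\!\left( N \int_{z_0}^{z} S(x)\,\mathrm{d}x \right)
```

Under this reading, peaks of $\pi$ sit where $S(z)=0$ with $S'(z)<0$, i.e. at candidate evolutionarily stable strategies, and letting $N$ grow large concentrates the distribution on the highest peak, singling out the stochastically stable strategy.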
Abstract:
Executive Summary The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measurement of financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model, based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model to address some long-lasting issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. 
The empirical investigation to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization over different horizons of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics relative to those of realized returns from portfolio strategies optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization only with respect to, for example, the Treynor ratio and Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, or the sequence of expected shortfalls for a range of quantiles. 
Since the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were led to conclude that the algorithm we propose leads to a portfolio returns distribution that second-order stochastically dominates those obtained with virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration, based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
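The two dominance checks used above can be sketched empirically as follows: first-order dominance as a quantile-by-quantile comparison, and second-order dominance via the absolute Lorenz curve (running sums of the sorted returns). This is a simplified sketch assuming equal-length return samples; the function names are illustrative:

```python
from itertools import accumulate

def first_order_dominates(a, b):
    """Weak empirical first-order stochastic dominance of sample `a`
    over `b`: at every common quantile, the return from `a` is at
    least as high. Assumes equal-length samples."""
    xa, xb = sorted(a), sorted(b)
    return all(u >= v for u, v in zip(xa, xb))

def second_order_dominates(a, b):
    """Empirical second-order stochastic dominance via the absolute
    Lorenz curve: cumulative sums of the order statistics (i.e. the
    total of the worst k outcomes) must lie on or above those of `b`
    at every k."""
    la = list(accumulate(sorted(a)))
    lb = list(accumulate(sorted(b)))
    return all(u >= v for u, v in zip(la, lb))
```

First-order dominance implies second-order dominance, which the cumulative-sum construction makes visible directly.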
Abstract:
BACKGROUND: As the long-term survival of pancreatic head malignancies remains dismal, efforts have been made towards better patient selection and tailored treatment. Tumour size could also be used for patient stratification. METHODS: One hundred and fourteen patients underwent a pancreaticoduodenectomy for pancreatic adenocarcinoma, peri-ampullary or biliary cancer, stratified according to tumour size: ≤20 mm, 21-34 mm, 35-45 mm and >45 mm. RESULTS: Patients with tumour sizes of ≤20 mm had a N1 rate of 41% and a R1/2 rate of 7%. The median survival was 3.4 years. N1 and R1/2 rates increased to 84% and 31% for tumour sizes of 21-34 mm (P = 0.0002 for N, P = 0.02 for R). The median survival decreased to 1.6 years (P = 0.0003). A further increase in tumour size to 35-45 mm revealed further increases in N1 and R1/2 rates, to 93% (P < 0.0001) and 33%, respectively. The median survival was 1.2 years (P = 0.004). Tumour sizes >45 mm were related to a further decreased median survival of 1.1 years (P = 0.2), whereas N1 and R1/2 rates were 87% and 20%, respectively. DISCUSSION: Tumour size is an important feature of pancreatic head malignancies. A tumour diameter of 20 mm seems to be the cut-off above which an increased rate of incomplete resections and metastatic lymph nodes must be expected and the median survival is reduced.
Abstract:
The aim of this thesis is to identify and evaluate customer needs in the separation of fine coal and liquid. The thesis first describes the coal industry and then examines coal-liquid separation in more depth, after which the focus turns to identifying customer needs. Interviews and questionnaires are used to collect existing knowledge, and Saaty's AHP model is applied to evaluate the customer needs. One of the greatest challenges in the use of clean coal technology is cost-effective separation of liquid from fine coal, which is important for minimizing freight costs, meeting quality requirements and recycling process water. According to the experts, technical features and costs are the most important attributes of a coal-water filtration solution; according to the customer, quality, technical features and support services are important. Both the customer and the experts consider high unit capacity, low moisture content of the end product and reliability to be the most important technical features. Investment costs are roughly three times as important as operating costs. According to the customer, the attributes of the equipment supplier are more important than the technological attributes.
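Saaty's AHP, used above to evaluate the customer needs, reduces pairwise comparison judgements to priority weights. A common shortcut approximates the principal eigenvector by row geometric means; the two-criterion example below (investment vs. operating costs, with the "three times as important" judgement from the abstract) is illustrative:

```python
import math

def ahp_weights(pairwise):
    """Priority weights from a Saaty-style pairwise comparison matrix,
    using the row geometric-mean approximation of the principal
    eigenvector. `pairwise[i][j]` is how much more important
    criterion i is than criterion j (reciprocal matrix assumed)."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]
```

For the 2x2 case with a ratio of 3, this yields weights of 0.75 and 0.25, consistent with investment costs being about three times as important as operating costs.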
Abstract:
Abstract The research problem in the thesis deals with improving the responsiveness and efficiency of logistics service processes between a supplier and its customers. The improvement can be sought by customizing the services and increasing the coordination of activities between the different parties in the supply chain. It is argued that to achieve coordination the parties have to have connections on several levels. In the framework employed in this research, three contexts are conceptualized at which the linkages can be planned: 1) the service policy context, 2) the process coordination context, and 3) the relationship management context. The service policy context consists of the planning methods by which a supplier analyzes its customers' logistics requirements and matches them with its own operational environment and efficiency requirements. The main conclusion related to the service policy context is that it is important to have a balanced selection of both customer-related and supplier-related factors in the analysis. This way, while the operational efficiency is planned, a sufficient level of service for the most important customers is assured. This kind of policy planning involves taking multiple variables into the analysis, and there is a need to develop better tools for this purpose. Some new approaches to deal with this are presented in the thesis. The process coordination context and the relationship management context deal with the issues of how the implementation of the planned service policies can be facilitated in an inter-organizational environment. Process coordination typically includes such mechanisms as control rules, standard procedures and programs, but in highly demanding circumstances more integrative coordination mechanisms may be necessary. In the thesis, the coordination problems in a third-party logistics relationship are used as an example of such an environment. 
Relationship management deals with issues of how separate companies organize their relationships to improve the coordination of their common processes. The main implication related to logistics planning is that by integrating further at the relationship level, companies can facilitate the use of the most efficient coordination mechanisms and thereby improve the implementation of the selected logistics service policies. In the thesis, a case of a logistics outsourcing relationship is used to demonstrate the need to address the relationship issues between the service provider and the service buyer before the outsourcing can be done. The dissertation consists of eight research articles and a summarizing report. The principal emphasis in the articles is on the service policy planning context, which is the main theme of six articles. Coordination and relationship issues are specifically addressed in two of the papers.
Abstract:
Current technology trends in the medical device industry call for the fabrication of massive arrays of microfeatures, such as microchannels, onto non-silicon material substrates with high accuracy, superior precision and high throughput. Microchannels are typical features used in medical devices for medication dosing into the human body and for analyzing DNA arrays or cell cultures. In this study, the capabilities of machining systems for micro-end milling have been evaluated by conducting experiments, regression modeling and response surface methodology. In the machining experiments using micromilling, arrays of microchannels are fabricated on aluminium and titanium plates, and the feature size and accuracy (width and depth) and surface roughness are measured. Multicriteria decision making for material and process parameter selection for desired accuracy is investigated by using the particle swarm optimization (PSO) method, an evolutionary computation method inspired by genetic algorithms (GA). Appropriate regression models are utilized within the PSO, and optimum selection of micromilling parameters for microchannel feature accuracy and surface roughness is performed. An analysis of optimal micromachining parameters in the decision variable space is also conducted. This study demonstrates the advantages of evolutionary computing algorithms in micromilling decision making and process optimization investigations and can be expanded to other applications.
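A minimal PSO of the kind used for such parameter selection might look as follows. The inertia and acceleration constants are textbook defaults, and the objective passed in stands in for a fitted regression model of, say, surface roughness as a function of feed rate and spindle speed; none of this reproduces the study's actual models:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimizer: each particle tracks its own
    best position, the swarm tracks a global best, and velocities blend
    inertia with attraction to both. Positions are clamped to bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Minimizing a toy quadratic surrogate with a known optimum illustrates the convergence behaviour.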
Abstract:
In this thesis the main objective is to examine and model a configuration system and its related processes: when and where is configuration information created in the product development process, and how is it utilized in the order-delivery process? These two processes are the essential parts of the whole configuration system from the information point of view. The empirical part of the work was done as constructive research inside a company that follows a mass customization approach. Data models and documentation are created for the different development stages of the configuration system. A base data model already existed for new structures and the relations between these structures; this model was used as the basis for the later data modeling work. The data models include the different data structures, their key objects and attributes, and the relations between them. The representation of configuration rules for the to-be configuration system was defined as one of the key focus points. Further, it is examined how customer needs and requirements information can be integrated into the product development process. A requirements hierarchy and classification system is presented, and it is shown how individual requirement specifications can be connected to the physical design structure via features by developing the existing base data model further.
Abstract:
The aim of this Master's thesis is to study customer value and how customer value can be exploited in new customer acquisition. The literature is currently shifting from a product-centric to a customer-centric perspective, one that recognizes the importance of customer value in business relationships. This study contributes to that discussion by constructing a way to measure customer value and by reflecting the results obtained against the new customer acquisition process. The empirical study was carried out in two parts, one qualitative and one quantitative. In the first part, eight potential customers were interviewed, after which the results were scaled up by conducting a survey of a large group of potential customers. The final results show that exploiting customer value in new customer acquisition is a highly effective and usable method. Customer segments based on customer value make it possible to communicate the right values to the right segments. This also gives the company the opportunity to select the most attractive customer groups and to strengthen its customer base.
Abstract:
In recent years, progress in the area of mobile telecommunications has changed our way of life, in the private as well as the business domain. Mobile and wireless networks have ever increasing bit rates, mobile network operators provide more and more services, and at the same time costs for the usage of mobile services and bit rates are decreasing. However, mobile services today still lack functions that seamlessly integrate into users' everyday life. That is, service attributes such as context-awareness and personalisation are often either proprietary, limited or not available at all. In order to overcome this deficiency, telecommunications companies are heavily engaged in the research and development of service platforms for networks beyond 3G for the provisioning of innovative mobile services. These service platforms are intended to support such service attributes by providing basic service-independent functions such as billing, identity management, context management, user profile management, etc. Instead of developing their own solutions, developers of end-user services such as innovative messaging services or location-based services can utilise the platform-side functions for their own purposes. In doing so, the platform-side support for such functions takes away complexity, development time and development costs from service developers. Context-awareness and personalisation are two of the most important aspects of service platforms in telecommunications environments. The combination of context-awareness and personalisation features can also be described as situation-dependent personalisation of services. The support for this feature requires several processing steps. The focus of this doctoral thesis is on the processing step in which the user's current context is matched against situation-dependent user preferences to find the matching user preferences for the current user's situation. 
However, to achieve this, a user profile management system and corresponding functionality is required. These parts are also covered by this thesis. Altogether, this thesis provides the following contributions: The first part of the contribution is mainly architecture-oriented. First and foremost, we provide a user profile management system that addresses the specific requirements of service platforms in telecommunications environments. In particular, the user profile management system has to deal with situation-specific user preferences and with user information for various services. In order to structure the user information, we also propose a user profile structure and the corresponding user profile ontology as part of an ontology infrastructure in a service platform. The second part of the contribution is the selection mechanism for finding matching situation-dependent user preferences for the personalisation of services. This functionality is provided as a sub-module of the user profile management system. Contrary to existing solutions, our selection mechanism is based on ontology reasoning. This mechanism is evaluated in terms of runtime performance and in terms of supported functionality compared to other approaches. The results of the evaluation show the benefits and the drawbacks of ontology modelling and ontology reasoning in practical applications.
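The preference-selection step can be illustrated with a plain dictionary matcher: a deliberately simplified stand-in for the thesis's ontology-reasoning mechanism, in which a situation matches when all of its conditions hold in the current context and the most specific matching situation wins. All keys and values are invented for illustration:

```python
def select_preferences(context, situation_prefs):
    """Pick the preference set whose situation description best matches
    the current context. `situation_prefs` is a list of
    (situation_conditions, preferences) pairs; the most specific
    matching situation (most conditions satisfied) wins."""
    best, best_score = None, -1
    for situation, prefs in situation_prefs:
        # a situation matches if all its conditions hold in the context
        if all(context.get(k) == v for k, v in situation.items()):
            # prefer the most specific matching situation
            if len(situation) > best_score:
                best, best_score = prefs, len(situation)
    return best
```

An ontology reasoner generalizes the equality check here to subsumption (e.g. "office" is a kind of "workplace"), which is precisely what the dictionary version cannot express.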
Abstract:
Supplier selection has a great impact on supply chain management. The quality of supplier selection also affects the profitability of organisations that work in the supply chain. As suppliers can provide a variety of services and customers demand higher quality of service provision, organisations face the challenge of making the right choice of supplier for the right needs. Existing methods for supplier selection, such as data envelopment analysis (DEA) and the analytic hierarchy process (AHP), can automatically perform selection of competitive suppliers and further decide the winning supplier(s). However, these methods are not capable of determining the right selection criteria, which should be derived from the business strategy. The ontology model described in this paper integrates the strengths of DEA and AHP with new mechanisms which ensure that the right supplier is selected by the right criteria for the right customer's needs.
Abstract:
Gaussian multi-scale representation is a mathematical framework that makes it possible to analyse images at different scales in a consistent manner, and to handle derivatives in a way deeply connected to scale. This paper uses Gaussian multi-scale representation to investigate several aspects of the derivation of atmospheric motion vectors (AMVs) from water vapour imagery. The contribution of different spatial frequencies to the tracking is studied for a range of tracer sizes, and a number of tracer selection methods are presented and compared, using WV 6.2 images from the geostationary satellite MSG-2.
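The core machinery, sketched in 1-D for brevity (the AMV work itself operates on 2-D water-vapour images): smoothing with a sampled Gaussian at scale sigma, and scale-consistent differentiation by convolving with the derivative of that Gaussian rather than differencing the raw signal. The kernel truncation radius and edge handling are common conventions, not the paper's choices:

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Sampled, normalized Gaussian and its first derivative at scale
    `sigma`. Convolving with `dg` differentiates the signal at that
    scale, the 'derivatives deeply connected to scale' idea."""
    if radius is None:
        radius = int(3 * sigma)  # common truncation at 3 sigma
    xs = range(-radius, radius + 1)
    g = [math.exp(-x * x / (2.0 * sigma * sigma)) for x in xs]
    s = sum(g)
    g = [v / s for v in g]
    dg = [-x / (sigma * sigma) * v for x, v in zip(xs, g)]
    return g, dg

def convolve(signal, kernel):
    """Correlation-style convolution with edge clamping."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += k * signal[idx]
        out.append(acc)
    return out
```

Applied to a step signal, the Gaussian blurs the edge while the derivative kernel responds most strongly at the edge location, which is the behaviour tracer selection exploits at each scale.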
Abstract:
We investigated the roles of top-down task set and bottom-up stimulus salience for feature-specific attentional capture. Spatially nonpredictive cues preceded search arrays that included a color-defined target. For target-color singleton cues, behavioral spatial cueing effects were accompanied by cue-induced N2pc components, indicative of attentional capture. These effects were only minimally attenuated for nonsingleton target-color cues, underlining the dominance of top-down task set over salience in attentional capture. Nontarget-color singleton cues triggered no N2pc, but instead an anterior N2 component indicative of top-down inhibition. In Experiment 2, inverted behavioral cueing effects of these cues were accompanied by a delayed N2pc to targets at cued locations, suggesting that perceptually salient but task-irrelevant visual events trigger location-specific inhibition mechanisms that can delay subsequent target selection.
Abstract:
Background: The increasing number of genomic sequences of bacteria makes it possible to select unique SNPs of a particular strain/species at the whole-genome level and thus design specific primers based on the SNPs. The high similarity of genomic sequences among phylogenetically related bacteria requires the identification of the few loci in the genome that can serve as unique markers for strain differentiation. PrimerSNP attempts to identify reliable strain-specific markers, on which specific primers are designed for pathogen detection purposes. Results: PrimerSNP is an online tool to design primers based on strain-specific SNPs for multiple strains/species of microorganisms at the whole-genome level. The allele-specific primers can distinguish query sequences of one strain from other homologous sequences by standard PCR reaction. Additionally, PrimerSNP provides a feature for designing common primers that can amplify all the homologous sequences of multiple strains/species of microorganisms. PrimerSNP is freely available at http://cropdisease.ars.usda.gov/~primer. Conclusion: PrimerSNP is a high-throughput specific primer generation tool for the differentiation of phylogenetically related strains/species. Experimental validation showed that this software had a successful prediction rate of 80.4-100% for strain-specific primer design.
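The basic idea of strain-specific SNP selection can be sketched as follows: find positions where the target strain's base differs from every aligned homolog, then place the discriminating base at the primer's 3' end, since allele-specific PCR is most sensitive to a 3'-terminal mismatch. This is a toy version assuming pre-aligned, equal-length sequences; PrimerSNP's actual pipeline is considerably more involved (melting temperature, primer length, secondary structure, etc.):

```python
def strain_specific_snps(target, homologs):
    """0-based positions where the target base differs from every
    aligned homologous sequence -- candidate strain-specific markers.
    Assumes pre-aligned, equal-length sequences; '-' gaps are skipped."""
    snps = []
    for i, base in enumerate(target):
        if base != '-' and all(h[i] != base for h in homologs):
            snps.append(i)
    return snps

def primer_ending_on_snp(target, snp_pos, length=20):
    """Slice a primer of up to `length` bases ending exactly on the SNP,
    so the discriminating allele sits at the 3' terminus."""
    start = max(0, snp_pos - length + 1)
    return target[start:snp_pos + 1]
```

On toy aligned sequences, the unique-position scan pinpoints the single discriminating site and the primer slice ends on it.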
Abstract:
The main purpose of this work is the development of computational tools to assist the on-line automatic detection of burn in the surface grinding process. Most of the parameters currently employed in burn recognition (DPO, FKS, DPKS, DIFP, among others) do not incorporate routines for automatic selection of the grinding passes, and therefore require the user's interference in the choice of the active region. Several methods were employed for pass extraction; those with the best results are presented in this article. Tests carried out on a surface-grinding machine have shown the success of the algorithms developed for pass extraction. Copyright © 2007 by ABCM.
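A generic version of such pass extraction can be sketched as a moving-RMS threshold segmentation of the monitored signal: compute a short-window RMS and keep the contiguous runs above a threshold as the active grinding passes. The window, threshold and the approach itself are illustrative; the article's own extraction methods are not reproduced here:

```python
import math

def moving_rms(signal, window):
    """Centered moving RMS of the signal (edges use a shorter window)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        seg = signal[max(0, i - half):i + half + 1]
        out.append(math.sqrt(sum(v * v for v in seg) / len(seg)))
    return out

def extract_passes(signal, window=5, threshold=0.5):
    """Return (start, end) index pairs for contiguous runs where the
    moving RMS stays at or above the threshold -- the active regions
    on which burn parameters would then be computed."""
    rms = moving_rms(signal, window)
    passes, start = [], None
    for i, v in enumerate(rms):
        if v >= threshold and start is None:
            start = i
        elif v < threshold and start is not None:
            passes.append((start, i - 1))
            start = None
    if start is not None:
        passes.append((start, len(rms) - 1))
    return passes
```

On a synthetic signal with two active bursts, the segmentation recovers two passes, each covering its burst (boundaries blur by roughly half the RMS window).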