969 results for Supplier selection problem
Abstract:
Red light running (RLR) is a problem in the US that results in 165,000 injuries and 907 fatalities annually. In Iowa, RLR-related crashes make up 24.5 percent of all crashes and account for 31.7 percent of fatal and major injury crashes at signalized intersections. RLR crashes are a safety concern due to the increased likelihood of injury compared to other types of crashes. One tool used to combat red light running is automated enforcement in the form of RLR cameras. Automated enforcement, while effective, is often controversial. Cedar Rapids, Iowa installed RLR and speeding cameras at seven intersections across the city. The intersections were chosen based on crash rates and whether cameras could feasibly be placed at the intersection approaches. Camera installation began in February 2010, with the last camera becoming operational in December 2010. An analysis of the cameras' effect on safety at these intersections was deemed prudent to help justify their installation and demonstrate their effectiveness. The objective of this research was to assess the safety effectiveness of the RLR program implemented in Cedar Rapids. This was accomplished by analyzing data to determine changes in the following metrics:
• Reductions in red light violation rates based on overall changes, time-of-day changes, and changes by lane
• Effectiveness of the cameras over time
• Time at which those running the red light enter the intersection
• Changes in the average headway between vehicles entering the intersection
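As a rough illustration of how the violation-rate and headway metrics above could be computed, the following Python sketch assumes hypothetical per-violation event records (timestamp, lane, seconds into red); the record layout and numbers are illustrative, not data from the Cedar Rapids study.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Violation:
    timestamp: datetime      # when the vehicle crossed the stop bar (hypothetical record)
    lane: int                # approach lane number
    time_into_red: float     # seconds after the signal turned red

def violation_rates_by_hour(violations, hourly_volumes):
    """Violations per 10,000 entering vehicles, grouped by hour of day."""
    counts = defaultdict(int)
    for v in violations:
        counts[v.timestamp.hour] += 1
    return {hour: 10_000 * n / hourly_volumes[hour]
            for hour, n in counts.items() if hourly_volumes.get(hour)}

def average_headway(entry_times):
    """Mean gap (seconds) between consecutive vehicles entering the intersection."""
    ordered = sorted(entry_times)
    gaps = [b - a for a, b in zip(ordered, ordered[1:])]
    return sum(gaps) / len(gaps) if gaps else float("nan")

# Made-up example data: three violations and the hourly entering volumes.
violations = [
    Violation(datetime(2010, 6, 1, 7, 15), lane=1, time_into_red=0.8),
    Violation(datetime(2010, 6, 1, 7, 40), lane=2, time_into_red=1.6),
    Violation(datetime(2010, 6, 1, 17, 5), lane=1, time_into_red=0.4),
]
volumes = {7: 1200, 17: 1500}
print(violation_rates_by_hour(violations, volumes))   # rate per 10,000 vehicles
print(average_headway([0.0, 2.1, 4.5, 6.2]))          # seconds
```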
Abstract:
The number of existing protein sequences spans a very small fraction of sequence space. Natural proteins have overcome a strong negative selective pressure to avoid the formation of insoluble aggregates. Stably folded globular proteins and intrinsically disordered proteins (IDPs) use alternative solutions to the aggregation problem: while folding in globular proteins minimizes access to aggregation-prone regions, IDPs on average display large exposed contact areas. Here, we introduce the concept of the average meta-structure correlation map to analyze sequence space. Using this novel conceptual view, we show that representative ensembles of folded and ID proteins exhibit distinct characteristics and respond differently to sequence randomization. By studying the way evolutionary constraints act on IDPs to disable a negative function (aggregation), we may gain insight into the mechanisms by which function-enabling information is encoded in IDPs.
Abstract:
Most local agencies in Iowa currently make their pavement treatment decisions based on limited experience, due primarily to the lack of a systematic decision-making framework and a decision-aid tool. The lack of objective condition assessment data on agency pavements also contributes to this problem. This study developed a systematic pavement treatment selection framework for local agencies to assist them in selecting the most appropriate treatment and to help justify their maintenance and rehabilitation decisions. The framework is based on an extensive literature review of the various pavement treatment techniques in terms of their technical applicability and limitations, meaningful practices of neighboring states, and the results of a survey of local agencies. The treatment selection framework involves three steps: pavement condition assessment, selection of technically feasible treatments using decision trees, and selection of the most appropriate treatment considering return on investment (ROI) and other non-economic factors. An Excel-based spreadsheet tool that automates the treatment selection framework was also developed, along with a standalone user guide for the tool. The Pavement Treatment Selection Tool (PTST) for Local Agencies allows users to enter the severity and extent levels of existing distresses and then recommends a set of technically feasible treatments. The tool also evaluates the ROI of each feasible treatment and, if necessary, the non-economic value of each treatment option to help determine the most appropriate treatment for the pavement. It is expected that the framework and tool will help local agencies significantly improve their pavement asset management practices and make better-justified, economically defensible decisions on pavement treatment selection.
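The three-step logic of the framework lends itself to a compact illustration. The Python sketch below assumes a made-up treatment catalog and a simplified ROI measure (added service life per dollar); the actual PTST decision trees and cost data are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Treatment:
    name: str
    cost_per_mile: float     # $ per lane-mile (illustrative figures)
    life_extension: float    # expected years of added service life
    max_severity: int        # worst distress severity (1-3) it can address

def feasible(treatments, severity):
    """Step 2: keep treatments whose applicability covers the observed severity."""
    return [t for t in treatments if severity <= t.max_severity]

def rank_by_roi(treatments):
    """Step 3: rank by a simplified ROI, added service life per dollar spent."""
    return sorted(treatments,
                  key=lambda t: t.life_extension / t.cost_per_mile,
                  reverse=True)

# Hypothetical catalog; real decision trees weigh many more distress types.
catalog = [
    Treatment("Crack seal", 2_000, 2, max_severity=1),
    Treatment("Chip seal", 18_000, 6, max_severity=2),
    Treatment("Mill and overlay", 120_000, 12, max_severity=3),
]
for t in rank_by_roi(feasible(catalog, severity=2)):
    print(t.name)
```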
Abstract:
With nearly 2,000 free and open source software (FLOSS) licenses, software license proliferation can be a major headache for software development organizations trying to speed development through software component reuse, as well as for companies redistributing software packages as components of their products. Scope is one problem: from the Free Beer license to the GPL family of licenses to platform-specific licenses such as Apache and Eclipse, the number and variety of licenses make it difficult for companies to "do the right thing" with respect to the software components in their products and applications. In addition to the sheer number of licenses, each license carries within it the author's specific definition of how the software can be used and re-used. Permissive licenses like BSD and MIT make it easy; software can be redistributed and developers can modify code without the requirement of making changes publicly available. Reciprocal licenses, on the other hand, place varying restrictions on re-use and redistribution. Woe to the developer who snags a bit of code after a simple web search without understanding the ramifications of license restrictions.
Abstract:
The research problem in the thesis deals with improving the responsiveness and efficiency of logistics service processes between a supplier and its customers. The improvement can be sought by customizing the services and increasing the coordination of activities between the different parties in the supply chain. It is argued that to achieve coordination the parties have to have connections on several levels. In the framework employed in this research, three contexts are conceptualized at which the linkages can be planned: 1) the service policy context, 2) the process coordination context, and 3) the relationship management context. The service policy context consists of the planning methods by which a supplier analyzes its customers' logistics requirements and matches them with its own operational environment and efficiency requirements. The main conclusion related to the service policy context is that it is important to have a balanced selection of both customer-related and supplier-related factors in the analysis. This way, while operational efficiency is planned, a sufficient level of service for the most important customers is assured. This kind of policy planning involves taking multiple variables into the analysis, and there is a need to develop better tools for this purpose. Some new approaches to deal with this are presented in the thesis. The process coordination context and the relationship management context deal with the issues of how the implementation of the planned service policies can be facilitated in an inter-organizational environment. Process coordination typically includes such mechanisms as control rules, standard procedures and programs, but in highly demanding circumstances more integrative coordination mechanisms may be necessary. In the thesis, the coordination problems in a third-party logistics relationship are used as an example of such an environment. Relationship management deals with issues of how separate companies organize their relationships to improve the coordination of their common processes. The main implication related to logistics planning is that by integrating further at the relationship level, companies can facilitate the use of the most efficient coordination mechanisms and thereby improve the implementation of the selected logistics service policies. In the thesis, a case of a logistics outsourcing relationship is used to demonstrate the need to address the relationship issues between the service provider and the service buyer before the outsourcing can be done. The dissertation consists of eight research articles and a summarizing report. The principal emphasis in the articles is on the service policy planning context, which is the main theme of six articles. Coordination and relationship issues are specifically addressed in two of the papers.
Abstract:
Diagnosis of community-acquired legionella pneumonia (CALP) is currently performed by means of laboratory techniques which may delay diagnosis by several hours. To determine whether artificial neural networks (ANNs) can categorize CALP and non-legionella community-acquired pneumonia (NLCAP) and serve as a standard for use by clinicians, we prospectively studied 203 patients with community-acquired pneumonia (CAP) diagnosed by laboratory tests. Twenty-one clinical and analytical variables were recorded to train a neural net with two classes (CALP or NLCAP). In this paper we deal with the problem of diagnosis, feature selection, ranking of the features as a function of their classification importance, and the design of a classifier under the criterion of maximizing the area under the ROC (receiver operating characteristic) curve, which gives a good trade-off between true positives and false negatives. To guarantee the validity of the statistics, the train-validation-test databases were rotated by the jackknife technique, and a multistart procedure was used to make the system insensitive to local maxima.
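The evaluation scheme described, ROC-area scoring with rotated splits and multiple restarts, can be sketched as follows; the data is synthetic, the split rotation is simplified to k-fold, and scikit-learn's MLPClassifier stands in for the study's neural network.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(203, 21))                        # 203 patients, 21 variables (synthetic)
y = (X[:, 0] + rng.normal(size=203) > 0).astype(int)  # synthetic CALP/NLCAP labels

fold_aucs = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
    # Multistart, simplified: train from several random initializations and
    # average the held-out ROC areas. (The study rotated train-validation-test
    # splits with the jackknife; plain k-fold is used here for brevity.)
    restarts = []
    for seed in range(3):
        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        scores = clf.predict_proba(X[test_idx])[:, 1]
        restarts.append(roc_auc_score(y[test_idx], scores))
    fold_aucs.append(np.mean(restarts))

print(f"mean ROC area across folds: {np.mean(fold_aucs):.3f}")
```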
Abstract:
The aim of the study was to find suitable performance indicators for the procurement function of a maintenance service company. The selection of indicators was limited to supplier management, the management of price development and payments, and the success of purchasing work. The study also examined which aspects of procurement the case company's internal customers were dissatisfied with. The Balanced Scorecard was chosen as the performance measurement framework to be applied. The internal customers of materials management identified the following as problem areas for customer satisfaction: the reliability of procurement, the buyers' service attitude and availability, the lack of joint operations planning and of communication about problems, rather long delivery times, and slow stock replenishment. In the buyer interviews, the problem areas of procurement that emerged were the lack of training and joint operations planning, as well as the large number of small purchase orders and small suppliers. These factors were, to a certain extent, also taken into account in the selection of indicators. The Balanced Scorecard built for materials management suits the department's current situation. The five perspectives of the scorecard contain a comprehensive number of indicators; the Balanced Scorecard functions more as a monitoring system. Over time, the focus will probably narrow to the most important indicators, at which point the system's steering effect becomes more pronounced. Once continuous improvement has been achieved in the indicators, the selection of new indicators and perspectives should be considered. A general Excel file was also created for materials management, in which the current values of each indicator are recorded and from which their development is monitored regularly. The research type of the work is constructive. The research material was collected through thematic interviews of internal customers and buyers, using qualitative research methods. Two buyers of the case company's materials management and 21 internal customers who are in contact with materials management through their work were interviewed for the study.
Abstract:
Clines in phenotypes and genotype frequencies across environmental gradients are commonly taken as evidence for spatially varying selection. Classical examples include the latitudinal clines in various species of Drosophila, which often occur in parallel fashion on multiple continents. Today, genome-wide analysis of such clinal systems provides a fantastic opportunity for unravelling the genetics of adaptation, yet major challenges remain. A well-known but often neglected problem is that demographic processes can also generate clinality, independent of or coincident with selection. A closely related issue is how to identify true genic targets of clinal selection. In this issue of Molecular Ecology, three studies illustrate these challenges and how they might be met. Bergland et al. report evidence suggesting that the well-known parallel latitudinal clines in North American and Australian D. melanogaster are confounded by admixture from Africa and Europe, highlighting the importance of distinguishing demographic from adaptive clines. In a companion study, Machado et al. provide the first genomic comparison of latitudinal differentiation in D. melanogaster and its sister species D. simulans. While D. simulans is less clinal than D. melanogaster, a significant fraction of clinal genes is shared between both species, suggesting the existence of convergent adaptation to clinally varying selection pressures. Finally, by drawing on several independent sources of evidence, Božičević et al. identify a functional network of eight clinal genes that are likely involved in cold adaptation. Together, these studies remind us that clinality does not necessarily imply selection and that separating adaptive signal from demographic noise requires great effort and care.
Abstract:
Organizing is a general problem for global firms, which seek a balance between responsiveness at the local level and efficiency through worldwide integration. In this, supply management is the focal point where external commercial supply market relations are connected with the firm's internal functions, and effective supplier relationship management (SRM) is essential. Global supply integration processes create new challenges for supply management professionals, and new capabilities are required. Previous research has developed several models and tools for managers to manage and categorize different supplier relationship types, but the role of the firm's internal capability of managing supplier relationships in their global integration has been a clearly neglected issue. Hence, the main objective of this dissertation is to clarify how the capability of SRM may influence the firm's global competitiveness. This objective is divided into four research questions aiming to identify the elements of SRM capability, the internal factors of integration, the effect of SRM capability on strategy, and how SRM capability is linked with global integration. The dissertation has two parts. The first part presents the theoretical approaches and practical implications from previous research and draws a synthesis of them. The second part comprises four empirical research papers addressing the research questions. Both qualitative and quantitative methods are utilized in this dissertation. The main contribution of this dissertation is that it aggregates the theoretical and conceptual perspectives applied to SRM research. Furthermore, given the lack of valid scales to measure capability, this study aimed to provide a foundation for an SRM capability scale by showing that the construct of SRM capability is formed of five separate elements. Moreover, SRM capability was found to be the enabler in efforts toward value chain integration. Finally, it was found that the effect of capability on global competitiveness is twofold: it reduces conflicts between responsiveness and integration, and it creates efficiency. Thus, by identifying and developing the firm's capabilities it is possible to improve performance, and hence, global competitiveness.
Abstract:
The aim of the study was to examine performance measurement, performance indicators, and their design in the wholesale and distribution business. Indicators of critical success factors help steer a company toward a common goal. Such indicators are often linked to strategic planning and implementation, and they have similarities with many strategic tools such as the Balanced Scorecard. The research problem can be stated as a question: • What are the critical success factor indicators (KPIs) for measuring suppliers and the product assortment that support Oriola KD's long-term objectives? The study is divided into a literature part and an empirical part. The literature review covers earlier research on strategy, supply chain management, supplier evaluation, and various performance measurement systems. The empirical part proceeds from a current-state analysis to the proposed critical success factor indicators, which were developed with the help of a model found in the literature. The outcome of the study is a set of critical success factor indicators for supplier and product assortment evaluation, developed for the needs of the case company.
Abstract:
Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining this with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated to not only be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work was able to be run on high-end desktops, some work was further extended so that it could be implemented on parallel computers, helping to assure that it will also scale to the NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, the models can result in overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
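Nested cross-validation, which the work identifies as necessary for honest accuracy estimates, can be illustrated briefly: the sketch below fits a filter-style variant selector inside each outer training fold, so selection bias cannot inflate the reported score. The genotype matrix is synthetic and the estimator choices are illustrative, not those of the thesis.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(300, 1000)).astype(float)   # 0/1/2 genotype codes (synthetic)
y = (X[:, :5].sum(axis=1) + rng.normal(size=300) > 5).astype(int)

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),        # filter-style variant selection
    ("clf", LogisticRegression(max_iter=1000)),
])
# Inner CV tunes how many variants to keep; outer CV estimates generalization.
inner = GridSearchCV(pipe, {"select__k": [10, 50, 200]}, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=5)
print(f"nested-CV accuracy: {outer_scores.mean():.3f}")
```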
Abstract:
The lack of research on private real estate is a well-known problem. Earlier studies have mostly concentrated on the USA or the UK. This master's thesis therefore offers more information about the performance and risk associated with private real estate investments in the Nordic countries, especially Finland. The thesis is divided into two independent sections based on the research questions. In the first section, a database analysis is performed to assess the risk-return ratio of direct real estate investment for the Nordic countries. Risk-return ratios are also assessed for different property sectors and economic regions. Finally, a review of diversification strategies based on property sectors and economic regions is performed. However, standard deviation by itself is not usually a sufficient method for evaluating the riskiness of private real estate; there is demand for a more explicit assessment of property risk. One solution is property risk scoring. In the second section, a risk-scorecard-based tool is built to make different real estate investments comparable in terms of risk. To do this, nine real estate professionals were interviewed to refine the structure of the theory-based risk scorecard and to assess weights for the different risk factors.
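The weighted-scorecard idea can be shown in a few lines. The factors and weights below are illustrative placeholders, not those elicited from the nine interviews.

```python
# Expert-assessed weights per risk factor (illustrative; weights sum to 1.0).
RISK_WEIGHTS = {
    "location": 0.30,
    "tenant_quality": 0.25,
    "building_condition": 0.20,
    "lease_length": 0.15,
    "liquidity": 0.10,
}

def risk_score(scores: dict[str, int]) -> float:
    """Weighted average of 1-5 factor scores; higher means riskier."""
    return sum(RISK_WEIGHTS[factor] * score for factor, score in scores.items())

# Two hypothetical properties scored 1 (low risk) to 5 (high risk) per factor.
office_helsinki = {"location": 1, "tenant_quality": 2, "building_condition": 3,
                   "lease_length": 2, "liquidity": 2}
retail_regional = {"location": 4, "tenant_quality": 3, "building_condition": 2,
                   "lease_length": 4, "liquidity": 5}
print(f"Helsinki office: {risk_score(office_helsinki):.2f}")
print(f"Regional retail: {risk_score(retail_regional):.2f}")
```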
Abstract:
The curse of dimensionality is a major problem in the fields of machine learning, data mining and knowledge discovery. Exhaustive search for the optimal subset of relevant features from a high-dimensional dataset is NP-hard. Sub-optimal population-based stochastic algorithms such as genetic programming (GP) and genetic algorithms (GA) are good choices for searching through large search spaces, and are usually more feasible than exhaustive and deterministic search algorithms. On the other hand, population-based stochastic algorithms often suffer from premature convergence on mediocre sub-optimal solutions. The Age Layered Population Structure (ALPS) is a novel metaheuristic for overcoming the problem of premature convergence in evolutionary algorithms, and for improving search in the fitness landscape. The ALPS paradigm uses an age measure to control breeding and competition between individuals in the population. This thesis uses a modification of the ALPS GP strategy called Feature Selection ALPS (FSALPS) for feature subset selection and classification in varied supervised learning tasks. FSALPS uses a novel frequency count system to rank features in the GP population based on evolved feature frequencies. The ranked features are translated into probabilities, which are used to control evolutionary processes such as terminal-symbol selection for the construction of GP trees and sub-trees. The FSALPS metaheuristic continuously refines the feature subset selection process while simultaneously evolving efficient classifiers through a non-converging evolutionary process that favors selection of features with high discrimination of class labels. We investigated and compared the performance of canonical GP, ALPS and FSALPS on high-dimensional benchmark classification datasets, including a hyperspectral image. Using Tukey's HSD ANOVA test at a 95% confidence interval, ALPS and FSALPS dominated canonical GP in evolving smaller but efficient trees with less bloat. FSALPS significantly outperformed canonical GP, ALPS and some feature selection strategies reported in related literature on dimensionality reduction.
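The frequency-count mechanism at the heart of FSALPS can be sketched roughly: count how often each feature appears as a terminal across the population, normalize the counts into probabilities, and sample terminals for new trees from that distribution. The implementation below is an illustrative reconstruction, not the thesis's exact update rule.

```python
import random
from collections import Counter

def terminal_probabilities(population_terminals):
    """Turn raw feature-usage counts across the GP population into probabilities."""
    counts = Counter(f for individual in population_terminals for f in individual)
    total = sum(counts.values())
    return {feature: n / total for feature, n in counts.items()}

def sample_terminal(probs, rng=random):
    """Pick a feature for a new tree node, biased toward frequently used features."""
    features = list(probs)
    return rng.choices(features, weights=[probs[f] for f in features], k=1)[0]

# Terminals used by three GP individuals (made-up feature identifiers).
population = [["x3", "x7", "x3"], ["x7", "x1"], ["x3", "x7", "x7"]]
probs = terminal_probabilities(population)
print(probs)              # x7 and x3 dominate the distribution
print(sample_terminal(probs))
```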