915 results for Model Classification
Abstract:
Despite being rare cancers, testicular seminoma and non-seminoma play an important role in oncology: they represent a model for how to optimize radiological follow-up, aiming at the lowest possible radiation exposure and secondary cancer risk. Males diagnosed with testicular cancer, often at a young age, frequently undergo prolonged follow-up with CT scans, with potentially toxic side effects, in particular secondary cancers. To reduce the risks linked to ionizing radiation, precise follow-up protocols have been developed. The number of recommended CT scans has been significantly reduced over the last 10 years. CT scanners have also evolved technically, and new acquisition protocols have the potential to reduce radiation exposure further.
Abstract:
Anthropomorphic model observers are mathematical algorithms which are applied to images with the ultimate goal of predicting human signal detection and classification accuracy across varieties of backgrounds, image acquisitions and display conditions. A limitation of current channelized model observers is their inability to handle irregularly-shaped signals, which are common in clinical images, without a high number of directional channels. Here, we derive a new linear model observer based on convolution channels which we refer to as the "Filtered Channel observer" (FCO), as an extension of the channelized Hotelling observer (CHO) and the nonprewhitening with an eye filter (NPWE) observer. In analogy to the CHO, this linear model observer can take the form of a single template with an external noise term. To compare with human observers, we tested signals with irregular and asymmetrical shapes, spanning sizes from lesions down to microcalcifications, in 4-AFC breast tomosynthesis detection tasks, with three different contrasts for each case. Whereas humans uniformly outperformed conventional CHOs, the FCO outperformed humans for every signal with only one exception. Additive internal noise in the models allowed us to degrade model performance and match human performance. We could not match all human performances with a single internal noise component across all signal shape, size and contrast conditions. This suggests that either the internal noise varies across signals or that the model cannot entirely capture the human detection strategy. However, the FCO model offers an efficient way to approximate human observer performance for non-symmetric signals.
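The channelized-observer family referenced above lends itself to a compact illustration. The following is a minimal sketch of a channelized Hotelling observer (the CHO baseline that the FCO extends), assuming simple isotropic Gaussian channels and synthetic image patches; it is not the authors' FCO implementation, and `gaussian_channels`, `cho_template`, and `decision_variable` are illustrative names.

```python
import numpy as np

def gaussian_channels(size, widths):
    """Simple isotropic Gaussian channel templates (an illustrative stand-in
    for the Gabor / difference-of-Gaussian channels used in practice)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x**2 + y**2
    chans = [np.exp(-r2 / (2.0 * w**2)).ravel() for w in widths]
    U = np.stack(chans, axis=1)                 # (pixels, n_channels)
    return U / np.linalg.norm(U, axis=0)

def cho_template(signal_imgs, noise_imgs, U):
    """Channelized Hotelling observer template computed in channel space."""
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U   # signal-present channel outputs
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ U     # signal-absent channel outputs
    K = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    return np.linalg.solve(K, vs.mean(0) - vn.mean(0))   # Hotelling template

def decision_variable(img, U, w, internal_noise_sd=0.0):
    """Scalar test statistic; optional additive internal noise degrades
    performance toward human levels, as discussed in the abstract."""
    return img.ravel() @ U @ w + np.random.normal(0.0, internal_noise_sd)

# Example channel bank: U = gaussian_channels(64, widths=[2, 4, 8, 16])
```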
Abstract:
The main objective of the study is to form a framework that provides tools to recognise and classify items whose demand is not smooth but varies greatly in size and/or frequency. The framework is then combined with two other classification methods to form a three-dimensional classification model. Forecasting and inventory control of these abnormal demand items is difficult. Therefore, another objective of this study is to find out which statistical forecasting method is most suitable for forecasting abnormal demand items. The accuracy of the different methods is measured by comparing the forecasts to the actual demand. Moreover, the study also aims at finding proper alternatives for the inventory control of abnormal demand items. The study is quantitative and the methodology is a case study. The research methods consist of theory, numerical data, current state analysis and testing of the framework in the case company. The results of the study show that the framework makes it possible to recognise and classify abnormal demand items. It is also noticed that the inventory performance of abnormal demand items differs significantly from that of items with smooth demand. This makes the recognition of abnormal demand items very important.
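The abstract does not spell out the classification criteria of the framework; the sketch below uses the common ADI / CV² demand-pattern scheme (with the usual Syntetos-Boylan cut-offs) purely as an illustration of how "abnormal" demand items might be recognised from a demand history.

```python
import numpy as np

def classify_demand(demand, adi_cut=1.32, cv2_cut=0.49):
    """Illustrative demand-pattern classification using ADI (average inter-demand
    interval) and CV^2 (squared coefficient of variation of non-zero demand);
    the thesis's own framework may use different criteria and cut-offs."""
    demand = np.asarray(demand, dtype=float)
    nonzero = demand[demand > 0]
    if nonzero.size == 0:
        return "no demand"
    adi = len(demand) / len(nonzero)               # average interval between demand occurrences
    cv2 = (nonzero.std() / nonzero.mean()) ** 2    # variability of demand sizes
    if adi <= adi_cut and cv2 <= cv2_cut:
        return "smooth"
    if adi > adi_cut and cv2 <= cv2_cut:
        return "intermittent"
    if adi <= adi_cut and cv2 > cv2_cut:
        return "erratic"
    return "lumpy"                                 # abnormal: both infrequent and highly variable

print(classify_demand([0, 0, 12, 0, 0, 0, 90, 0, 3, 0, 0, 40]))  # -> "lumpy"
```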
Abstract:
Over the past few decades, age estimation of living persons has represented a challenging task for many forensic services worldwide. In general, the process of age estimation includes the observation of the degree of maturity reached by some physical attributes, such as the dentition or several ossification centers. The estimated chronological age, or the probability that an individual belongs to a meaningful class of ages, is then obtained from the observed degree of maturity by means of various statistical methods. Among these methods, those developed in a Bayesian framework offer users the possibility of coherently dealing with the uncertainty associated with age estimation and of assessing in a transparent and logical way the probability that an examined individual is younger or older than a given age threshold. Recently, a Bayesian network for age estimation has been presented in the scientific literature; this kind of probabilistic graphical tool may facilitate the use of the probabilistic approach. Probabilities of interest in the network are assigned by means of transition analysis, a parametric statistical model which links the chronological age and the degree of maturity through specific regression models, such as logit or probit models. Since different regression models can be employed in transition analysis, the aim of this paper is to study the influence of the model on the classification of individuals. The analysis was performed using a dataset on the ossification status of the medial clavicular epiphysis, and the results support that the classification of individuals does not depend on the choice of regression model.
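To make the transition-analysis idea concrete, here is a minimal sketch assuming a logit link between chronological age and the probability of having reached the mature stage of the medial clavicular epiphysis, combined with a prior over age via Bayes' theorem; the coefficients, prior, and age grid are hypothetical, not values from the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import expit

b0, b1 = -12.0, 0.55                      # hypothetical logit regression parameters

def p_mature_given_age(age):
    """Transition-analysis style logit model: P(mature stage | age)."""
    return expit(b0 + b1 * age)

ages = np.arange(10, 36)                  # discretised age support (illustrative)
prior = norm.pdf(ages, loc=22, scale=5)   # hypothetical prior over age
prior /= prior.sum()

# Posterior P(age | mature stage observed) via Bayes' theorem, then the
# probability that the examined individual is at or above the 18-year threshold.
post = p_mature_given_age(ages) * prior
post /= post.sum()
p_over_18 = post[ages >= 18].sum()
print(f"P(age >= 18 | mature stage) = {p_over_18:.2f}")
```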
Abstract:
Objective: We used demographic and clinical data to design practical classification models for the prediction of neurocognitive impairment (NCI) in people with HIV infection. Methods: The study population comprised 331 HIV-infected patients with available demographic, clinical, and neurocognitive data collected using a comprehensive battery of neuropsychological tests. Classification and regression trees (CART) were developed to obtain detailed and reliable models to predict NCI. Following a practical clinical approach, NCI was considered the main variable for study outcomes, and analyses were performed separately in treatment-naïve and treatment-experienced patients. Results: The study sample comprised 52 treatment-naïve and 279 treatment-experienced patients. In the first group, the variables identified as the best predictors of NCI were CD4 cell count and age (correct classification [CC]: 79.6%, 3 final nodes). In treatment-experienced patients, the variables most closely related to NCI were years of education, nadir CD4 cell count, central nervous system penetration-effectiveness score, age, employment status, and confounding comorbidities (CC: 82.1%, 7 final nodes). In patients with an undetectable viral load and no comorbidities, we obtained a fairly accurate model in which the main variables were nadir CD4 cell count, current CD4 cell count, time on current treatment, and past highest viral load (CC: 88%, 6 final nodes). Conclusion: Practical classification models to predict NCI in HIV infection can be obtained using demographic and clinical variables. An approach based on CART analyses may facilitate screening for HIV-associated neurocognitive disorders and complement clinical information about risk and protective factors for NCI in HIV-infected patients.
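A minimal CART sketch in the spirit of the analysis described above, using scikit-learn on synthetic data; the feature names mirror the predictors listed in the abstract, but the data, outcome rule, and tree settings are illustrative, not the study's actual model.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame({
    "age": rng.integers(20, 70, n),
    "education_years": rng.integers(4, 20, n),
    "nadir_cd4": rng.integers(10, 800, n),
    "current_cd4": rng.integers(100, 1200, n),
    "cpe_score": rng.integers(4, 12, n),
})
# Synthetic outcome loosely tied to nadir CD4 and education, for illustration only.
y = ((X["nadir_cd4"] < 200) & (X["education_years"] < 10)).astype(int)

# Pruned tree; the leaves correspond to the "final nodes" reported in the study.
cart = DecisionTreeClassifier(max_leaf_nodes=7, min_samples_leaf=20, random_state=0)
cart.fit(X, y)
print(export_text(cart, feature_names=list(X.columns)))
print("correct classification:", cart.score(X, y))
```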
Abstract:
The ability to recognize a shape is linked to figure-ground (FG) organization. Cell preferences appear to be correlated across contrast-polarity reversals and mirror reversals of polygon displays, but not so much across FG reversals. Here we present a network structure which explains both shape-coding by simulated IT cells and suppression of responses to FG reversed stimuli. In our model FG segregation is achieved before shape discrimination, which is itself evidenced by the difference in spiking onsets of a pair of output cells. The studied example also includes feature extraction and illustrates a classification of binary images depending on the dominance of vertical or horizontal borders.
Abstract:
We investigate what processes may underlie heterogeneity in social preferences. We address this question by examining participants' decisions and associated response times across 12 mini-ultimatum games. Using a finite mixture model and cross-validating its classification with a response time analysis, we identified four groups of responders: one group takes little to no account of the proposed split or the foregone allocation and swiftly accepts any positive offer; two groups process primarily the objective properties of the allocations (fairness and kindness) and need more time the more properties need to be examined; and a fourth group, which takes more time than the others, appears to take into account what they would have proposed had they been put in the role of the proposer. We discuss implications of this joint decision-response time analysis.
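As a rough illustration of the finite mixture approach, the sketch below implements a tiny EM algorithm for a Bernoulli (latent-class) mixture over binary accept/reject decisions across the 12 games; the actual model specification and the response-time cross-validation used in the study are not reproduced here.

```python
import numpy as np

def bernoulli_mixture_em(X, k=4, n_iter=200, seed=0):
    """EM for a Bernoulli mixture. X: (n_responders, n_games) binary array of
    accept (1) / reject (0) decisions. Returns class weights, per-game
    acceptance probabilities per class, and a hard classification."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                        # latent class weights
    theta = rng.uniform(0.25, 0.75, size=(k, d))    # acceptance probability per class and game
    for _ in range(n_iter):
        # E-step: responsibilities of each latent class for each responder
        log_p = (X @ np.log(theta).T) + ((1 - X) @ np.log(1 - theta).T) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update class weights and acceptance probabilities
        pi = r.mean(axis=0)
        theta = np.clip((r.T @ X) / r.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, r.argmax(axis=1)

# usage sketch: pi_, theta_, groups = bernoulli_mixture_em(decisions, k=4)
```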
Abstract:
Changes in the angle of illumination incident upon a 3D surface texture can significantly alter its appearance, implying variations in the image texture. These texture variations produce displacements of class members in the feature space, increasing the failure rates of texture classifiers. To avoid this problem, a model-based texture recognition system which classifies textures seen from different distances and under different illumination directions is presented in this paper. The system works on the basis of a surface model obtained by means of 4-source colour photometric stereo, which is used to generate 2D image textures under different illumination directions. The recognition system combines co-occurrence matrices for feature extraction with a nearest-neighbour classifier. Moreover, the system also allows one to estimate the approximate direction of the illumination used to capture the test image.
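A rough sketch of the recognition pipeline described above: grey-level co-occurrence features fed to a nearest-neighbour classifier (here with scikit-image and scikit-learn). The photometric-stereo relighting step that generates the training images under different illumination directions is assumed to have been done elsewhere.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(img_u8):
    """Co-occurrence features for one uint8 grey-level image."""
    glcm = graycomatrix(img_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

def train_texture_classifier(train_images, labels, k=1):
    """train_images: list of uint8 2D arrays rendered under several illumination
    directions; labels: texture class of each image."""
    X = np.stack([glcm_features(im) for im in train_images])
    return KNeighborsClassifier(n_neighbors=k).fit(X, labels)

def classify_texture(clf, test_image):
    return clf.predict(glcm_features(test_image)[None, :])[0]
```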
Abstract:
We propose a probabilistic object classifier for outdoor scene analysis as a first step in solving the problem of scene context generation. The method begins with a top-down control, which uses previously learned models (appearance and absolute location) to obtain an initial pixel-level classification. This information provides us with the cores of objects, which are used to acquire a more accurate object model. Growing these cores with specific active regions then allows us to obtain an accurate recognition of known regions. Next, a general segmentation stage provides the segmentation of unknown regions by a bottom-up strategy. Finally, the last stage performs a region fusion of known and unknown segmented objects. The result is both a segmentation of the image and a recognition of each segment as a given object class or as an unknown segmented object. Furthermore, experimental results are shown and evaluated to prove the validity of our proposal.
Abstract:
The topological solitons of two classical field theories, the Faddeev-Skyrme model and the Ginzburg-Landau model, are studied numerically and analytically in this work. The aim is to gain information on the existence and properties of these topological solitons, their structure and behaviour under relaxation. First, the conditions and mechanisms leading to the possibility of topological solitons are explored from the field-theoretical point of view. This leads one to consider continuous deformations of the solutions of the equations of motion. The results of algebraic topology necessary for the systematic treatment of such deformations are reviewed and methods of determining the homotopy classes of topological solitons are presented. The Faddeev-Skyrme and Ginzburg-Landau models are presented, some earlier results are reviewed and the numerical methods used in this work are described. The topological solitons of the Faddeev-Skyrme model, Hopfions, are found to follow the same mechanisms of relaxation in three different domains with three different topological classifications. For two of the domains, the necessary but unusual topological classification is presented. Finite-size topological solitons are not found in the Ginzburg-Landau model, and a scaling argument is used to suggest that there are indeed none unless a certain modification to the model, due to R. S. Ward, is made. In that case, the Hopfions of the Faddeev-Skyrme model are seen to be present for some parameter values. A boundary in the parameter space separating the region where the Hopfions exist from the region where they do not exist is found, and the behaviour of the Hopfion energy on this boundary is studied.
Abstract:
Female sexual dysfunctions, including desire, arousal, orgasm and pain problems, have been shown to be highly prevalent among women around the world. The etiology of these dysfunctions is unclear, but associations with health, age, psychological problems, and relationship factors have been identified. Genetic effects explain individual variation in orgasm function to some extent, but until now quantitative behavior genetic analyses have not been applied to other sexual functions. In addition, behavior genetics can be applied to exploring the cause of any observed comorbidity between the dysfunctions. Discovering more about the etiology of the dysfunctions may further improve the classification systems, which are currently under intense debate. The aims of the present thesis were to evaluate the psychometric properties of a Finnish-language version of a commonly used questionnaire for measuring female sexual function, the Female Sexual Function Index (FSFI), in order to investigate prevalence, comorbidity, and classification, and to explore the balance of genetic and environmental factors in the etiology as well as the associations of a number of biopsychosocial factors with female sexual functions. Female sexual functions were studied through survey methods in a population-based sample of Finnish twins and their female siblings. There were two waves of data collection. The first data collection targeted 5,000 female twins aged 33–43 years and the second 7,680 female twins aged 18–33 and their female siblings aged over 18 years (n = 3,983). There was no overlap between the data collections. The combined overall response rate for both data collections was 53% (n = 8,868), with a better response rate in the second (57%) compared to the first (45%). In order to measure female sexual function, the FSFI was used. It includes 19 items which measure female sexual function during the previous four weeks in six subdomains: desire, subjective arousal, lubrication, orgasm, sexual satisfaction, and pain. In line with earlier research in clinical populations, a six-factor solution of the Finnish-language version of the FSFI received support. The internal consistencies of the scales were good to excellent. Questions were raised about how to avoid overestimating the prevalence of extreme dysfunctions, given that women were allocated a score of zero if they had had no sexual activity during the preceding four weeks. The prevalence of female sexual dysfunctions per se ranged from 11% for lubrication dysfunction to 55% for desire dysfunction. The prevalence rates for sexual dysfunction with concomitant sexual distress, in other words, sexual disorders, were notably lower, ranging from 7% for lubrication disorder to 23% for desire disorder. The comorbidity between the dysfunctions was substantial, most notably between arousal and lubrication dysfunction, even if these two dysfunctions showed distinct patterns of associations with the other dysfunctions. Genetic influences on individual variation in the six subdomains of the FSFI were modest but significant, ranging from 3–11% for additive genetic effects and 5–18% for nonadditive genetic effects. The rest of the variation in sexual functions was explained by nonshared environmental influences. A correlated factor model, including additive and nonadditive genetic effects and nonshared environmental effects, had the best fit. All in all, every correlation between the genetic factors was significant except between lubrication and pain.
All correlations between the nonshared environment factors were significant, showing that there is a substantial overlap in genetic and nonshared environmental influences between the dysfunctions. In general, psychological problems, poor satisfaction with the relationship, sexual distress, and poor partner compatibility were associated with more sexual dysfunctions. Age was confounded with relationship length but had, over and above relationship length, a negative effect on desire and sexual satisfaction and a positive effect on orgasm and pain functions. Alcohol consumption in general was associated with better desire, arousal, lubrication, and orgasm function. Women pregnant with their first child had fewer pain problems than nulliparous nonpregnant women. Multiparous pregnant women had more orgasm problems compared to multiparous nonpregnant women. Having children was associated with fewer orgasm and pain problems. The conclusions were that desire, subjective arousal, lubrication, orgasm, sexual satisfaction, and pain are separate entities that have distinct associations with a number of different biopsychosocial factors. However, there is also considerable comorbidity between the dysfunctions, which is explained by overlap in additive genetic, nonadditive genetic and nonshared environmental influences. Sexual dysfunctions are highly prevalent and are not always associated with sexual distress, and this relationship might be moderated by a good relationship and compatibility with the partner. Regarding classification, the results support separate diagnoses for subjective arousal and genital arousal as well as the inclusion of pain under sexual dysfunctions.
Abstract:
Geographic Information System (GIS) software is an indispensable tool in forest planning. In forestry transportation, GIS can manage data on the road network and solve transportation problems such as route planning. Therefore, the aim of this study was to determine the pattern of the road network and define transport routes using GIS technology. The research was conducted in a forestry company in the state of Minas Gerais, Brazil. The criteria used to classify the pattern of the forest roads were horizontal geometry, vertical geometry, and pavement type. In order to determine transport routes, a network analysis data model was created in ArcGIS using the Network Analyst extension, allowing the shortest and the fastest routes to be found. The results showed a predominance of the average (3) and bad (4) horizontal geometry classes, indicating the presence of winding roads. For the vertical geometry criterion, the highly mountainous relief class (4) comprised the greatest extent of roads. Regarding the type of pavement, the occurrence of secondary coating was highest (75%), followed by primary coating (20%) and asphalt pavement (5%). The best route was the one that allowed the transport vehicle to travel at a higher specific speed as a function of the road pattern found in the study.
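As an illustration of the route-finding step, the sketch below uses networkx instead of the ArcGIS Network Analyst extension used in the study; the road classes, lengths, and speeds are invented for the example.

```python
import networkx as nx

road_speed_kmh = {1: 60, 2: 45, 3: 30, 4: 15}   # hypothetical speed by road-pattern class

G = nx.DiGraph()
# edges: (from, to, length in km, road-pattern class)
for u, v, length, cls in [("depot", "A", 12, 3), ("A", "mill", 20, 4),
                          ("depot", "B", 18, 2), ("B", "mill", 17, 1)]:
    G.add_edge(u, v, length=length,
               time=length / road_speed_kmh[cls])   # travel time in hours

shortest = nx.shortest_path(G, "depot", "mill", weight="length")
fastest = nx.shortest_path(G, "depot", "mill", weight="time")
print("shortest route:", shortest)   # minimises distance
print("fastest route:", fastest)     # minimises travel time given road pattern
```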
Abstract:
This thesis studies the development of a service offering model that creates added value for customers in the field of logistics services. The study focuses on the classification of offerings and the structure of the model. The purpose of the model is to provide value-added solutions for customers and enable a superior service experience. The aim of the thesis is to define what customers expect from a logistics solution provider and what value customers appreciate so greatly that they would invest in value-added services. Value propositions, the cost structures of offerings, and appropriate pricing methods are studied. First, a literature review on creating a solution business model and on customer value is conducted. Customer value is identified through customer interviews, and qualitative empirical data are used. To exploit expert knowledge of logistics, an innovation workshop tool is utilized. Customers and experts are involved in the design process of the model. As a result of the thesis, a three-level value-added service offering model is created based on empirical and theoretical data. Offerings with value propositions are proposed, and the level of the model reflects the depth of the customer-provider relationship and the amount of added value. Performance efficiency improvements and cost savings create the most added value for customers. Value-based pricing methods, such as performance-based models, are suggested for application. The results indicate interest in benefiting from networks and partnerships in the field of logistics services. Further investigation of network development is proposed.
Abstract:
The open innovation paradigm states that the boundaries of the firm have become permeable, allowing knowledge to flow inwards and outwards to accelerate internal innovations and to take unused knowledge to the external environment, respectively. The successful implementation of open innovation practices in firms like Procter & Gamble, IBM, and Xerox, among others, suggests that it is a sustainable trend which could provide a basis for achieving competitive advantage. However, implementing open innovation can be a complex process which involves several domains of management, and whose terminology, classification, and practices have not been fully agreed upon. Thus, with many possible ways to address open innovation, the following research question was formulated: How could Ericsson LMF assess which open innovation mode to select depending on the attributes of the project at hand? The research followed the constructive research approach, which has the following steps: find a practically relevant problem, obtain a general understanding of the topic, innovate the solution, demonstrate that the solution works, show the theoretical contributions, and examine the scope of applicability of the solution. The research involved three phases of data collection and analysis: an extensive literature review of open innovation, strategy, business models, innovation, and knowledge management; direct observation of the environment of the case company through participative observation; and semi-structured interviews based on six cases involving multiple and heterogeneous open innovation initiatives. Results from the cases suggest that the selection of modes depends on multiple reasons, with a stronger influence of factors related to strategy, business models, and resource gaps. Based on these and other factors found in the literature review and observations, it was possible to construct a model that supports approaching open innovation. The model integrates perspectives from multiple domains of the literature review, observations inside the case company, and factors from the six open innovation cases. It provides steps, guidelines, and tools to approach open innovation and to assess the selection of modes. Measuring the impact of open innovation could take years; thus, implementing and testing the model in its entirety was not possible due to time limitations. Nevertheless, it was possible to validate the core elements of the model with empirical data gathered from the cases. In addition to constructing the model, this research contributed to the literature by increasing the understanding of open innovation, providing suggestions to the case company, and proposing future steps.
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis, a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase. Here, the differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure. The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process, the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures and their optimal parameters have been found, the resulting distances are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the previous method, the differential evolution classifier. All these DE classifiers demonstrated good results in the classification tasks.
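A bare-bones sketch of the underlying idea, assuming scipy's differential evolution, the Iris data set, and a single Minkowski exponent standing in for the pool of distance parameters; it illustrates the nearest-prototype-plus-DE principle only, not the six generalized variants developed in the thesis.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
n_classes, n_feat = len(np.unique(y)), X.shape[1]

def decode(params):
    """Split the DE parameter vector into class prototypes and a distance parameter."""
    protos = params[:-1].reshape(n_classes, n_feat)
    p = params[-1]                                   # Minkowski exponent (distance parameter)
    return protos, p

def accuracy(params):
    protos, p = decode(params)
    # p-th power of the Minkowski distance; the 1/p root is omitted because
    # it does not change which prototype is nearest.
    d = (np.abs(X[:, None, :] - protos[None, :, :]) ** p).sum(axis=2)
    pred = d.argmin(axis=1)                          # nearest prototype vector
    return (pred == y).mean()

# DE searches prototype coordinates within the data range and p in [1, 4].
bounds = [(X.min(), X.max())] * (n_classes * n_feat) + [(1.0, 4.0)]
res = differential_evolution(lambda q: -accuracy(q), bounds, seed=0,
                             maxiter=100, tol=1e-6)
print("training accuracy:", accuracy(res.x))
```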