58 results for clustering, free-form, ottimizzazione, remeshing
Abstract:
This project deals with the generation of profitability and the distribution of its benefits. Inspired by Davis (1947, 1955), we define profitability as the ratio of revenue to cost. Profitability is not as popular a measure of business financial performance as profit, the difference between revenue and cost. Regardless of its popularity, however, profitability is surely a useful financial performance measure. Our primary objective in this project is to identify the factors that generate change in profitability. One set of factors, which we refer to as sources, consists of changes in quantities and prices of outputs and inputs. Individual quantity changes aggregate to the overall impact of quantity change on profitability change, which we call productivity change. Individual price changes aggregate to the overall impact of price change on profitability change, which we call price recovery change. In this framework profitability change consists exclusively of productivity change and price recovery change. A second set of factors, which we refer to as drivers, consists of phenomena such as technical change, change in the efficiency of resource allocation, and the impact of economies of scale. The ability of management to harness these factors drives productivity change, which is one component of profitability change. Thus the term sources refers to quantities and prices of individual outputs and inputs, whose changes influence productivity change or price recovery change, either of which influences profitability change. The term drivers refers to phenomena related to technology and management that influence productivity change (but not price recovery change), and hence profitability change.
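In the single-output, single-input case the decomposition described above is exact and easy to verify numerically. The sketch below uses purely hypothetical prices and quantities (not figures from the project) to show profitability change splitting into productivity change times price recovery change.

```python
# Decomposition of profitability change into productivity change and
# price recovery change, for one output (price p, quantity y) and one
# input (price w, quantity x). Profitability is revenue over cost.
# All figures are hypothetical, for illustration only.

def profitability(p, y, w, x):
    return (p * y) / (w * x)

# Period 0 and period 1 prices and quantities (hypothetical).
p0, y0, w0, x0 = 10.0, 100.0, 5.0, 80.0
p1, y1, w1, x1 = 11.0, 120.0, 6.0, 88.0

profitability_change = profitability(p1, y1, w1, x1) / profitability(p0, y0, w0, x0)

# Productivity change: output quantity growth relative to input quantity growth.
productivity_change = (y1 / y0) / (x1 / x0)

# Price recovery change: output price growth relative to input price growth.
price_recovery_change = (p1 / p0) / (w1 / w0)

# The two components multiply back to the overall profitability change.
assert abs(profitability_change - productivity_change * price_recovery_change) < 1e-12
```

With many outputs and inputs the individual quantity and price changes are aggregated into index numbers, but the multiplicative structure is the same.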
Abstract:
The decision to publish educational materials openly and under free licenses raises the challenge of doing so in a sustainable way. Some lessons can be learned from the business models for the production, maintenance and distribution of Free and Open Source Software. The Free Technology Academy (FTA) has taken on these challenges and has implemented some of these models. We briefly review the FTA educational programme, methodologies and organisation, and assess to what extent these models are proving successful in the case of the FTA.
Abstract:
Open educational resource (OER) initiatives have shifted from being a fringe activity to one increasingly considered a key component of both teaching and learning in higher education and of the fulfilment of universities' mission and goals. Although the reduction in the cost of materials is often cited as a potential benefit of OER, this benefit has not yet been realised in practice, necessitating thoughtful consideration of various strategies for new OER initiatives such as the OpenContent directory at the University of Cape Town (UCT) in South Africa. This paper reviews the range of sustainability strategies mentioned in the literature, plots the results of a small-scale OER sustainability survey against these strategies, and explains how these findings and other papers on OER initiatives were used to inform an in-house workshop at UCT to deliberate the future strategy for the sustainability of OER at UCT.
Abstract:
In image segmentation, clustering algorithms are very popular because they are intuitive and, in some cases, easy to implement. For instance, k-means is one of the most widely used in the literature, and many authors successfully compare their new proposals with the results achieved by k-means. However, clustering-based image segmentation has well-known problems. For instance, the number of regions of the image has to be known a priori, and different initial seed placements (initial clusters) can produce different segmentation results. Most of these algorithms can be slightly improved by considering the coordinates of the image as features in the clustering process (to take spatial region information into account). In this paper we propose a significant improvement of clustering algorithms for image segmentation. The method is qualitatively and quantitatively evaluated over a set of synthetic and real images, and compared with classical clustering approaches. Results demonstrate the validity of this new approach.
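The idea of adding pixel coordinates as clustering features can be sketched as follows. This is a minimal illustration with hypothetical data and a plain NumPy k-means, not the authors' method: each pixel is described by its intensity plus its scaled (row, col) position, so spatially coherent regions are favoured.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 "image": dark left half, bright right half, plus noise.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
img += rng.normal(0.0, 0.05, img.shape)

# One feature vector per pixel: (intensity, row, col). The coordinates are
# scaled down so spatial closeness counts, but intensity still dominates.
rows, cols = np.indices(img.shape)
features = np.stack([img.ravel(), 0.05 * rows.ravel(), 0.05 * cols.ravel()], axis=1)

def kmeans(X, k, iters=20):
    # Deterministic seeding for this sketch (k = 2): start from the
    # darkest and brightest pixels instead of random seeds.
    centers = X[[X[:, 0].argmin(), X[:, 0].argmax()]].copy()
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

labels = kmeans(features, k=2).reshape(img.shape)
```

With the coordinate features included, the two recovered regions coincide with the two spatial halves of the toy image.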
Abstract:
Our purpose is to provide a set-theoretical framework for clustering fuzzy relational data, based on the cardinality of the fuzzy subsets that represent objects and their complements, without applying any crisp property. From this perspective we define a family of fuzzy similarity indexes that includes a set of fuzzy indexes introduced by Tolias et al., and we analyse under which conditions a fuzzy proximity relation is defined. Following an original idea due to S. Miyamoto, we evaluate the similarity between objects and features by means of the same mathematical procedure. Combining these concepts and methods, we establish an algorithm for clustering fuzzy relational data. Finally, we present an example to clarify the whole process.
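One concrete member of the kind of family described above is the fuzzy Jaccard index, built purely from sigma-counts (cardinalities) of fuzzy intersections and unions. This is a hedged illustration of a cardinality-based similarity index, not necessarily the specific family defined in the text.

```python
# Fuzzy similarity from cardinalities only: |A ∩ B| / |A ∪ B|, where the
# intersection is the pointwise min, the union the pointwise max, and the
# cardinality is the sigma-count (sum of membership degrees).
# Illustrative example, not the specific index family of the abstract.

def sigma_count(memberships):
    return sum(memberships)

def fuzzy_jaccard(a, b):
    inter = sigma_count(min(x, y) for x, y in zip(a, b))
    union = sigma_count(max(x, y) for x, y in zip(a, b))
    return inter / union if union else 1.0

# Membership degrees of two hypothetical fuzzy subsets over four elements.
A = [0.9, 0.4, 0.0, 0.7]
B = [0.8, 0.5, 0.1, 0.6]
sim = fuzzy_jaccard(A, B)  # ≈ 0.818: |min| = 1.8, |max| = 2.2
```

No crisp thresholding is involved anywhere: the index is computed directly from the membership degrees.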
Abstract:
Study, design and implementation of different fibre clustering techniques, in order to integrate into the DTIWeb platform various clustering algorithms and fibre-cluster visualization techniques, so as to make DTI data easier for specialists to interpret.
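As an illustration of the kind of fibre clustering involved (a hypothetical sketch, not the algorithms integrated into DTIWeb), streamlines can be compared with the mean direct-flip distance and grouped greedily against cluster representatives, in the spirit of QuickBundles-style methods.

```python
import numpy as np

def mdf(a, b):
    """Mean direct-flip distance between two streamlines with the same
    number of points: the smaller of the mean point-to-point distance
    taken directly and with one streamline reversed."""
    direct = np.linalg.norm(a - b, axis=1).mean()
    flipped = np.linalg.norm(a - b[::-1], axis=1).mean()
    return min(direct, flipped)

def threshold_cluster(streamlines, thr):
    """Greedy grouping: assign each streamline to the first cluster whose
    representative is within thr, otherwise start a new cluster."""
    reps, labels = [], []
    for s in streamlines:
        for i, r in enumerate(reps):
            if mdf(s, r) < thr:
                labels.append(i)
                break
        else:
            reps.append(s)
            labels.append(len(reps) - 1)
    return labels

# Two hypothetical bundles of straight 3D fibres, well separated in z.
t = np.linspace(0.0, 1.0, 20)
bundle1 = [np.stack([t, t * 0 + off, t * 0], 1) for off in (0.0, 0.1, 0.2)]
bundle2 = [np.stack([t, t * 0 + off, t * 0 + 5.0], 1) for off in (0.0, 0.1)]
labels = threshold_cluster(bundle1 + bundle2, thr=1.0)  # [0, 0, 0, 1, 1]
```

The flip test makes the distance insensitive to the arbitrary orientation in which a tractography streamline is stored.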
Abstract:
HEMOLIA (a project under the European Community's 7th Framework Programme) is a new-generation Anti-Money Laundering (AML) intelligent multi-agent alert and investigation system which, in addition to traditional financial data, makes extensive use of modern society's huge telecom data source, thereby opening up a new dimension of capabilities to all money laundering fighters (FIUs, LEAs) and financial institutions (banks, insurance companies, etc.). This Master's thesis project was carried out at AIA, one of the partners of the HEMOLIA project in Barcelona. The objective of this thesis is to find the clusters in a network drawn from the financial data. An extensive literature survey has been carried out, and several standard network algorithms have been studied and implemented. The clustering problem is NP-hard, and algorithms such as K-Means and hierarchical clustering have been applied to problems in sociology, evolution, anthropology, etc. However, these algorithms have certain drawbacks which make them difficult to apply. The thesis proposes (a) a possible improvement to the K-Means algorithm, (b) a novel approach to the clustering problem using genetic algorithms and (c) a new algorithm for finding the cluster of a node using a genetic algorithm.
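A genetic-algorithm approach to network clustering can be sketched as follows. This is a toy illustration, not the thesis algorithms: chromosomes encode one cluster label per node, and fitness counts the node pairs the partition "gets right" (linked pairs placed together, unlinked pairs placed apart), a correlation-clustering style objective that avoids the trivial all-in-one-cluster solution.

```python
import itertools
import random

random.seed(0)

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the edge (2, 3).
edges = {(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)}
n, k = 6, 2

def fitness(labels):
    # Reward linked pairs in the same cluster and unlinked pairs in
    # different clusters; the optimum here is the two-triangle split.
    score = 0
    for i, j in itertools.combinations(range(n), 2):
        linked = (i, j) in edges
        same = labels[i] == labels[j]
        if linked == same:
            score += 1
    return score

def mutate(labels):
    # Mutation: reassign one random node to a random cluster.
    child = list(labels)
    child[random.randrange(n)] = random.randrange(k)
    return child

# Elitist GA: keep the top half, refill with mutated copies of survivors.
population = [[random.randrange(k) for _ in range(n)] for _ in range(20)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(10)]

best = max(population, key=fitness)
```

On this 6-node graph the maximum fitness (14 of 15 pairs; the bridge edge is necessarily "wrong") is reached only by the partition into the two triangles.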
Abstract:
In the eighties, John Aitchison (1986) developed a new methodological approach for the statistical analysis of compositional data. This new methodology was implemented in BASIC routines grouped under the name CODA, and later as NEWCODA in Matlab (Aitchison, 1997). Since then, several other authors have published extensions to this methodology: Martín-Fernández and others (2000), Barceló-Vidal and others (2001), Pawlowsky-Glahn and Egozcue (2001, 2002) and Egozcue and others (2003). (...)
Abstract:
Our essay aims at studying suitable statistical methods for the clustering of compositional data in situations where observations are constituted by trajectories of compositional data, that is, by sequences of composition measurements along a domain. Observed trajectories are known as "functional data", and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA techniques to the analysis of compositional data, FCA clustering techniques have to be adapted by using a suitable compositional algebra. The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below. First of all, we adapt the well-known spline smoothing techniques to cope with the smoothing of compositional data trajectories. In fact, an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors. Spline smoothing techniques are used to isolate the smooth part of the trajectory; clustering algorithms are then applied to these smooth curves. The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose a metric that accounts for differences in both shape and level, and a metric accounting for differences in shape only. A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the obtained results is assessed by means of several indices.
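The standard compositional algebra in this setting is Aitchison geometry. As a minimal sketch (not the thesis implementation), the dissimilarity between two individual compositions can be measured as the Euclidean distance between their centred log-ratio (clr) transforms; a trajectory metric then aggregates such distances along the domain.

```python
import math

def clr(composition):
    """Centred log-ratio transform: log of each part over the geometric
    mean of all parts. Parts must be strictly positive."""
    g = math.exp(sum(math.log(x) for x in composition) / len(composition))
    return [math.log(x / g) for x in composition]

def aitchison_distance(a, b):
    """Euclidean distance between clr-transformed compositions."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(clr(a), clr(b))))

x = [0.2, 0.3, 0.5]
y = [0.1, 0.4, 0.5]
d = aitchison_distance(x, y)
```

A key property is scale invariance: `aitchison_distance(x, [2, 3, 5])` is (numerically) zero, because compositions carry only relative information, which is exactly why ordinary Euclidean distance on the raw parts is unsuitable.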
Abstract:
Globalization involves several facility location problems that need to be handled at large scale. Location Allocation (LA) is a combinatorial problem in which the distances among points in the data space matter. Taking advantage of this distance property of the domain, we exploit the capability of clustering techniques to partition the data space, converting an initial large LA problem into several simpler LA problems. In particular, our motivating problem involves a huge geographical area that can be partitioned under overall conditions. We present different types of clustering techniques and then perform a cluster analysis over our dataset in order to partition it. After that, we solve the LA problem by applying a simulated annealing algorithm to both the clustered and the non-clustered data, in order to work out how profitable the clustering is and which of the presented methods is the most suitable.
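The simulated-annealing step can be sketched on a toy location-allocation instance (hypothetical one-dimensional demand points, two facilities; the clustering pre-partition is omitted). Each demand point is allocated to its nearest facility, and facility positions are perturbed with a cooling acceptance rule.

```python
import math
import random

random.seed(1)

# Hypothetical demand points on a line, forming two clear groups,
# and p = 2 facilities to place.
points = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
p = 2

def cost(facilities):
    # Allocation cost: each demand point is served by its nearest facility.
    return sum(min(abs(pt - f) for f in facilities) for pt in points)

# Simulated annealing over continuous facility positions.
state = [random.uniform(0.0, 12.0) for _ in range(p)]
best, best_cost = list(state), cost(state)
temperature = 5.0
for step in range(2000):
    candidate = list(state)
    candidate[random.randrange(p)] += random.gauss(0.0, 1.0)
    delta = cost(candidate) - cost(state)
    # Accept improvements always; accept worse moves with Boltzmann
    # probability, which shrinks as the temperature cools.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        state = candidate
        if cost(state) < best_cost:
            best, best_cost = list(state), cost(state)
    temperature *= 0.995
```

For this instance the optimum places one facility near each group (total cost 4); any solution serving both groups from one side costs far more, which is what makes a good spatial pre-partition so valuable on large instances.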
Abstract:
This paper describes systematic research into free software solutions and techniques for the problem of computer recognition of art imagery.
Abstract:
This study engages with the debate over the mortality crises in the former Soviet Union and Central and Eastern Europe by 1) considering at length, and as complementary to each other, the two most prominent explanations for the post-communist mortality crisis, stress and alcohol consumption; and 2) emphasizing the importance of context by exploiting systematic similarities and differences across the region. Differential mortality trajectories reveal three country groups that cluster both spatially and in terms of economic transition experiences. The first group consists of the countries furthest west, in which mortality rates increased minimally after the transition began. The second group experienced a severe increase in mortality rates in the early 1990s but recovered previous levels within a few years; these countries are located peripherally to Russia and its nearest neighbours. The final group consists of countries that experienced two mortality increases, or in which mortality levels had not recovered to pre-transition levels well into the 21st century. Cross-sectional time-series analyses of men's and women's age- and cause-specific death rates reveal that the clustering of these countries and their mortality trajectories can be partially explained by the economic context, which is argued to be linked to stress and alcohol consumption. Above and beyond many basic differences in the country groups that are held constant, including geographically and historically shared cultural, lifestyle and social characteristics, poor economic conditions account for a remarkably consistent share of excess age-specific and cause-specific deaths.
Abstract:
The availability of induced pluripotent stem cells (iPSCs) has created extraordinary opportunities for modeling and perhaps treating human disease. However, all reprogramming protocols used to date involve the use of products of animal origin. Here, we set out to develop a protocol to generate and maintain human iPSCs that would be entirely devoid of xenobiotics. We first developed a xeno-free cell culture medium that supported the long-term propagation of human embryonic stem cells (hESCs) to a similar extent as conventional media containing products of animal origin or commercially available xeno-free medium. We also derived primary cultures of human dermal fibroblasts under strict xeno-free conditions (XF-HFF), and we show that they can be used both as the cell source for iPSC generation and as autologous feeder cells to support their growth. We also replaced other reagents of animal origin (trypsin, gelatin, Matrigel) with their recombinant equivalents. Finally, we used vesicular stomatitis virus G-pseudotyped retroviral particles expressing a polycistronic construct encoding Oct4, Sox2, Klf4, and GFP to reprogram XF-HFF cells under xeno-free conditions. A total of 10 xeno-free human iPSC lines were generated, which could be continuously passaged in xeno-free conditions and maintained characteristics indistinguishable from hESCs, including colony morphology and growth behavior, expression of pluripotency-associated markers, and pluripotent differentiation ability in vitro and in teratoma assays. Overall, the results presented here demonstrate that human iPSCs can be generated and maintained under strict xeno-free conditions and provide a path to good manufacturing practice (GMP) applicability that should facilitate the clinical translation of iPSC-based therapies.