881 results for network performance


Relevance: 30.00%

Abstract:

Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. The human (animal) brain, on the other hand, performs both low and high orders of learning and readily identifies patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also pattern formation is referred to here as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by extracting features of the underlying network constructed from the input data. Thus, the former classifies test instances by their physical features or class topologies, while the latter measures the compliance of test instances with the pattern formation of the data. Our study shows that the proposed technique not only can classify according to pattern formation, but also improves the performance of traditional classification techniques. Furthermore, as the complexity of the class configuration increases, for example through greater mixing among classes, a larger weight on the high level term is required for correct classification. This confirms that high level classification is especially important in complex classification settings. Finally, we show how the proposed technique can be employed in a real-world application, where it identifies variations and distortions of handwritten digit images and thereby improves the overall pattern recognition rate.
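A minimal sketch of how such a hybrid decision rule can be combined, assuming a kNN classifier for the low level term and the change in a per-class kNN-graph clustering coefficient as the high level conformity term (these concrete choices, and the mixing weight lam, are illustrative assumptions, not the paper's exact formulation):

```python
# Hypothetical sketch of a hybrid low/high level classifier.
# Assumptions (not from the paper): kNN posteriors as the low level term,
# and the change in average clustering of a per-class kNN graph as the
# high level "pattern conformity" term; lam mixes the two terms.
import numpy as np
import networkx as nx
from sklearn.neighbors import KNeighborsClassifier, kneighbors_graph

def class_graph(X, k):
    # kNN graph of one class's training points (assumes > k samples per class)
    return nx.from_scipy_sparse_array(kneighbors_graph(X, n_neighbors=k))

def high_level_scores(X_train, y_train, x, k):
    scores = []
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        before = nx.average_clustering(class_graph(Xc, k))
        after = nx.average_clustering(class_graph(np.vstack([Xc, x]), k))
        # a small structural change means x conforms to this class's pattern
        scores.append(1.0 - abs(after - before))
    s = np.array(scores)
    return s / s.sum()

def hybrid_predict(X_train, y_train, x, lam=0.3, k=3):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    low = knn.predict_proba([x])[0]                    # low level term
    high = high_level_scores(X_train, y_train, x, k)   # high level term
    return np.unique(y_train)[np.argmax((1 - lam) * low + lam * high)]
```

Raising lam gives the conformity term more weight, mirroring the observation above that more strongly mixed class configurations require a larger high level portion.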

Relevance: 30.00%

Abstract:

The concept of industrial clustering has been studied in depth by policy makers and researchers from many fields, mainly because of the competitive advantages it may bring to regional economies. Companies often take part in collaborative initiatives with local partners while also taking advantage of knowledge spillovers, benefiting from their location in a cluster. Knowledge Management (KM) and Performance Management (PM) have thus become relevant topics for policy makers and cluster associations undertaking collaborative initiatives. Taking this into account, this paper explores the interplay between the two topics using a case study conducted in a collaborative network formed within a cluster. The results show that KM should be acknowledged as a formal area of cluster management so that PM practices can support knowledge-oriented initiatives and therefore make better use of the new knowledge created. Furthermore, the tacit and explicit knowledge resulting from PM practices needs to be stored and disseminated throughout the cluster as a way of improving managerial practices and regional strategic direction. Knowledge Management Research & Practice (2012) 10, 368-379. doi:10.1057/kmrp.2012.23

Relevance: 30.00%

Abstract:

A neural network model to predict ozone concentration in the Sao Paulo Metropolitan Area was developed, based on average values of meteorological variables in the morning (8:00-12:00 hr) and afternoon (13:00-17:00 hr) periods. Outputs are the maximum and average ozone concentrations in the afternoon (12:00-17:00 hr). The correlation coefficients between computed and measured values were 0.82 and 0.88 for the maximum and average ozone concentration, respectively. The model performed well as a prediction tool for the maximum ozone concentration: for prediction horizons of 1 to 5 days, failure rates of 0 to 23% (95% confidence) were obtained.
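A hedged sketch of this kind of model, assuming scikit-learn's MLPRegressor; the placeholder arrays stand in for the meteorological averages and measured ozone values, which are not given in the abstract:

```python
# Illustrative feed-forward model of the kind described above; the network
# size, split and data are assumptions, not the paper's actual setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(500, 6)   # morning/afternoon meteorological averages
y = np.random.rand(500, 2)   # [max ozone, mean ozone] in the afternoon

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                   random_state=0)).fit(X_tr, y_tr)
# correlation between computed and measured maxima (cf. r = 0.82 above)
r = np.corrcoef(model.predict(X_te)[:, 0], y_te[:, 0])[0, 1]
```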

Relevance: 30.00%

Abstract:

The Brazilian network for genotyping is composed of 21 laboratories that perform and analyze genotyping tests for all HIV-infected patients within the public system, performing approximately 25,000 tests per year. We assessed the interlaboratory and intralaboratory reproducibility of genotyping systems by creating and implementing a local external quality control evaluation. Plasma samples from HIV-1-infected individuals (with low and intermediate viral loads) or RNA viral constructs with specific mutations were used. This evaluation included analyses of sensitivity and specificity of the tests based on qualitative and quantitative criteria, which scored laboratory performance on a 100-point system. Five evaluations were performed from 2003 to 2008, with 64% of laboratories scoring over 80 points in 2003, 81% doing so in 2005, 56% in 2006, 91% in 2007, and 90% in 2008 (Kruskal-Wallis, p = 0.003). Increased performance was aided by retraining laboratories that had specific deficiencies. The results emphasize the importance of investing in laboratory training and interpretation of DNA sequencing results, especially in developing countries where public (or scarce) resources are used to manage the AIDS epidemic.

Relevance: 30.00%

Abstract:

The need for high bandwidth, driven by the explosion of new multimedia-oriented IP-based services and by increasing broadband access requirements, is leading to flexible and highly reconfigurable optical networks. While transmission bandwidth is not a limit, thanks to the huge bandwidth provided by optical fibers and Dense Wavelength Division Multiplexing (DWDM) technology, the electronic switching nodes in the core of the network are the bottleneck in terms of speed and capacity for the overall network. For this reason DWDM technology must be exploited not only for data transport but also for switching operations. This Ph.D. thesis proposes solutions for photonic packet switches, a flexible alternative to circuit-switched optical networks. In particular, it proposes solutions based on devices and components that are expected to mature in the near future, with the aim of limiting the use of complex components. The work presented here is the result of part of the research activities performed by the Networks Research Group at the Department of Electronics, Computer Science and Systems (DEIS) of the University of Bologna, Italy. In particular, the work on optical packet switching was carried out within three research projects: the e-Photon/ONe and e-Photon/ONe+ projects, funded by the European Union under the Sixth Framework Programme, and the national project OSATE, funded by the Italian Ministry of Education, University and Scientific Research. The rest of the work is organized as follows. Chapter 1 gives a brief introduction to the network context and to contention resolution in photonic packet switches. Chapter 2 presents different strategies for contention resolution in the wavelength domain. Chapter 3 illustrates a possible implementation of one of the schemes proposed in chapter 2. Chapter 4 presents multi-fiber switches, which jointly employ the wavelength and space domains to resolve contention. Chapter 5 covers buffered switches, which resolve contention in the time domain in addition to the wavelength domain. Finally, chapter 6 presents a cost model to compare the different switch architectures in terms of cost.
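As a toy illustration of contention resolution in the wavelength domain (the topic of chapter 2), the sketch below assumes full-range tunable wavelength converters at one output fibre and drops packets when all wavelengths are busy; the parameters are invented:

```python
# Toy model of wavelength-domain contention resolution at one output fibre:
# tunable converters shift each arriving packet to any free wavelength;
# when all W wavelengths are busy, the packet is dropped.
import random

W = 8  # wavelengths per output fibre (illustrative)

def schedule_slot(n_arrivals):
    free = list(range(W))
    assignment, dropped = {}, 0
    for pkt in range(n_arrivals):
        if free:
            assignment[pkt] = free.pop()   # wavelength conversion
        else:
            dropped += 1                   # contention loss
    return assignment, dropped

random.seed(0)
drops = sum(schedule_slot(random.randint(0, 14))[1] for _ in range(10_000))
print("packets dropped over 10,000 slots:", drops)
```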

Relevance: 30.00%

Abstract:

Clusters have increasingly become an essential part of policy discourses at all levels (EU, national, regional) dealing with regional development, competitiveness, innovation, entrepreneurship, and SMEs. These impressive efforts to promote the cluster concept in the policy-making arena have been accompanied by much less academic research investigating the actual economic performance of firms in clusters and the design and execution of cluster policies, or going beyond singular case studies to a more methodologically integrated and comparative approach to the study of clusters and their real-world impact. The theoretical background is far from consolidated, there is a variety of methodologies and approaches for studying and interpreting the phenomenon, and there is little comparability among studies of actual cluster performance. The conceptual framework of clustering suggests that clusters affect performance, but theory makes little prediction as to the ultimate distribution of the value they create. This thesis takes the case of Eastern European countries for two reasons. One is that clusters, as coopetitive environments, are a new phenomenon there, as the previous centrally planned system did not allow for such forms of firm organization. The other is that, as new EU member states, they have been subject to the increased popularization of the cluster policy approach by the European Commission, especially in the framework of the National Reform Programmes related to the Lisbon objectives. The originality of the work lies in the fact that, starting from an overview of theoretical contributions on clustering, it offers a comparative empirical study of clusters in transition countries; there have been very few attempts in the literature to examine cluster performance in a comparative cross-country perspective. It adds an analysis of cluster policies and their implementation, or lack thereof, as a way to analyse how the cluster concept has been introduced to transition economies. Our findings show that the implementation of cluster policies does vary across countries, with some embracing it more than others. The specific modes of implementation, however, are very similar, based mostly on soft measures such as funding for cluster initiatives, usually directed towards the creation of cluster management structures or cluster facilitators. They are essentially founded on a common assumption that the added value of clusters lies in the creation of linkages among firms, human capital, skills and knowledge at the local level, most often perceived as the regional level. Oftentimes geographical proximity is not a necessary element in the application process, and cluster applications are very similar to network membership. Cluster mapping is rarely a factor in the selection of cluster initiatives for funding, and the related question of critical mass and expected outcomes is not considered. In fact, monitoring and evaluation are not elements of the cluster policy cycle that have received much attention. Bulgaria and the Czech Republic are the countries that have implemented cluster policies most decisively, Hungary and Poland have made significant efforts, while Slovakia and Romania have used cluster initiatives only sporadically and unsystematically.
When examining whether firms located within regional clusters in fact perform better and are more efficient than similar firms outside clusters, we do find positive results across countries and across sectors. The only country where being located in a cluster has a negative impact is the Czech Republic.

Relevance: 30.00%

Abstract:

In this thesis the use of wide-field imaging techniques and VLBI observations with a limited number of antennas is explored. I present techniques to efficiently and accurately image extremely large UV datasets. Very large VLBI datasets must be reduced into multiple, smaller datasets if today’s imaging algorithms are to be used on them. I present a procedure for accurately shifting the phase centre of a visibility dataset. This procedure has been thoroughly tested and found to be almost two orders of magnitude more accurate than existing techniques, with errors at the level of one part in 1.1 million; these are unlikely to be measurable except in the very largest UV datasets. Results of a four-station VLBI observation of a field containing multiple sources are presented. A 13-gigapixel image was constructed to search for sources across the entire primary beam of the array by generating over 700 smaller UV datasets. The source 1320+299A was detected, and its astrometric position with respect to the calibrator J1329+3154 is presented. Various techniques for phase calibration and imaging across this field are explored, including using the detected source as an in-beam calibrator and peeling distant confusing sources from VLBI visibility datasets. A range of issues pertaining to wide-field VLBI is explored, including: parameterising the wide-field performance of VLBI arrays; estimating the sensitivity across the primary beam for both homogeneous and heterogeneous arrays; applying techniques such as mosaicing and primary beam correction to VLBI observations; quantifying the effects of time-average and bandwidth smearing; and calibration and imaging of wide-field VLBI datasets. The performance of a computer cluster at the Istituto di Radioastronomia in Bologna has been characterised with regard to its ability to correlate using the DiFX software correlator. Using existing software it was possible to characterise the network speed, particularly for MPI applications. The capabilities of the DiFX software correlator running on this cluster were measured for a range of observation parameters and shown to be commensurate with the generic performance parameters measured. The feasibility of an Italian VLBI array is explored, with discussion of the infrastructure required, the performance of such an array, possible collaborations, and the science that could be achieved. Results from a 22 GHz calibrator survey are also presented: 21 out of 33 sources were detected on a single baseline between two Italian antennas (Medicina to Noto). The results and discussions presented in this thesis suggest that wide-field VLBI is a technique whose time has finally come. Prospects for exciting new science are discussed in the final chapter.
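The standard relation underlying such a phase-centre shift multiplies each visibility by a phase term built from its (u, v, w) coordinates; a minimal sketch follows (sign conventions vary between packages, and the accuracy-critical regridding details of the thesis's procedure are not reproduced here):

```python
# Minimal sketch of the standard phase rotation behind a phase-centre shift:
# each visibility is multiplied by exp(-2*pi*i*(u*l + v*m + w*(n-1))).
import numpy as np

def shift_phase_centre(vis, uvw, l, m):
    """vis: complex visibilities; uvw: (N, 3) array in wavelengths;
    (l, m): direction cosines of the new phase centre."""
    n = np.sqrt(1.0 - l**2 - m**2)
    u, v, w = uvw.T
    return vis * np.exp(-2.0j * np.pi * (u * l + v * m + w * (n - 1.0)))
```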

Relevance: 30.00%

Abstract:

Multi-Processor SoC (MPSoC) design brings a large number of challenges to the foreground, one of the most prominent being the design of the chip interconnection. With the number of on-chip blocks presently in the tens, and quickly approaching the hundreds, the issue of how best to provide on-chip communication resources is clearly felt. The scaling down of process technologies has increased process and dynamic variations as well as transistor wearout. Because of this, delay variations increase and impact the performance of MPSoCs. The interconnect architecture in MPSoCs becomes a single point of failure, as it connects all other components of the system. A faulty processing element may be shut down entirely, but the interconnect architecture must be able to tolerate partial failure and variations and keep operating, at some performance, power or latency overhead. This dissertation focuses on techniques at different levels of abstraction to address reliability and variability issues in on-chip interconnection networks. By showing the test results of a GALS NoC test chip, this dissertation motivates the need for techniques to detect and work around manufacturing faults and process variations in the MPSoC interconnection infrastructure. As a physical design technique, we propose the bundle routing framework as an effective way to route the global links of Networks-on-Chip. At the architecture level, two cases are addressed: (i) intra-cluster communication, for which we propose a low-latency interconnect with robustness to variability; (ii) inter-cluster communication, for which online functional testing with a reliable NoC configuration is proposed. We also propose dual-Vdd as an orthogonal way of compensating variability at the post-fabrication stage. This is an alternative strategy with respect to the design techniques, since it enforces compensation at the post-silicon stage.
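As a toy illustration of the kind of fault tolerance required, the sketch below routes around links that online testing has flagged as faulty in a small 2D-mesh NoC; the mesh size and fault set are invented, not the dissertation's configuration:

```python
# Toy illustration of tolerating partial interconnect failure: links flagged
# as faulty by online testing are removed from a 2D-mesh NoC, and packets
# take the shortest remaining path (a latency-overhead detour).
import networkx as nx

mesh = nx.grid_2d_graph(4, 4)                   # 4x4 mesh (illustrative)
faulty = [((1, 1), (1, 2)), ((2, 1), (2, 2))]   # links flagged as faulty
mesh.remove_edges_from(faulty)

print(nx.shortest_path(mesh, (0, 0), (3, 3)))   # route around the faults
```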

Relevance: 30.00%

Abstract:

The evaluation of the structural performance of existing concrete buildings, built according to standards and materials quite different from those available today, requires procedures and methods able to compensate for the lack of data about mechanical material properties and reinforcement detailing. To this end, detailed inspections and tests on materials are required, including tests on drilled cores; on the other hand, non-destructive testing (NDT) cannot be used as the only means of obtaining structural information, but can be used in conjunction with destructive testing (DT) through a representative correlation between DT and NDT. The aim of this study is to verify the accuracy of some correlation formulas available in the literature between the measured parameters, i.e. rebound index, ultrasonic pulse velocity and compressive strength (the SonReb method). To this end, a large number of DT and NDT tests were performed on several school buildings located in Cesena (Italy), and the above relationships were assessed on site by correlating NDT results to the strength of cores drilled in adjacent locations. Concrete compressive strength assessed by means of NDT methods and correlation formulas has the advantage of being much simpler to implement and reuse in future applications than other methods, even if its accuracy is strictly limited to concretes having the same characteristics as those used for calibration. This limitation warranted a search for a different evaluation method for the non-destructive parameters obtained on site. To this aim, a methodology for neural identification of compressive strength is presented. Artificial Neural Networks (ANNs) suitable for this specific analysis were chosen taking into account the developments presented in the literature in this field. The networks were trained and tested in order to arrive at a more reliable strength identification methodology.
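A hedged sketch of calibrating a SonReb-type correlation of the common power-law form S = a·R^b·V^c against core strengths (the functional form is standard in the literature; the data below are placeholders, not the Cesena measurements):

```python
# Sketch of fitting a SonReb-type power law S = a * R^b * V^c by linear
# least squares on logs; the arrays are placeholders, not the study's data.
import numpy as np

R = np.array([30.0, 35.0, 40.0, 45.0, 50.0])       # rebound index
V = np.array([3500., 3800., 4000., 4200., 4400.])  # pulse velocity (m/s)
S = np.array([18.0, 24.0, 30.0, 37.0, 45.0])       # core strength (MPa)

# linearise: log S = log a + b log R + c log V
A = np.column_stack([np.ones_like(R), np.log(R), np.log(V)])
coef, *_ = np.linalg.lstsq(A, np.log(S), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(f"S ≈ {a:.3e} * R^{b:.2f} * V^{c:.2f}")
```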

Relevance: 30.00%

Abstract:

Five different methods were critically examined to characterize the pore structure of silica monoliths. The mesopore characterization was performed using: a) the classical BJH method on nitrogen sorption data, which overestimated the mesopore distribution and was improved by using the NLDFT method; b) the ISEC method implementing the PPM and PNM models, developed especially for monolithic silicas, which, contrary to particulate supports, show two inflection points in the ISEC curve, enabling the calculation of pore connectivity, a measure of the mass transfer kinetics in the mesopore network; c) mercury porosimetry, using newly recommended mercury contact angle values.

The results of the characterization of the mesopores of monolithic silica columns by the three methods indicated that all methods were useful with respect to the volume-based pore size distribution, but only the ISEC method with the PPM and PNM models gave the number-averaged pore size and distribution and the pore connectivity values.

The characterization of the flow-through pores was performed by two different methods: a) mercury porosimetry, which was used not only to estimate the average flow-through pore value but also to assess entrapment; it was found that mass transfer from the flow-through pores to the mesopores was not hindered in the case of small flow-through pores with a narrow distribution; b) liquid permeation, where the average flow-through pore values were obtained via existing equations and improved by additional methods developed according to Hagen-Poiseuille rules. The result was that it is not the flow-through pore size that governs the column back pressure; the surface-area-to-volume ratio of the silica skeleton is most decisive. Thus the monolith with the lowest ratio values will be the most permeable.

The flow-through pore characterization results obtained by mercury porosimetry and liquid permeability were compared with those from imaging and image analysis. All of these methods enable a reliable characterization of the flow-through pore diameters of monolithic silica columns, but special care should be taken with the chosen theoretical model.

The measured pore characterization parameters were then linked with the mass transfer properties of monolithic silica columns. As indicated by the ISEC results, no restrictions in mass transfer resistance were noticed in the mesopores, owing to their high connectivity. The mercury porosimetry results also gave evidence that no restrictions on mass transfer from flow-through pores to mesopores occur in small-scaled silica monoliths with a narrow distribution.

The optimum regimes of the pore structural parameters for given target parameters in HPLC separations were predicted. A low mass transfer resistance in the mesopore volume is achieved when the nominal diameter of the number-averaged mesopore size distribution is approximately an order of magnitude larger than the molecular radius of the analyte. The effective diffusion coefficient of an analyte molecule in the mesopore volume depends strongly on the nominal pore diameter of the number-averaged pore size distribution; the mesopore size therefore has to be adapted to the molecular size of the analyte, in particular for peptides and proteins.

The study of the flow-through pores of silica monoliths demonstrated that the surface-to-volume ratio of the skeletons and the external porosity are decisive for column efficiency; the latter is independent of the flow-through pore diameter. The flow-through pore characteristics were assessed by direct and indirect approaches, and theoretical column efficiency curves were derived. The study showed that, next to the surface-to-volume ratio, the total porosity and its distribution between flow-through pores and mesopores have a substantial effect on the column plate number, especially as the extent of adsorption increases. Column efficiency increases with decreasing flow-through pore diameter, decreases with increasing external porosity, and increases with total porosity, though this tendency has a limit due to the heterogeneity of the studied monolithic samples. We found that the maximum efficiency of the studied monolithic research columns could be reached at a skeleton diameter of ~0.5 µm. Furthermore, when the intention is to maximize column efficiency, more homogeneous monoliths should be prepared.
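As a rough illustration of the Hagen-Poiseuille reasoning mentioned above, the sketch below backs an equivalent flow-through pore radius out of permeability data by treating the flow-through pores as a bundle of parallel capillaries; both the numbers and the parallel-capillary assumption are illustrative, not the thesis's refined model:

```python
# Equivalent flow-through pore radius from permeability data via
# Hagen-Poiseuille, assuming N identical parallel capillaries:
# Q = N * pi * r^4 * dP / (8 * eta * L). All numbers are invented.
import math

def equivalent_pore_radius(Q, dP, eta, L, n_pores):
    """Q: flow (m^3/s), dP: pressure drop (Pa), eta: viscosity (Pa s),
    L: column length (m), n_pores: assumed number of parallel capillaries."""
    return (8.0 * eta * L * Q / (n_pores * math.pi * dP)) ** 0.25

r = equivalent_pore_radius(Q=1e-9, dP=5e6, eta=1e-3, L=0.1, n_pores=1e9)
print(f"equivalent flow-through pore radius ≈ {r * 1e9:.0f} nm")
```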

Relevance: 30.00%

Abstract:

Market challenges bring firms to collaborate with other organizations in order to create joint ventures, alliances and consortia, defined as "Interorganizational Networks" (IONs) (Provan, Fish and Sydow, 2007). Some of these IONs are managed through shared participant governance (Provan and Kenis, 2008): a team composed of entrepreneurs and/or directors of each firm in the ION. The research focuses on this kind of management team and is based on an input-process-output model: input variables (work group diversity, intra-team friendship network density) have a direct influence on the process (team identification, shared leadership, interorganizational trust, team trust and intra-team communication network density), which in turn influences the team outputs: individual innovation behaviors and team effectiveness (team performance, work group satisfaction and ION affective commitment). Data were collected on a sample of 101 entrepreneurs grouped in 28 ION governance teams, and the research hypotheses were tested through path analysis and multilevel models. As expected, team trust and shared leadership are positively and directly related to team effectiveness, while team identification and interorganizational trust are indirectly related to the team outputs. The friendship network density among team members has positive effects on team trust and on communication network density and, through the communication network density, it improves the teammates' ION affective commitment. Shared leadership and its effects on team effectiveness are fostered by higher levels of team identification and weakened by higher levels of work group diversity, specifically gender diversity. Finally, communication network density and shared leadership at the individual level are related to the frequency of individual innovative behaviors. The dissertation's results provide broader and more precise guidance on the management of interfirm networks through "shared" forms of governance.
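A minimal sketch of the input-process-output logic as two stacked OLS regressions (a simplified stand-in for the path analysis; the variable names and simulated data are invented, and the multilevel team-nested structure is ignored):

```python
# Stand-in for the path analysis: one regression for the process variable
# and one for the output. Data are simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 101                                        # entrepreneurs in the sample
diversity = rng.normal(size=n)                 # input
friendship = rng.normal(size=n)                # input
trust = 0.4 * friendship + rng.normal(size=n)  # process (simulated)
effective = 0.5 * trust + rng.normal(size=n)   # output (simulated)

inputs = sm.add_constant(np.column_stack([diversity, friendship]))
path_to_process = sm.OLS(trust, inputs).fit()                      # inputs -> process
path_to_output = sm.OLS(effective, sm.add_constant(trust)).fit()   # process -> output
print(path_to_process.params, path_to_output.params)
```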

Relevance: 30.00%

Abstract:

Optical networks, thanks to their high capacity, have become increasingly important in recent years, both because of the growing volume of exchanged data, driven above all by the wide diffusion of the Internet, and because of the need for real-time communication. Given its (relatively) long set-up times, this technology is not natively optimal for transporting the bursty traffic typical of today's telecommunications. Hybrid networks therefore try to combine the strengths of optical circuit switching and optical packet switching. This work focuses in particular on a hybrid network architecture called 3LIHON (3-Level Integrated Hybrid Optical Network). It provides three distinct quality-of-service (QoS) levels to meet different needs:
- Guaranteed Service Type (GST): similar to a circuit-switched service; no data loss is allowed.
- Statistically Multiplexed Real Time (SM/RT): similar to a packet-switched service; guarantees zero or very low delay within the network, allows a small data-loss rate, and admits bandwidth contention.
- Statistically Multiplexed Best Effort (SM/BE): similar to a packet-switched service; guarantees no delay bound between nodes and admits a low data-loss rate.
In a 3LIHON node, SM/BE traffic that cannot be served, e.g. because it is preempted by packets with a higher QoS level, is irretrievably lost. This also wastes the time and resources spent transmitting an SM/BE packet up to the point of its interruption. This work seeks to limit this undesirable behaviour as far as possible by adopting and comparing three strategies, which led to modifications of the standard 3LIHON node and to three variants of it.
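A toy single-wavelength model of the SM/BE preemption loss described above, where a higher-priority arrival interrupts the SM/BE packet in service and the slots already spent on it are wasted (arrival rates and packet lengths are invented, and none of the three proposed strategies is modelled):

```python
# Toy slotted-time model of SM/BE preemption loss on one wavelength: a
# higher-priority (GST or SM/RT) arrival takes the slot, and any partially
# transmitted SM/BE packet is discarded, wasting the work already done.
import random

random.seed(1)
wasted = completed = 0
be_left = 0                       # remaining slots of the SM/BE packet in service
for _ in range(100_000):
    if random.random() < 0.3:     # higher-priority packet takes the slot
        if be_left:
            wasted += 1           # partial SM/BE transmission is lost
        be_left = 0
    elif be_left:
        be_left -= 1
        completed += be_left == 0
    elif random.random() < 0.5:   # a new SM/BE packet starts service
        be_left = random.randint(1, 8)
print(f"SM/BE completed: {completed}, preempted (work wasted): {wasted}")
```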

Relevance: 30.00%

Abstract:

This thesis presents a new Artificial Neural Network (ANN) able to predict at once the main parameters representative of wave-structure interaction processes, i.e. the wave overtopping discharge, the wave transmission coefficient and the wave reflection coefficient. The new ANN has been specifically developed to provide managers and scientists with a tool that can be used efficiently for design purposes. Its development started with the preparation of a new, extended and homogeneous database collecting all available tests that report at least one of the three parameters, for a total of 16,165 data. The variety of structure types and wave attack conditions in the database includes smooth, rock and armour-unit slopes, berm breakwaters, vertical walls, low-crested structures, and oblique wave attacks. Some of the existing ANNs were compared and improved, leading to the selection of a final ANN whose architecture was optimized through an in-depth sensitivity analysis of the ANN training parameters. Each of the 15 selected input parameters represents a physical aspect of the wave-structure interaction process, describing the wave attack (wave steepness and obliquity, breaking and shoaling factors), the structure geometry (submergence, straight or non-straight slope, with or without berm or toe, presence or absence of a crown wall), or the structure type (smooth or covered by an armour layer, with permeable or impermeable core). The advanced ANN proposed here provides accurate predictions for all three parameters and is shown to overcome the limits of the traditional formulae and of the approaches adopted so far by some of the existing ANNs. The possibility of adopting just one model to obtain a handy and accurate evaluation of the overall performance of a coastal or harbour structure is the most important and exportable result of this work.
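A hedged sketch of the one-model, three-outputs idea, assuming a scikit-learn MLP over the 15 inputs (the thesis's optimised architecture and its 16,165-test database are not reproduced; the data below are placeholders):

```python
# Sketch of a single MLP mapping the 15 wave/structure parameters to
# [q, Kt, Kr]; architecture and data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(1000, 15)   # wave attack, geometry and structure-type inputs
Y = np.random.rand(1000, 3)    # [overtopping q, transmission Kt, reflection Kr]

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                 random_state=0)).fit(X, Y)
q, kt, kr = ann.predict(X[:1])[0]   # one structure, all three parameters at once
```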

Relevance: 30.00%

Abstract:

The aim of this work is to link two topics that have historically been disconnected. The first is the long "beyond GDP" debate, which has continued uninterrupted for about half a century. The second concerns the use of performance measurement and evaluation systems in the Italian public sector. The evolution of the debate on GDP is illustrated through a historical excursus of the critical thinking developed over roughly fifty years, analysing the arguments put forward by scholars to challenge the use of GDP as a universal measure of well-being. Taking up this suggestion, Istat, in collaboration with CNEL, launched a project to identify new indicators to place alongside GDP, capable of measuring not only economic growth but also social and sustainable well-being, analysing indicators for 12 identified well-being domains. The Istat-CNEL project was joined by the UrBES project, promoted by Istat and the coordination of metropolitan mayors of ANCI, which set up a network of metropolitan cities to experiment with measurement and comparison based on indicators of fair and sustainable urban well-being, adopting a project of the Municipality of Bologna and Laboratorio Urbano (a documentation, research and proposal centre on cities), which administered an online questionnaire to different targets; the answers to the open questions were processed with Taltac, a text-analysis software, in order to identify respondent "profiles" by associating the processing results with the structural variables of the questionnaire. In the final part, the services and projects delivered by the Municipality of Bologna are mapped onto the UrBES dimensions to assess the impact of public policies on citizens' quality of life and well-being, pointing out the critical issues arising from the lack of adequate data.

Relevance: 30.00%

Abstract:

This thesis addresses the classification of high-dimensional data, developing an algorithm based on Discriminant Analysis. It classifies samples using the variables taken in pairs, building a network from the pairs whose performance is sufficiently high. The algorithm then exploits topological properties of networks (in particular the search for subnetworks and centrality measures of individual nodes) to obtain various signatures (subsets of the initial variables) with optimal classification performance and low dimensionality (of the order of 10^1, at least a factor of 10^3 smaller than the number of starting variables in the problems considered). To do so, the algorithm comprises a network-definition stage and a signature selection and reduction stage, computing at each step the new classification accuracy via cross-validation tests (k-fold or leave-one-out). Given the high number of variables involved in the problems considered, of the order of 10^4, the algorithm was necessarily implemented on a High-Performance Computer, parallelising the most expensive parts of the C++ code, namely the actual computation of the discriminant and the final sorting of the results. The application studied here concerns high-throughput genetic data on cell-level gene expression, a field in which databases often consist of a large number of variables (10^4-10^5) against a small number of samples (10^1-10^2). In the medical-clinical field, determining low-dimensional signatures for the discrimination and classification of samples (e.g. healthy/diseased, responder/non-responder, etc.) is a problem of fundamental importance, for example for devising personalised therapeutic strategies for specific patient subgroups through diagnostic kits for expression-profile analysis applicable on a large scale. The analysis carried out in this thesis on various kinds of real data shows that the proposed method, also in comparison with other existing methods based or not on the network approach, delivers excellent performance, producing signatures with high classification performance while keeping the number of variables used for this purpose very small.
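A hedged sketch of the network-construction step, assuming a two-variable linear discriminant scored by cross-validation for every pair and degree centrality as the signature criterion (the threshold, the data and the selection rule are simplified placeholders for the thesis's subnetwork search):

```python
# Sketch: score every variable pair with a 2-feature discriminant via
# cross-validation, keep high-performing pairs as network edges, then rank
# variables by degree centrality as a candidate low-dimensional signature.
import itertools
import numpy as np
import networkx as nx
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))        # few samples, many variables (placeholder)
y = rng.integers(0, 2, size=60)

G = nx.Graph()
for i, j in itertools.combinations(range(X.shape[1]), 2):
    acc = cross_val_score(LinearDiscriminantAnalysis(),
                          X[:, [i, j]], y, cv=5).mean()
    if acc > 0.6:                    # keep only sufficiently performing pairs
        G.add_edge(i, j, weight=acc)

signature = sorted(G.degree, key=lambda nd: nd[1], reverse=True)[:10]
print("candidate signature (variable, degree):", signature)
```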