927 results for "metodo ECM sistemi di rating"


Relevance:

100.00%

Publisher:

Abstract:

Modern version control systems such as "git" or "svn" are based on a variety of algorithms that compute differences (so-called diffing algorithms) between documents (called versions). One of the most successful algorithms in this field is the well-known Unix "diff". This program detects the changes needed to transform one document into another in terms of added or removed lines of text. The set of such changes is called a "delta". The growing demand for and adoption of semi-structured documents (XML documents in particular) by the computing community, especially on the web, has motivated research into more refined diffing algorithms that work better on this class of documents. Several successful solutions have been proposed: high-performance algorithms capable of detecting differences subtler than the mere addition or removal of text, such as the movement of entire nodes, their reordering, and even their encapsulation. However, these algorithms lack versatility. The encapsulation of a node may be considered a difference that is too general (or too granular) in some contexts. In practice, every sector, public or commercial, interested in detecting differences between documents only ever cares about a very specific subset of them. Consider the Italian parliament, interested in the comparative analysis of legislative documents, as opposed to a hospital interested in diagnostics based on a patient's clinical history. This thesis shows that it is possible to develop an algorithm that detects the differences between two semi-structured documents (in terms of the shortest sequence of edits needed to transform one into the other) while being parameterized over the transformation functions operating on those documents.
The essential definitions and the main results underlying the theory of differences are discussed, and it is shown how weaker assumptions make the diffing algorithm in question non-computable.
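As a baseline, the line-oriented delta computed by Unix diff can be reproduced with Python's standard difflib module (an illustrative sketch, not the parameterized algorithm developed in the thesis):

```python
import difflib

old = ["the quick fox", "jumps over", "the dog"]
new = ["the quick brown fox", "jumps over", "the lazy dog"]

# unified_diff emits a delta: lines to remove (-) and lines to add (+)
delta = list(difflib.unified_diff(old, new, lineterm=""))
for line in delta:
    print(line)
```

The delta contains exactly the removals and additions needed to turn `old` into `new`; richer XML-aware algorithms generalize this idea to tree-structured edits.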


In the last few years, a new generation of Business Intelligence (BI) tools called BI 2.0 has emerged to meet the new and ambitious requirements of business users. BI 2.0 not only introduces brand new topics, but in some cases re-examines past challenges from new perspectives, driven by market changes and needs. In this context, the term pervasive BI has gained increasing interest as an innovative and forward-looking perspective. This thesis investigates three different aspects of pervasive BI: personalization, timeliness, and integration. Personalization refers to the capacity of BI tools to customize the query result according to the user who takes advantage of it, facilitating the fruition of BI information by different types of users (e.g., front-line employees, suppliers, customers, or business partners). In this direction, the thesis proposes a model for On-Line Analytical Processing (OLAP) query personalization that reduces the query result to the information most relevant to the specific user. Timeliness refers to the timely provision of business information for decision-making. In this direction, this thesis defines a new Data Warehouse (DW) methodology, Four-Wheel-Drive (4WD), that combines traditional development approaches with agile methods; the aim is to accelerate project development and reduce software costs, so as to decrease the number of DW project failures and favour the penetration of BI tools even in small and medium companies. Integration refers to the ability of BI tools to let users access information wherever it can be found, using the device they prefer. To this end, this thesis proposes the Business Intelligence Network (BIN), a peer-to-peer data warehousing architecture in which a user can formulate an OLAP query on its own system and retrieve relevant information from both the local system and the DWs of the network, preserving its autonomy and independence.
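To give an intuition of OLAP query personalization, the following sketch trims a query result to the rows a given user profile scores highest (the names, weights, and scoring rule are hypothetical illustrations, not the model proposed in the thesis):

```python
# Illustrative sketch: reduce an OLAP query result to the rows most
# relevant to a user profile expressed as preference weights.
def personalize(rows, preferences, top_k=2):
    """Score each row by the user's interest in its dimension values
    and return only the top_k most relevant rows."""
    def score(row):
        return sum(preferences.get(value, 0.0) for value in row.values())
    return sorted(rows, key=score, reverse=True)[:top_k]

result = [
    {"region": "EMEA", "product": "laptop"},
    {"region": "APAC", "product": "phone"},
    {"region": "EMEA", "product": "phone"},
]
# A hypothetical front-line employee mostly interested in EMEA and phones
prefs = {"EMEA": 1.0, "phone": 0.5}
print(personalize(result, prefs))  # EMEA phone row ranks first
```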


The realization of non-classical states of the electromagnetic field and of spin systems has driven theoretical and experimental research for at least thirty years. The study of cold atoms in dipole traps brings this goal closer, and also offers the possibility of performing experiments on Bose-Einstein condensates of interest for atom interferometry. Protecting the coherence of a macroscopic spin system through feedback is in turn a goal that could lead to major developments in metrology and quantum information. An introduction is given to two types of measurement not covered in standard university curricula: quantum non-demolition (QND) measurement and weak measurement. Both are exploited in the context of radiation-matter interaction with few photons or few atoms (cavity QED and atom boxes). A treatment of dipole traps for neutral atoms and of common cooling methods is needed to introduce the BIARO experiment (a French acronym for Bose-Einstein condensate for Atomic Interferometry in a high-finesse Optical Resonator), which pursues metrology through the use of Bose-Einstein condensates and feedback systems. The design, realization, and characterization of a servo controller for stabilizing the optical power of a laser are described. The device is needed to compensate the differential light shift induced by a 1550 nm laser beam used to create a dipole trap for rubidium atoms. This compensation plays an essential role in improving the QND measurements needed, in a feedback scheme, to maintain coherence in collective spin systems, as recently demonstrated.
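The power-stabilization servo described above can be condensed into a discrete PI loop. The sketch below uses hypothetical gains and a toy actuator response, not the BIARO controller:

```python
# Minimal discrete PI servo loop driving a measured optical power toward
# a setpoint. Gains (kp, ki) and the toy actuator model are hypothetical;
# a real servo would act on the trap light via an optical modulator.
def run_pi_loop(setpoint, power0, kp=0.5, ki=0.2, steps=200, dt=1e-3):
    power, integral = power0, 0.0
    for _ in range(steps):
        error = setpoint - power
        integral += error * dt
        drive = kp * error + ki * integral  # PI correction
        power += drive                      # toy actuator: power follows drive
    return power

print(run_pi_loop(setpoint=1.0, power0=0.2))  # settles near 1.0
```

The integral term removes the steady-state offset a purely proportional loop would leave, which is why PI (rather than P) control is the usual choice for power stabilization.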


This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic Scheduling Problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns the assignment of times and resources to a set of activities, to be repeated indefinitely, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for computing a maximum-throughput mapping of applications specified as SDF graphs onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions.
The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
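The idea behind a modular precedence can be illustrated with a simple feasibility check: activity j must start at least d_i time units after activity i, up to an integer number of period boundaries (a hypothetical sketch of the concept, not the thesis filtering algorithm):

```python
# Sketch of a modular precedence check for cyclic scheduling: within a
# schedule of period lambda_, activity j may satisfy its precedence with
# activity i in a later iteration, delta periods ahead.
def modular_precedence_ok(s_i, d_i, s_j, lambda_, delta):
    """True if j's start s_j respects precedence on i (start s_i,
    duration d_i), with delta period boundaries crossed between them."""
    return s_j + delta * lambda_ >= s_i + d_i

# With period 10, i starting at 8 with duration 4, j starting at 2:
# the precedence holds only if j belongs to the next iteration (delta=1).
print(modular_precedence_ok(8, 4, 2, 10, delta=0))  # False: 2 < 12
print(modular_precedence_ok(8, 4, 2, 10, delta=1))  # True: 12 >= 12
```

Because delta multiplies the period, the constraint is non-linear in lambda_, which is why fixing the period linearizes the problem in the generate-and-test approaches mentioned above.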


The quest for universal memory is driving the rapid development of memories with superior all-round capabilities in non-volatility, high speed, high endurance, and low power. The memory subsystem accounts for a significant share of the cost and power budget of a computer system. Current DRAM-based main memory systems are starting to hit their power and cost limits. To resolve this issue, the industry is improving existing technologies such as Flash and exploring new ones. Among these new technologies is Phase Change Memory (PCM), which overcomes some of the shortcomings of Flash, such as durability and scalability. This alternative non-volatile memory technology, which uses the resistance contrast of phase-change materials, offers higher density than DRAM and can help increase the main memory capacity of future systems while remaining within cost and power constraints. Chalcogenide materials can suitably be exploited for manufacturing phase-change memory devices. Charge transport in the amorphous chalcogenide GST used for memory devices is modeled using two contributions: hopping of trapped electrons and motion of band electrons in extended states. Crystalline GST exhibits an almost Ohmic I(V) curve. In contrast, amorphous GST shows a high resistance at low biases while, above a threshold voltage, a transition takes place from a highly resistive to a conductive state, characterized by negative differential resistance. A clear and complete understanding of the threshold behavior of the amorphous phase is fundamental for exploiting such materials in the fabrication of innovative non-volatile memories. The feedback that produces the snapback phenomenon is described as a filamentation in energy, controlled by electron-electron interactions between trapped electrons and band electrons. The model thus derived is implemented within a state-of-the-art simulator.
An analytical version of the model is also derived and is useful for discussing the snapback behavior and the scaling properties of the device.


The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with the implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended, first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges, one of which was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH).
The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
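The addressing capability that motivates the ABH can be illustrated with a simplified addressable min-heap, in which a handle map lets any pending timer be located and cancelled in O(log n). This sketch uses an array plus an index map rather than the pointer-based layout of the ABH itself:

```python
# Simplified addressable min-heap for timers: each entry is reachable by
# handle, so an arbitrary timer can be cancelled without a linear scan.
class AddressableHeap:
    def __init__(self):
        self._heap = []   # (deadline, handle) pairs in heap order
        self._pos = {}    # handle -> current index in _heap

    def push(self, handle, deadline):
        self._heap.append((deadline, handle))
        self._pos[handle] = len(self._heap) - 1
        self._sift_up(len(self._heap) - 1)

    def pop_min(self):
        top = self._heap[0]
        self._swap(0, len(self._heap) - 1)
        self._heap.pop()
        del self._pos[top[1]]
        if self._heap:
            self._sift_down(0)
        return top

    def cancel(self, handle):
        i = self._pos[handle]
        self._swap(i, len(self._heap) - 1)
        self._heap.pop()
        del self._pos[handle]
        if i < len(self._heap):       # restore heap order around the hole
            self._sift_down(i)
            self._sift_up(i)

    def _swap(self, i, j):
        h = self._heap
        h[i], h[j] = h[j], h[i]
        self._pos[h[i][1]], self._pos[h[j][1]] = i, j

    def _sift_up(self, i):
        h = self._heap
        while i > 0 and h[i] < h[(i - 1) // 2]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        h, n = self._heap, len(self._heap)
        while True:
            child = 2 * i + 1
            if child >= n:
                return
            if child + 1 < n and h[child + 1] < h[child]:
                child += 1
            if h[i] <= h[child]:
                return
            self._swap(i, child)
            i = child

timers = AddressableHeap()
timers.push("t1", 30)
timers.push("t2", 10)
timers.push("t3", 20)
timers.cancel("t2")          # O(log n) removal via the handle map
print(timers.pop_min())      # (20, 't3')
```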


This work analyzes the current state of energy storage applied to the case of wind energy. After an introduction presenting why energy storage systems are needed and what their advantages are, a case study of a typical wind power plant is proposed: starting from the production potential of the plant and from the role the storage system must play, the most suitable system is selected and sized.


Mainstream hardware is becoming parallel, heterogeneous, and distributed on every desk, in every home, and in every pocket. As a consequence, in recent years software has been taking an epochal turn toward concurrency, distribution, and interaction, pushed by the evolution of hardware architectures and the growth of network availability. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, the dissertation first constructs the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focuses on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. Then, the perspective shifts from the development of intelligent software systems toward general-purpose software development.
Using the expertise matured during the background phase, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development and, at the same time, provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
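The actor paradigm that serves as the starting point above can be condensed into a few lines: an actor is an object whose state is touched only by a dedicated thread consuming a private mailbox. A minimal Python sketch, unrelated to the simpAL implementation:

```python
import queue
import threading

class Actor:
    """Each actor owns a mailbox and a thread that processes it serially,
    so the actor's state is never accessed concurrently."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        """Asynchronous, non-blocking message send."""
        self._mailbox.put(message)

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message is None:    # poison pill: stop the actor
                break
            self.receive(message)

class Counter(Actor):
    def __init__(self):
        self.count = 0             # touched only by the actor's own thread
        super().__init__()

    def receive(self, message):
        self.count += message

counter = Counter()
for n in (1, 2, 3):
    counter.send(n)
counter.send(None)                 # shut the actor down
counter._thread.join()
print(counter.count)               # 6
```

Agent-oriented languages add higher-level, human-inspired concepts (goals, plans, observable state) on top of this basic mailbox-and-thread loop.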


A control-oriented model of a Dual Clutch Transmission (DCT) was developed for real-time Hardware In the Loop (HIL) applications, to support model-based development of the DCT controller. The model is an innovative attempt to reproduce the fast dynamics of the actuation system while maintaining a step size large enough for real-time applications. The model comprises a detailed physical description of the hydraulic circuit, clutches, synchronizers, and gears, and simplified vehicle and internal combustion engine sub-models. As the oil circulating in the system has a large bulk modulus, the pressure dynamics are very fast, possibly causing instability in a real-time simulation; the same challenge affects the servo valve dynamics, due to the very small masses of the moving elements. Therefore, the hydraulic circuit model has been modified and simplified without losing physical validity, in order to adapt it to the real-time simulation requirements. The results of offline simulations were compared to on-board measurements to verify the validity of the developed model, which was then implemented in a HIL system and connected to the TCU (Transmission Control Unit). Several tests have been performed: electrical failure tests on sensors and actuators; hydraulic and mechanical failure tests on hydraulic valves, clutches, and synchronizers; and application tests covering all the main features of the control performed by the TCU. Being based on physical laws, the model simulates a plausible reaction of the system in every condition. The first intensive use of the HIL application led to the validation of the new safety strategies implemented in the TCU software. A test automation procedure has been developed to permit the execution of a pattern of tests without user interaction; fully repeatable tests can be performed for non-regression verification, allowing new software releases to be tested in fully automatic mode.
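The instability mentioned above is the classic stiffness problem: fast dynamics force a fixed-step solver to use tiny steps. It can be reproduced on a toy first-order lag dx/dt = -x/tau, where explicit Euler diverges as soon as the step exceeds 2*tau (illustrative values, not actual DCT hydraulic parameters):

```python
# Why stiff (fast) dynamics break a fixed-step real-time simulation:
# explicit Euler on dx/dt = -x/tau is stable only for dt < 2*tau.
def euler_final_value(tau, dt, steps=100, x0=1.0):
    x = x0
    for _ in range(steps):
        x += dt * (-x / tau)   # explicit Euler update
    return x

print(euler_final_value(tau=1e-3, dt=1e-3))   # decays toward 0
print(euler_final_value(tau=1e-3, dt=5e-3))   # diverges: |1 - dt/tau| = 4
```

This is why the hydraulic sub-models had to be simplified: a real-time HIL step large enough for the whole plant would otherwise violate the stability bound of the fastest pressure and valve dynamics.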


The world energy situation, tightly dependent on fossil fuels, calls for a significant change of course toward energy sources that are more environmentally sustainable and that, at the same time, avoid negotiations in competitive markets such as the oil market. An effective way to overcome the limits of renewable sources is the use of energy storage systems; these systems make it possible to convert electrical energy into other forms of energy (mechanical, chemical, etc.) so that it can be stored until the moment of use. This work presents the main technologies for storing energy, describing the plants along with the main advantages and drawbacks of each. The fundamental parameters for analyzing these technologies are then established in order to determine, on the basis of the functions to be performed, which are best suited to the purpose.


Thrust fault-related folds in carbonate rocks are characterized by deformation accommodated by different structures, such as joints, faults, pressure solution seams, and deformation bands. Defining the development of fracture systems related to the folding process is significant for both theoretical and practical purposes. Fracture systems are useful constraints for understanding the kinematic evolution of the fold. Furthermore, understanding the relationships between folding and fracturing provides a noteworthy contribution to reconstructing the geodynamic and structural evolution of the studied area. Moreover, as fold-related fractures influence fluid flow through rocks, fracture systems are relevant for energy production (geothermal studies, methane and CO2 storage, and hydrocarbon exploration) and for environmental and social issues (pollutant distribution, aquifer characterization). The PhD project presents the results of a study carried out in a multilayer carbonate anticline characterized by different mechanical properties. The aim of this study is to understand the factors that influence fracture formation and to define their temporal sequence during the folding process. The studied area is located in the Cingoli anticline (Northern Apennines), which consists of a pelagic multilayer with sequences of different mechanical stratigraphy. A multi-scale analysis has been carried out in several outcrops located in different structural positions. This project shows that the conceptual sketches proposed in the literature and the strain distribution models outline well the geometrical orientation of most of the fracture sets observed in the Cingoli anticline. On the other hand, the present work highlights the relevance of the mechanical stratigraphy, in particular in controlling the type of fractures formed (e.g., pressure solution seams, joints, or shear fractures) and their subsequent evolution.
Through a multi-scale analysis, and on the basis of the temporal relationships between fracture sets and their orientation with respect to layering, I also suggest a conceptual model for the formation of fracture systems.


Pervasive Sensing is a recent research trend that aims at providing widespread computing and sensing capabilities to enable the creation of smart environments that can sense, process, and act by considering input coming from both people and devices. The capabilities necessary for Pervasive Sensing are nowadays available on a plethora of devices, from embedded devices to PCs and smartphones. The wide availability of new devices and the large amount of data they can access enable a wide range of novel services in different areas, spanning from simple data collection systems to socially-aware collaborative filtering. However, the strong heterogeneity and unreliability of devices and sensors pose significant challenges. So far, existing works on Pervasive Sensing have focused only on limited portions of the whole stack of available devices and of the data they can use, proposing and developing mainly vertical solutions. The push from academia and industry for this kind of service shows that the time is ripe for a more general support framework for Pervasive Sensing solutions, able to enhance frail architectures, promote a well-balanced usage of resources on different devices, and enable the widest possible access to sensed data, while ensuring minimal energy consumption on battery-operated devices. This thesis focuses on pervasive sensing systems in order to extract design guidelines as the foundation of a comprehensive reference model for multi-tier Pervasive Sensing applications. The validity of the proposed model is tested in five different scenarios that present peculiar and different requirements, and different hardware and sensors. The ease of mapping from the proposed logical model to the real implementations and the positive results of the performance campaigns prove the quality of the proposed approach and offer a reliable reference model, together with a direction for the design and deployment of future Pervasive Sensing applications.


This work belongs to a broad line of research comprising many studies on urban traffic, and hence on road congestion, whose impact on the quality of life in large cities has grown ever more significant with urbanization. From the first investigations, dating back to the first half of the twentieth century and focused on a single road, mathematical modeling has recently developed in particular for urban networks. The difficulties encountered with urban networks can be summarized, first, in the variability of traffic flow over the course of the day; second, in the existence of critical points that change over time; incidentally, exceptional events may occur, due to the natural as well as the social environment. Every model, by its reductionist nature, can address only some specific issues, and the choice to operate selectively is a response to the complexity of the phenomenon. With this methodological guidance, we chose to focus on the effects of endogenous fluctuations of traffic flows in a Manhattan-type road network. To model traffic we use a dynamical system in which the optimal velocity is based on the relation of the Fundamental Diagram postulated by Greenshields (1935).
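The Greenshields relation is a linear decrease of the optimal velocity with density, v(k) = v_free * (1 - k / k_jam), so that the flow q = k * v(k) peaks at half the jam density. A small sketch with hypothetical free-flow speed and jam density:

```python
# Greenshields (1935) fundamental diagram: optimal velocity falls
# linearly with density. v_free and k_jam below are hypothetical values.
def greenshields_velocity(k, v_free=50.0, k_jam=120.0):
    """Optimal velocity (km/h) at vehicle density k (veh/km)."""
    return v_free * (1.0 - k / k_jam)

def flow(k, v_free=50.0, k_jam=120.0):
    """Traffic flow q = k * v(k); maximal at k = k_jam / 2."""
    return k * greenshields_velocity(k, v_free, k_jam)

print(flow(60))                    # peak flow at k_jam/2: 1500.0 veh/h
print(greenshields_velocity(120))  # at jam density the velocity is 0.0
```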


Nowadays more investigation is needed into alternative rearing systems that can improve poultry welfare and guarantee high-quality, safe meat products. This thesis work focused on evaluating the oxidative stability of poultry meats obtained with different rearing systems, diets (supplemented with bioactive compounds), and packaging conditions. The thesis work was divided into the following parts:
- Evaluation of the effects of different rearing systems on the quality, fatty acid composition, and oxidative stability of poultry thigh and breast meat belonging to different product categories ("rotisserie" and "cut-up" carcasses);
- Evaluation of the effects of different rearing systems and packaging conditions on the shelf-life of poultry thigh meat stored at 4°C for 14 days, and of the effects of feed supplementation with thymol (a control diet and diets with 2 different concentrations of thymol) and packaging conditions on lipid oxidation during the shelf-life of poultry thigh meat (stored at 4°C for 14 days). The oxidative stability of poultry meat was studied by means of spectrophotometric determinations of the peroxide value and of thiobarbituric acid reactive substances;
- Evaluation of the anti-inflammatory effects of different bioactive compounds (thymol, luteolin, tangeretin, sulforaphane, polymethoxyflavones, curcumin derivatives) in LPS-stimulated RAW 264.7 macrophage cells in vitro, in order to study their mechanisms of action in more depth. Cell viability (MTT assay), nitrite concentration, and the protein profile were evaluated.
The study focused on identifying potential dietary bioactive compounds in order to investigate their biological activity and possible synergistic effects, and to develop suitable new strategies for the long-term promotion of human health, in particular against cancer.


The theoretical and experimental implications of adopting, within special relativity, a synchronization criterion (called absolute) different from the standard one are investigated. The choice of absolute synchronization is justified by epistemological considerations on the status of phenomena such as length contraction and time dilation. Besides providing a different interpretation, absolute synchronization extends the scope of special relativity, since it can also be implemented in accelerated reference frames. This extension makes it possible to treat phenomena in inertial and accelerated frames in a unified way. Introducing absolute synchronization implies a modification of the Lorentz transformations. One feature of these new transformations (called inertial) is that the time transformation is independent of the spatial coordinates. The inertial transformations are obtained in the general case between two reference frames with arbitrarily oriented (absolute) velocities u1 and u2. It is shown that the inertial transformations can form a group, provided that frames that are not physically realizable, because superluminal, are also taken into account. Born-rigid motion of an extended body is analyzed under absolute synchronization. On the basis of the inertial transformations, the transformations for the electromagnetic fields and the equations of these fields (which replace Maxwell's equations) are derived. It is shown that these equations admit source-free solutions that propagate through space as generally anisotropic waves, in agreement with what the inertial transformations predict.
Applying this electromagnetic theory to accelerated systems reveals phenomena never observed before which, although not in contradiction with standard relativity, force its interpretation. An experiment in which one of these phenomena is measurable is proposed and described.
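For the one-dimensional case, the inertial transformations associated with absolute synchronization are commonly written as follows (a sketch for a frame of absolute velocity v along x, with \(\gamma = 1/\sqrt{1 - v^2/c^2}\); note that, as stated above, the time transformation does not involve the spatial coordinates):

```latex
% Inertial transformations, one-dimensional sketch
x' = \gamma\,(x - v t), \qquad
y' = y, \qquad
z' = z, \qquad
t' = \frac{t}{\gamma}
```

Compared with the Lorentz transformation \(t' = \gamma\,(t - vx/c^2)\), only the time equation differs, and the difference is exactly the choice of synchronization convention.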