24 results for Model-driven Architecture, Goal-Oriented design, usability
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The discovery of new materials and their functions has always been a fundamental component of technological progress. Nowadays, the quest for new materials is stronger than ever: sustainability, medicine, robotics and electronics are all key areas that depend on the ability to create specifically tailored materials. However, designing materials with desired properties is a difficult task, and the complexity of the discipline makes it hard to identify general criteria. While scientists have developed a set of best practices (often based on experience and expertise), this is still a trial-and-error process. The task becomes even more complex when dealing with advanced functional materials: their properties depend on structural and morphological features, which in turn depend on fabrication procedures and environment, and subtle alterations lead to dramatically different results. Because of this, materials modelling and design is one of the most prolific research fields. Many techniques and instruments are continuously developed to enable new possibilities, both in the experimental and computational realms, and scientists strive to adopt cutting-edge technologies in order to make progress. However, the field is strongly affected by unorganized file management and by the proliferation of custom data formats and storage procedures, in both experimental and computational research. Results are difficult to find, interpret and re-use, and a huge amount of time is spent interpreting and re-organizing data. This also strongly limits the application of data-driven and machine learning techniques. This work introduces possible solutions to the problems described above. Specifically, it covers: developing features for specific classes of advanced materials and using them to train machine learning models that accelerate computational predictions for molecular compounds; developing methods for organizing non-homogeneous materials data; automating the process of using device simulations to train machine learning models; and dealing with scattered experimental data and using them to discover new patterns.
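As an illustration of the feature-based workflow described above (a minimal sketch, not the thesis code: the descriptor set, data and target property are hypothetical placeholders), a surrogate regression model can be trained on pre-computed molecular features and then queried in place of costly simulations:

    # Minimal sketch (not the thesis code): a surrogate model trained on
    # hand-crafted molecular descriptors to accelerate property predictions.
    # Feature names, data and target property are hypothetical placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # Each row: pre-computed descriptors for one molecular compound,
    # e.g. [n_atoms, dipole moment, HOMO-LUMO gap, polarizability].
    rng = np.random.default_rng(0)
    X = rng.random((200, 4))
    y = X @ np.array([0.2, 1.5, -0.7, 0.3]) + 0.05 * rng.standard_normal(200)  # stand-in property

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")

    model.fit(X, y)              # the trained surrogate replaces costly simulations
    print(model.predict(X[:3]))  # fast property estimates for new compounds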
Abstract:
Biomedical analyses are becoming increasingly complex, with respect to both the type of data to be produced and the procedures to be executed, and this trend is expected to continue in the future. The development of information and protocol management systems that can sustain this challenge is therefore becoming an essential enabling factor for all actors in the field. The use of custom-built solutions that require the biology domain expert to acquire or procure software engineering expertise in the development of the laboratory infrastructure is not fully satisfactory, because it incurs undesirable mutual knowledge dependencies between the two camps. We propose instead an infrastructure concept that enables domain experts to express laboratory protocols using proper domain knowledge, free from the mediation of software implementation artefacts. In the system that we propose this is made possible by basing the modelling language on an authoritative domain-specific ontology and then using modern model-driven architecture technology to transform the user models into software artefacts ready for execution on a multi-agent execution platform specialized for biomedical laboratories.
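To make the transformation step concrete, the following is a deliberately tiny, hypothetical sketch (not the actual platform or its modelling language): a domain-level protocol description, whose terms are assumed to come from a laboratory ontology, is mechanically turned into executable task stubs for an agent-based runtime:

    # Illustrative sketch only (not the thesis platform): a model-to-text step
    # that turns a domain-level protocol model into executable task stubs for
    # an agent platform. All names and operations are hypothetical.
    protocol = {
        "name": "DNA_extraction",
        "steps": [
            {"op": "centrifuge", "sample": "S1", "minutes": 5},
            {"op": "add_reagent", "sample": "S1", "reagent": "lysis_buffer"},
        ],
    }

    def generate_agent_plan(model: dict) -> str:
        """Transform the user-level protocol model into code-like artefacts,
        keeping the domain expert free from implementation details."""
        lines = [f"plan {model['name']}:"]
        for i, step in enumerate(model["steps"], 1):
            args = ", ".join(f"{k}={v!r}" for k, v in step.items() if k != "op")
            lines.append(f"  task_{i}: dispatch({step['op']!r}, {args})")
        return "\n".join(lines)

    print(generate_agent_plan(protocol))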
Abstract:
Two of the main features of today's complex software systems, such as pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware to exploit tuple-based coordination models in the engineering of complex software systems, since they intrinsically provide coordinated components with communication uncoupling (see the references therein for further details). An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely, scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated. Handling knowledge in tuple-based systems induces problems in terms of syntax (e.g., two tuples containing the same data may not match due to differences in the tuple structure) and, mostly, of semantics (e.g., two tuples representing the same information may not match because of the different syntax adopted). Until now, the problem has been faced by exploiting tuple-based coordination within a middleware for knowledge-intensive environments, e.g. experiments with tuple-based coordination within a Semantic Web middleware (analogous approaches are surveyed in the references). However, such approaches appear to be designed to tackle the design of coordination for specific application contexts, like the Semantic Web and Semantic Web Services, and they result in rather involved extensions of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space where the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps the simplicity of tuples and tuple matching as much as possible.
By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components. The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model on top of an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem to be suitable as coordination media.
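For illustration only (a minimal sketch, not the actual tuple centre technology): a Linda-like tuple space whose behaviour can be programmed to react to insertion events, with a purely syntactic matching step that a semantic tuple centre would replace with ontology-based equivalence checking:

    # Minimal sketch of the ideas above, not the thesis infrastructure:
    # a programmable tuple space that reacts to 'out' events, with syntactic
    # template matching standing in for semantic (ontology-based) matching.
    class TupleCentre:
        def __init__(self):
            self.tuples = []
            self.reactions = []          # coordination laws fired on 'out' events

        def on_out(self, reaction):
            self.reactions.append(reaction)

        def out(self, tpl):              # insert a tuple and trigger reactions
            self.tuples.append(tpl)
            for react in self.reactions:
                react(self, tpl)

        def rd(self, template):          # non-destructive read by template
            matches = [t for t in self.tuples if self._match(template, t)]
            return matches[0] if matches else None

        @staticmethod
        def _match(template, tpl):
            # Purely syntactic matching; a semantic tuple centre would instead
            # check equivalence against a shared domain ontology.
            return len(template) == len(tpl) and all(
                a is None or a == b for a, b in zip(template, tpl))

    tc = TupleCentre()
    tc.on_out(lambda c, t: c.tuples.append(("log", t)))   # a simple coordination law
    tc.out(("temperature", "room1", 21))
    print(tc.rd(("temperature", "room1", None)))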
Abstract:
The research fills the historiographic gap concerning Leonardo Ricci's work in the United States, focusing on the span 1952-1972 as a fundamental period for the architect's research, which moved from the project for the community space to macrostructures. The period considered extends from Ricci's first trip to the United States to the date of his resignation from the University of Florida, one year before his resignation from the deanship of the faculty of architecture of Florence (1973). The research philologically retraces the stages of Ricci's activity in the U.S.A., unveiling the premises and results of his American transfer and the extent to which it marked a turning point for his work as educator and designer and for the wider historiographic context of the Sixties. The American transfer helped him ground his belief in avoiding a priori morphological results in favour of what he called the "form-act" design method. Ricci's research in the U.S.A. is described in his books Anonymous (XX century) and City of the Earth (unpublished). In them and in Ricci's projects one common thread is traceable: the application of the "form-act" as the best tool to conceive urban design, a discipline established in the United States during Ricci's first stay at M.I.T., in which he found the balance point between architecture and urban planning, between the architect's sign and his being anonymous, between the collective and the individual dimension. Together with the notions of "anonymous architecture" and "form-act", urban design and the "open work" are the key words to understand Ricci's work in the United States and in Italy. Urban design's main goal, to design the city as a collective work of art, was the resolution of the dichotomous research that enlivened Ricci's work and one possible answer to that tension, useful for him to seek the truth of architecture.
Abstract:
Self-organisation is increasingly being regarded as an effective approach to tackle the complexity of modern systems. The self-organisation approach allows the development of systems that exhibit complex dynamics and adapt to environmental perturbations without requiring complete knowledge of the future surrounding conditions. However, the development of self-organising systems (SOS) is driven by different principles with respect to traditional software engineering. For instance, engineers typically design systems by combining smaller elements, where the composition rules depend on the reference paradigm but typically produce predictable results. Conversely, SOS display non-linear dynamics, which can hardly be captured by deterministic models, and, although robust with respect to external perturbations, are quite sensitive to changes in their inner working parameters. In this thesis, we describe methodological aspects concerning the early design stage of SOS built on the multi-agent paradigm: in particular, we refer to the A&A metamodel, where MAS are composed of agents and artefacts, i.e. environmental resources. Then, we describe an architectural pattern extracted from a recurrent solution in designing self-organising systems: this pattern is based on a MAS environment formed by artefacts, modelling non-proactive resources, and environmental agents acting on artefacts so as to enable self-organising mechanisms. In this context, we propose a scientific approach for the early design stage of the engineering of self-organising systems: the process is iterative, and each cycle is articulated in four stages: modelling, simulation, formal verification, and tuning. During the modelling phase we mainly rely on the existence of a self-organising strategy observed in Nature and, ideally, already encoded as a design pattern. Simulations of an abstract system model are used to drive design choices until the required quality properties are obtained, thus providing guarantees that the subsequent design steps will lead to a correct implementation. However, system analysis exclusively based on simulation results does not provide sound guarantees for the engineering of complex systems: to this purpose, we envision the application of formal verification techniques, specifically model checking, in order to exactly characterise the system behaviours. During the tuning stage, parameters are tweaked in order to meet the target global dynamics and feasibility constraints. In order to evaluate the methodology, we analysed several systems: in this thesis we describe only three of them, i.e. the most representative one for each of the three years of the PhD course. We analyse each case study using the presented method, and describe the formal tools and techniques exploited.
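A schematic skeleton of the four-stage iterative cycle described above (modelling, simulation, formal verification, tuning); the stage functions are placeholders standing in for the actual models, simulators and model checkers used in the thesis:

    # Schematic skeleton of the iterative design cycle; the stage functions
    # are stand-ins, not the actual tools used in the thesis.
    import random

    def simulate(params, runs=50):
        # Stand-in stochastic model: fraction of runs meeting the target behaviour.
        return sum(random.random() < params["rate"] for _ in range(runs)) / runs

    def model_check(params):
        # Placeholder for exhaustive verification (e.g. probabilistic model checking).
        return params["rate"] > 0.6

    def tune(params):
        params["rate"] = min(1.0, params["rate"] + 0.1)   # tweak inner working parameters
        return params

    params, target = {"rate": 0.4}, 0.7
    for cycle in range(10):
        quality = simulate(params)
        if quality >= target and model_check(params):
            print(f"cycle {cycle}: design accepted (quality={quality:.2f})")
            break
        params = tune(params)
    else:
        print("feasibility constraints not met within the allowed cycles")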
Abstract:
This thesis is concerned with the role played by software tools in the analysis and dissemination of linguistic corpora and with their contribution to a more widespread adoption of corpora in different fields. Chapter 1 contains an overview of some of the most relevant corpus analysis tools available today, presenting their most interesting features and some of their drawbacks. Chapter 2 begins with an explanation of the reasons why none of the available tools appears to satisfy the requirements of the user community, and then continues with a technical overview of the current status of the new system developed as part of this work. This presentation is followed by highlights of the features that make the system appealing to users and to corpus builders (i.e. scholars willing to make their corpora available to the public). The chapter concludes with an indication of future directions for the project and information on the current availability of the software. Chapter 3 describes the design of an experiment devised to evaluate the usability of the new system in comparison to another corpus tool. Usage of the tool was tested in the context of a documentation task performed on a real assignment during a translation class in a master's degree course. In chapter 4 the findings of the experiment are presented on two levels of analysis: first, a discussion of how participants interacted with and evaluated the two corpus tools in terms of interface and interaction design, usability and perceived ease of use; then, an analysis of how users interacted with the corpora to complete the task and what kinds of queries they submitted. Finally, some general conclusions are drawn and areas for future work are outlined.
Abstract:
A two-dimensional model to analyze the distribution of magnetic fields in the airgap of PM electrical machines is studied. A numerical algorithm for the non-linear magnetic analysis of multiphase surface-mounted PM machines with semi-closed slots is developed, based on the equivalent magnetic circuit method. By using a modular geometry, whose basic element can be duplicated, it allows any winding distribution topology to be modelled. In comparison to FEA, it reduces the computing time and allows the parameter values to be changed directly in a user interface, without re-designing the model. The output torque and the radial forces acting on the moving part of the machine can be calculated. In addition, an analytical model for the calculation of radial forces in multiphase bearingless surface-mounted permanent magnet synchronous motors (SPMSM) is presented. It allows the amplitude and direction of the force to be predicted as a function of the torque current, the levitation current and the rotor position. It is based on the space vector method, which also allows the machine to be analyzed during transients. The calculations are carried out by expanding the analytical functions in Fourier series, taking all the possible interactions between stator and rotor MMF harmonic components into account; since the model is parametrized, the effects of the electrical and geometrical quantities of the machine can be analyzed. The model is used in the design of a control system for bearingless machines, as an accurate electromagnetic model integrated in a three-dimensional mechanical model, where one end of the motor shaft is constrained to simulate the presence of a mechanical bearing, while the other end is free, supported only by the radial forces developed by the interaction between the magnetic fields, so as to realize a bearingless system with three degrees of freedom. The complete model represents the design of the experimental system to be realized in the laboratory.
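As a point of reference for the radial-force computation described above, the usual textbook starting point (not necessarily the exact formulation adopted in the thesis) expresses the net radial force on the rotor through the Maxwell stress tensor, neglecting the tangential flux-density component:

    % Standard airgap-integral expression for the net radial force components
    % (textbook form; not necessarily the thesis formulation).
    F_x = \frac{l\,r}{2\mu_0}\int_0^{2\pi} B_g^2(\theta,t)\,\cos\theta\,\mathrm{d}\theta ,
    \qquad
    F_y = \frac{l\,r}{2\mu_0}\int_0^{2\pi} B_g^2(\theta,t)\,\sin\theta\,\mathrm{d}\theta

where l is the stack length, r the mean airgap radius and B_g the radial airgap flux density; expanding B_g in a Fourier series of stator and rotor MMF harmonics shows that only pairs of field harmonics whose pole-pair numbers differ by one contribute a net force, which is the mechanism exploited in bearingless SPMSM drives.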
Abstract:
This study is part of a research effort whose objective is the formulation of design criteria aimed at optimizing the energy performance of the wineries of small and medium-sized wine-producing companies. Specifically, the research aims to identify indicators that can assess the influence that the main design variables have on the energy demand of the building and on the temperature profiles inside the rooms used for wine storage and ageing. These indicators provide information on the energy performance of the building and on the suitability of non-air-conditioned rooms for wine storage. Since design is a complex multidisciplinary activity, the research involved the development of a calculation program able to manage and process data from different fields (engineering, architecture, agro-industrial production, etc.) and to return synthetic results through indicators devised for this purpose. The program was applied to a company case study representative of the production sector. The effects of two harvesting modes and of four different architectural solutions were examined; the building solutions derive from the combination of different thermal insulation levels and the presence or absence of underground rooms. The analyses of the case study relied on dynamic energy simulations, supported and validated by thermal and meteorological monitoring campaigns inside the company under study. The results obtained showed that the calculation program conceived in this study identifies the critical aspects of the building in terms of energy and of the "thermal comfort" of the wine, and allows an iterative revision of the design variables investigated. It is therefore a computerized assessment tool supporting design, aimed at optimizing the design process so as to combine, in an integrated manner, the objectives of product quality, production efficiency and economic and environmental sustainability.
Abstract:
A general description of the work presented in this thesis can be divided into three areas of interest: micropore fabrication, nanopore modification, and their applications. The first part of the thesis concerns a novel, reliable, cost-effective, portable, mass-producible, robust, and easy-to-use micropore flowcell that works on the basis of the RPS technique. In line with our first goal, which was to find alternative materials and processes that would shorten production times while lowering costs and improving signal quality, a polyimide film was used as the substrate in which precise pores were created by femtosecond laser, and the current blockades produced by nanoparticles of different sizes were recorded. The results show that the device can detect nano-sized particles through changes in the current level. Experimental and theoretical investigations, scanning electron microscopy, and focused ion beam analysis were performed to explain the micropore's performance. The second goal was the design and fabrication of a leak-free, easy-to-assemble, and portable polymethyl methacrylate flowcell for nanopore experiments. Here, ion current rectification was studied in our nanodevice. We demonstrated a self-assembly-based, controllable, and monitorable in situ poly(L-lysine)-g-poly(ethylene glycol) coating method under voltage-driven electrolyte flow, exploiting the electrostatic interaction between the nanopore walls and the PLL backbones. Using the designed nanopore flowcell and in situ monolayer PLL-g-PEG functionalized 20±4 nm SiN nanopores, we observed non-sticky α-1 antitrypsin (AAT) protein translocation. Additionally, we could show the enhancement of translocation events through this non-sticky nanopore and estimate the volume of the translocated protein. In this study, by comparing the AAT protein translocation results from functionalized and non-functionalized nanopores, we demonstrated a 105-times dwell-time reduction (31-0.59 ms), a 25% amplitude enhancement (0.24-0.3 nA), and a 15-times increase in event rate (1-15 events/s) after functionalization in 1×PBS at physiological pH. The measured AAT protein volume was also close to the calculated AAT hydrodynamic volume and to previous reports.
Abstract:
The thesis deals with channel coding theory applied to the upper layers of the protocol stack of a communication link and is the outcome of a four-year research activity. A specific aspect of this activity has been the continuous interaction between the natural curiosity related to academic blue-sky research and the system-oriented design deriving from the collaboration with European industry in the framework of European funded research projects. In this dissertation, classical channel coding techniques, traditionally applied at the physical layer, find their application at the upper layers, where the encoding units (symbols) are packets of bits rather than single bits; this explains why such upper layer coding techniques are usually referred to as packet layer coding. The rationale behind the adoption of packet layer techniques is that physical layer channel coding is a suitable countermeasure against small-scale fading, while it is less efficient against large-scale fading. This is mainly due to the limited time diversity inherent in the necessity of adopting a physical layer interleaver of reasonable size, so as to avoid increasing the modem complexity and the latency of all services. Packet layer techniques, thanks to the longer codeword duration (each codeword is composed of several packets of bits), provide an intrinsically longer protection against long fading events. Furthermore, being implemented at the upper layers, packet layer techniques have the indisputable advantages of simpler implementation (very close to a software implementation) and of selective applicability to different services, thus enabling a better match with the service requirements (e.g. latency constraints). Packet layer coding has been largely recognized in recent communication standards as a viable and efficient coding solution: Digital Video Broadcasting standards, like DVB-H, DVB-SH, and DVB-RCS mobile, and 3GPP standards (MBMS) employ packet coding techniques working at layers higher than the physical one. In this framework, the aim of the research work has been the study of state-of-the-art coding techniques working at the upper layer, the performance evaluation of these techniques in realistic propagation scenarios, and the design of new coding schemes for upper layer applications. After a review of the most important packet layer codes, i.e. Reed-Solomon, LDPC and Fountain codes, the thesis focuses on the performance evaluation of ideal codes (i.e. Maximum Distance Separable codes) working at the upper layer (UL). In particular, we analyze the performance of UL-FEC techniques in Land Mobile Satellite channels. We derive an analytical framework which is a useful tool for system design, allowing the performance of the upper layer decoder to be foreseen. We also analyze a system in which upper layer and physical layer codes work together, and we derive the optimal splitting of redundancy when a frequency non-selective, slowly varying fading channel is taken into account. The whole analysis is supported and validated through computer simulation. In the last part of the dissertation, we propose LDPC Convolutional Codes (LDPCCC) as a possible coding scheme for future UL-FEC applications. Since one of the main drawbacks related to the adoption of packet layer codes is the large decoding latency, we introduce a latency-constrained decoder for LDPCCC (called windowed erasure decoder), and we analyze the performance of state-of-the-art LDPCCC when our decoder is adopted. Finally, we propose a design rule which allows performance and latency to be traded off.
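As a back-of-the-envelope companion to the ideal-code analysis mentioned above (a simplified sketch assuming independent packet erasures, not the correlated Land Mobile Satellite channel studied in the thesis), an (n, k) MDS packet code recovers a source block exactly when no more than n - k packets are lost:

    # Sketch of the ideal (MDS) packet-layer code analysis: an (n, k) code
    # decodes the source block iff at most n - k packets are erased.
    # Assumes i.i.d. packet erasures, a simplification of the LMS channel.
    from math import comb

    def block_failure_prob(n: int, k: int, p: float) -> float:
        """Probability that an ideal (n, k) MDS code cannot decode the block."""
        return sum(comb(n, e) * p**e * (1 - p)**(n - e)
                   for e in range(n - k + 1, n + 1))

    # Example: 100 source packets protected with 50 repair packets,
    # 20% packet erasure probability.
    print(f"{block_failure_prob(n=150, k=100, p=0.20):.3e}")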
Abstract:
Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) the identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) the elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) the implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards, as opposed to becoming so only when the system is final, and it is more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
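As a toy illustration of the kind of layout-induced cache jitter the optimisation step targets (not the thesis tool-chain; cache geometry, function names and sizes are made up), the sketch below checks whether two frequently interleaved functions map to overlapping sets of a direct-mapped instruction cache:

    # Toy illustration (not the thesis method): detecting whether a memory
    # layout makes two functions conflict in a direct-mapped instruction cache.
    # Cache geometry, function names and sizes are hypothetical.
    LINE = 32          # cache line size in bytes
    SETS = 256         # number of cache sets (direct-mapped)

    def cache_sets(start: int, size: int) -> set[int]:
        """Set indices touched by a function placed at 'start' with byte length 'size'."""
        first, last = start // LINE, (start + size - 1) // LINE
        return {line % SETS for line in range(first, last + 1)}

    layout = {"sensor_read": (0x0000, 600), "control_law": (0x2000, 900)}
    overlap = cache_sets(*layout["sensor_read"]) & cache_sets(*layout["control_law"])
    print(f"conflicting cache sets: {len(overlap)}")  # layout optimisation would drive this to 0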
Abstract:
During the last decade peach and nectarine fruit have lost considerable market share, due to increased consumer dissatisfaction with quality at retail markets. This is mainly due to the harvesting of too-immature fruit and to high ripening heterogeneity. The main problem is that the traditionally used maturity indexes are not able to objectively detect the fruit maturity stage, nor the variability present in the field, leading to difficult post-harvest management of the product and to high fruit losses. To assess fruit ripening more precisely, other techniques and devices can be used. Recently, a new non-destructive maturity index based on vis-NIR technology, the Index of Absorbance Difference (IAD), which correlates with fruit degreening and ethylene production, was introduced, and the IAD was used to study peach and nectarine fruit ripening from the "field to the fork". In order to choose the best techniques to improve fruit quality, a detailed description of the tree structure and of fruit distribution and ripening evolution on the tree was undertaken. More in detail, an architectural model (PlantToon®) was used to describe the tree structure and the IAD was applied to characterize the maturity stage of each fruit. Their combined use provided an objective and precise evaluation of the fruit ripening variability, related to different training systems, crop load, fruit exposure and internal temperature. Based on simple field assessments of fruit maturity (as IAD) and growth, a model for the early prediction of harvest date and yield was developed and validated. The relationship between the non-destructive maturity index IAD and fruit shelf-life was also confirmed. Finally, the results obtained were validated by consumer tests: fruit sorted into different maturity classes obtained different consumer acceptance. The improved knowledge led to an innovative management of peach and nectarine fruit, from "field to market".
Abstract:
One important metaphor, borrowed from biological theories and used to investigate organizational and business strategy issues, is that of heredity; an area requiring further investigation is the extent to which the characteristics of the blueprints inherited from the parent help to explain the subsequent development of spawned ventures. In order to shed light on the tension between inherited patterns and the new trajectory that may characterize the development of spawned ventures, we propose a model aimed at investigating which blueprint elements might exert an effect on business model design choices and to what extent their persistence (or abandonment) determines subsequent business model innovation. Under the assumption that academic and corporate institutions transmit different genes to their spin-offs, we expect heterogeneity in the elements that affect business model design choices and their subsequent evolution. This is the reason why we carry out a twofold analysis in the biotech (meta)industry: under a multiple-case research design, the business model, and especially the fundamental design elements and themes that scholars have identified to decompose the construct, has been thoroughly analysed. Our purpose is to isolate the dimensions of the business model that may have been the object of legacy, and those along which an experimentation and learning process is more likely to happen, bearing in mind that the differences between academic and corporate spin-offs might not be as evident as expected, especially considering that business model innovation may occur.
Abstract:
Neurorehabilitation is a process through which individuals affected by neurological diseases aim to achieve complete recovery or to realize their optimal physical, mental and social well-being. Essential elements of effective rehabilitation are a clinical assessment by a multidisciplinary team, a targeted rehabilitation programme, and the evaluation of the achieved results through scientifically and clinically appropriate measures. The main objective of this thesis was to develop quantitative methods and tools for the treatment and motor assessment of neurological patients. Conventional rehabilitation treatments require neurological patients to perform repetitive exercises, reducing their motivation. Virtual reality and feedback can engage them in the treatment, allowing repeatability and standardization of the protocols. A tool based on augmented feedback for trunk control was developed and evaluated. Furthermore, virtual reality allows the treatment to be tailored to the patient's needs: a virtual application for gait rehabilitation was developed and tested during training with multiple sclerosis patients, assessing its feasibility and acceptance and demonstrating the effectiveness of the treatment. The quantitative assessment of patients' motor abilities is carried out using motion capture systems. Since their use in clinical practice is limited, a methodology based on inertial sensors was proposed to assess arm swing in Parkinsonian subjects. These sensors are small, accurate and flexible, but they accumulate errors during long measurements. This problem was addressed, and the results suggest that, if the sensor is placed on the foot and the accelerations are integrated starting from the mid-stance phase, the error and its consequences on the determination of the spatial parameters are limited. Finally, a validation of the Kinect for gait tracking in a virtual environment was presented. Preliminary results allow the field of use of the sensor in rehabilitation to be defined.
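As a simplified numeric sketch of the foot-sensor strategy described above (synthetic signal, not the thesis data or algorithm): the forward acceleration is doubly integrated stride by stride, restarting at each mid-stance instant with zero initial velocity so that drift cannot accumulate across strides:

    # Simplified sketch: stride length from a foot-mounted accelerometer by
    # double integration restarted at each mid-stance instant (foot flat and
    # nearly still). Signal and timing values are synthetic placeholders.
    import numpy as np

    fs = 100.0                                   # sampling frequency (Hz)
    t = np.arange(0, 1.0, 1 / fs)                # one stride of ~1 s
    acc = 4.0 * np.sin(2 * np.pi * 1.0 * t)      # synthetic forward acceleration (m/s^2)

    def stride_displacement(acc_segment, fs):
        """Double integration over one mid-stance-to-mid-stance segment,
        assuming zero initial velocity at mid-stance."""
        vel = np.cumsum(acc_segment) / fs        # first integration, v(0) = 0
        pos = np.cumsum(vel) / fs                # second integration, s(0) = 0
        return pos[-1]

    print(f"estimated stride length: {stride_displacement(acc, fs):.2f} m")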
Abstract:
The recent European Union Directive 31/2010 requires member states to reorganize their national legislative frameworks on the energy performance of buildings, so that from 1 January 2021 all new buildings have an energy balance tending to zero; the deadline is brought forward to 1 January 2019 for public buildings. The concept of nearly zero-energy buildings (nZEB) starts from the assumption of an envelope of passive-standard energy performance, so that the modest annual consumption can be offset through the production, preferably on site, of energy from renewable sources. In this perspective, reconsidering the potential of solar architecture provides concrete tools and valid methodologies to support the design of increasingly high-performing envelopes that fully exploit an inexhaustible, widespread resource within everyone's reach such as solar energy. All of this also in view of the no longer postponable need to reduce the energy load attributable to buildings, which are known to be responsible for over 40% of world consumption and 24% of greenhouse gas emissions. On these premises, the research focuses on the integration into the architectural envelope of so-called passive thermal gain systems and so-called active energy production systems, both based on solar sources. The analytical and operational work aimed to provide methodological and practical tools for architectural design, which needs a new integrated approach to achieve energy-saving objectives. Through a general survey of the concept of solar architecture and of its theoretical and terminological premises, the research produced three types of final outcome: a codification of the morphologies recurring in solar buildings, a comparative analysis of solar performance in the main building typological aggregations, and an important part of design verification in which the assumptions of the previous categories were applied.