14 results for Verification and validation technology
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Nuclear cross sections are the pillars on which the transport simulation of particles and radiation is built. Since the nuclear data library production chain is extremely complex and consists of many different steps, stringent verification and validation (V&V) procedures must be applied to it. The work presented here has focused on the development of a new Python-based software tool called JADE, whose objective is to significantly increase the automation and standardization of these procedures, in order to reduce the time between new library releases while, at the same time, increasing their quality. After an introduction to nuclear fusion (the field where the majority of the V&V effort has been concentrated so far) and to the simulation of particle and radiation transport, the motivations behind JADE's development are discussed. Subsequently, the code's general architecture and the implemented benchmarks (both experimental and computational) are described. The results from the major applications of JADE during the research years are then presented. Finally, after a discussion of the objectives reached by JADE, possible short-, mid-, and long-term developments for the project are outlined.
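Purely as an illustration of the automation pattern described above (the function names and the transport-code interface below are hypothetical, not JADE's actual API), a standardized V&V campaign reduces to a loop over benchmarks and library versions followed by an automated comparison:

```python
# Hypothetical sketch of an automated benchmark campaign; run_transport
# stands in for launching a real transport calculation and parsing tallies.
from dataclasses import dataclass

@dataclass
class Result:
    benchmark: str
    library: str
    tallies: dict[str, float]  # tally name -> computed value

def run_transport(benchmark: str, library: str) -> Result:
    # Placeholder: a real implementation would submit the input deck to a
    # transport code built against `library` and parse its output tallies.
    return Result(benchmark, library, {"tally_1": 1.00})

def compare(ref: Result, new: Result, tol: float = 0.05) -> dict[str, float]:
    """Return the tallies whose relative deviation from `ref` exceeds `tol`."""
    return {
        name: abs(new.tallies[name] - val) / abs(val)
        for name, val in ref.tallies.items()
        if abs(new.tallies[name] - val) / abs(val) > tol
    }

def campaign(benchmarks: list[str], ref_lib: str, new_lib: str):
    """Run every benchmark with both libraries and flag the deviations."""
    return {
        b: compare(run_transport(b, ref_lib), run_transport(b, new_lib))
        for b in benchmarks
    }
```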
Abstract:
This manuscript reports the overall development of a Ph.D. research project carried out during the "Mechanics and Advanced Engineering Sciences" course at the Department of Industrial Engineering of the University of Bologna. The project focuses on the development of a combustion control system for an innovative spark-ignited engine layout. In detail, the controller is designed to manage a prototype engine equipped with a port water injection system. Water injection increases combustion efficiency through its knock-mitigation effect, which allows the combustion phasing to be kept closer to the optimal position than in the traditional layout. At the beginning of the project, the effects and possible benefits of water injection were investigated through a dedicated experimental campaign. The data obtained from combustion analysis were then processed to design a control-oriented combustion model. The model identifies the correlation between spark advance, combustion phasing, and injected water mass; two different strategies are presented, both based on analytic and semi-empirical approaches and therefore compatible with real-time application. The model has been implemented in a combustion controller that manages water injection to reach the best achievable combustion efficiency while keeping knock levels under a pre-established threshold. Three different versions of the algorithm are described in detail. The controller was designed and pre-calibrated in a software-in-the-loop environment, and an experimental validation was later performed with a rapid control prototyping approach to assess the performance of the system on the real setup. Finally, to make the strategy implementable in an on-board application, an algorithm estimating combustion phasing (required by the controller) from accelerometer signals was developed during the last phase of the Ph.D. course.
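A minimal sketch of the control idea described above (gains, thresholds, units, and sign conventions are illustrative assumptions, not the thesis's calibrated strategy): on a knocking cycle the controller adds water rather than retarding the spark, so that combustion phasing can be held near its optimum:

```python
# Illustrative cycle-by-cycle strategy (not the thesis controller): trim the
# injected water mass on knock and steer spark advance toward a target
# combustion phasing. All constants are hypothetical.

KNOCK_THRESHOLD = 1.0   # pre-established knock-intensity limit (arbitrary units)
WATER_STEP = 0.5        # water mass increment per knocking cycle [mg/cycle]
SA_GAIN = 0.2           # proportional gain on the phasing error [deg/deg]

def control_step(knock_index, phasing, phasing_target, water_mass, spark_advance):
    """Return updated (water_mass, spark_advance) for the next engine cycle.

    Phasing is assumed measured in crank-angle degrees after TDC, so a value
    larger than the target means combustion is too late."""
    if knock_index > KNOCK_THRESHOLD:
        # Knock mitigation: add water instead of retarding the spark,
        # so the phasing can stay close to its optimal position.
        water_mass += WATER_STEP
    else:
        # Slowly reclaim water on knock-free cycles to avoid over-injection.
        water_mass = max(0.0, water_mass - 0.1 * WATER_STEP)
    # Proportional correction of spark advance toward the target phasing.
    spark_advance += SA_GAIN * (phasing - phasing_target)
    return water_mass, spark_advance
```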
Abstract:
This doctoral thesis focuses on ground-based measurements of stratospheric nitric acid (HNO3) concentrations obtained by means of the Ground-Based Millimeter-wave Spectrometer (GBMS). Pressure-broadened HNO3 emission spectra are analyzed using a new inversion algorithm developed as part of this thesis work, and the retrieved vertical profiles are extensively compared to satellite-based data. This comparison effort plays a key role in establishing a long-term (1991-2010), global data record of stratospheric HNO3, with an expected impact on studies concerning ozone decline and recovery. The first part of this work focuses on the development of an ad hoc version of the Optimal Estimation Method (OEM; Rodgers, 2000) for retrieving HNO3 profiles from spectra observed by the GBMS. I also compared HNO3 vertical profiles retrieved with the OEM to those obtained with the old iterative Matrix Inversion method. Results show no significant differences in retrieved profiles and error estimates, with the OEM however providing additional information needed to better characterize the retrievals. A final section of this first part is dedicated to a brief review of the application of the OEM to other trace gases observed by the GBMS, namely O3 and N2O. The second part of this study deals with the validation of HNO3 profiles obtained with the new inversion method. The first step was the validation of GBMS measurements of tropospheric opacity, a quantity necessary for the calibration of any GBMS spectrum. This was achieved through comparisons among correlative measurements of water vapor column content (or Precipitable Water Vapor, PWV), since, in the spectral region observed by the GBMS, tropospheric opacity is almost entirely due to water vapor absorption. In particular, I compared GBMS PWV measurements collected during the primary field campaign of the ECOWAR project (Bhawar et al., 2008) with simultaneous PWV observations obtained with Vaisala RS92k radiosondes, a Raman lidar, and an IR Fourier transform spectrometer. I found that GBMS PWV measurements are in good agreement with the other three data sets, exhibiting a mean difference between observations of ~9%. After this initial validation, GBMS HNO3 retrievals were compared to two sets of satellite data produced by the two NASA/JPL Microwave Limb Sounder (MLS) experiments (aboard the Upper Atmosphere Research Satellite (UARS) from 1991 to 1999, and on the Earth Observing System (EOS) Aura mission from 2004 to date). This part of my thesis falls within GOZCARDS (Global Ozone Chemistry and Related Trace gas Data Records for the Stratosphere), a multi-year project aimed at developing a long-term data record of stratospheric constituents relevant to the issues of ozone decline and expected recovery. This data record will be based mainly on satellite-derived measurements, but ground-based observations will be pivotal for assessing offsets between satellite data sets. Since the GBMS has been operated for more than 15 years, its nitric acid data record offers a unique opportunity for cross-calibrating HNO3 measurements from the two MLS experiments. I compare Aura MLS observations with GBMS HNO3 measurements obtained from the Italian Alpine station of Testa Grigia (45.9° N, 7.7° E, elev. 3500 m) during the period February 2004 - March 2007, and from Thule Air Base, Greenland (76.5° N, 68.8° W), during the polar winter of 2008/09.
A similar intercomparison is made between UARS MLS HNO3 measurements and those carried out with the GBMS at South Pole, Antarctica (90° S), during most of 1993 and 1995. I assess systematic differences between GBMS and both UARS and Aura HNO3 data sets at seven potential temperature levels. Results show that, except for measurements carried out at Thule, ground-based and satellite data sets are consistent within the errors at all potential temperature levels.
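For reference, the linearised optimal estimation retrieval from Rodgers (2000) combines the measured spectrum y with the a priori profile x_a as

\hat{x} = x_a + \left( K^{T} S_\epsilon^{-1} K + S_a^{-1} \right)^{-1} K^{T} S_\epsilon^{-1} \left( y - K x_a \right),

where K is the Jacobian (weighting function) matrix, S_\epsilon the measurement noise covariance, and S_a the a priori covariance. The averaging kernel matrix A = \left( K^{T} S_\epsilon^{-1} K + S_a^{-1} \right)^{-1} K^{T} S_\epsilon^{-1} K characterizes the vertical resolution and information content of the retrieval, which is the kind of additional diagnostic information the OEM provides over the Matrix Inversion method.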
Abstract:
Chapter 1 studies how consumers' switching costs affect the pricing and profits of firms competing in two-sided markets, such as Apple and Google in the smartphone market. When two-sided markets are dynamic rather than merely static, I show that switching costs lower the first-period price if network externalities are strong, in contrast to what has been found in one-sided markets. By contrast, switching costs soften price competition in the initial period if network externalities are weak and consumers are more patient than the platforms. Moreover, an increase in switching costs on one side decreases the first-period price on the other side. Chapter 2 examines firms' incentives to invest in local and flexible resources when demand is uncertain and correlated. I find that the market power of a monopolist providing flexible resources distorts investment incentives, while competition mitigates this distortion. The extent of the improvement depends critically on demand correlation and on the cost of capacity: under the social optimum and under monopoly, the relationship between investment and correlation is positive if the flexible resource is cheap and negative if it is costly; under duopoly, the relationship is positive. The analysis also sheds light on policy discussions in markets such as cloud computing. Chapter 3 develops a theory of sequential investments in cybersecurity, in which the regulator can use safety standards and liability rules to increase security. I show that the joint use of an optimal standard and a full liability rule leads to underinvestment ex ante and overinvestment ex post, whereas switching to a partial liability rule can correct both inefficiencies. This suggests that, to improve security, the regulator should encourage not only firms but also consumers to invest in security.
Abstract:
Heavy Liquid Metal Cooled Reactors are among the concepts fostered by the Generation IV International Forum (GIF) as potentially able to comply with stringent safety, economic, sustainability, proliferation-resistance, and physical-protection requirements. The increasing interest in these innovative systems has highlighted the lack of tools specifically dedicated to their core design stage. The present PhD thesis summarizes a three-year effort to partially close this gap by rationally defining the role of codes in core design, by creating a development methodology for core design-oriented codes (DOCs), and by applying it to the design areas where it is most needed, namely fuel assembly thermal-hydraulics and fuel pin thermo-mechanics. Regarding the former, following the established methodology, the sub-channel code ANTEO+ has been conceived. Initially restricted to the forced convection regime and subsequently extended to the mixed convection one, ANTEO+ has been shown, via a thorough validation campaign, to be a reliable tool for design applications. Concerning fuel pin thermo-mechanics, the intent to include safety-related considerations from the outset of the pin dimensioning process has given birth to the safety-informed DOC TEMIDE. The proposed DOC development methodology has also been applied to TEMIDE; given the complex interdependence patterns among the numerous phenomena involved in an irradiated fuel pin, a sensitivity analysis has been performed over the anticipated application domain to optimize the code's final structure. The development methodology has also been tested in the verification and validation phases; the latter, owing to the scarcity of experiments truly representative of TEMIDE's application domain, has only been a preliminary attempt to test TEMIDE's ability to fulfill the DOC requirements upon which it was built. In general, the capability of the proposed development methodology for DOCs to deliver tools that help the core designer in preliminarily setting the system configuration has been proven.
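As a schematic illustration of the kind of balance a sub-channel code such as ANTEO+ solves (a textbook single-phase form, not necessarily ANTEO+'s actual formulation), the axial energy equation for sub-channel i can be written as

\dot{m}_i \frac{d h_i}{d z} = q'_i + \sum_{j} w'_{ij} \left( h_j - h_i \right),

where \dot{m}_i is the sub-channel mass flow rate, h_i its enthalpy, q'_i the linear heat rate received from the facing pin surfaces, and w'_{ij} the turbulent mixing mass flow rate per unit length exchanged with the adjacent sub-channel j; diversion crossflow and inter-channel conduction, which are relevant for liquid metals, are neglected in this sketch.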
Abstract:
The purpose of this study was to implement research methodologies and assess the effectiveness and impact of management tools to promote best practices for the long-term conservation of the endangered African wild dog (Lycaon pictus). Different methods were included in the project framework to investigate and expand the applicability of these methodologies to free-ranging African wild dogs in the southern African region: ethology, behavioural endocrinology, and ecology field methodologies were tested and implemented. Additionally, research was performed to test the effectiveness and implications of a contraceptive implant (Suprelorin) as a management tool for a subpopulation of the species hosted in fenced areas, with particular attention to the social structure and survival of treated packs. This research provides useful tools and advances the applicability of these methods for field studies, standardizing and improving research instruments in the fields of conservation biology and behavioural endocrinology. The results reported here provide effective methodologies to expand the applicability of non-invasive endocrine assessment to previously inaccessible settings, and a validation of sampling methods for faecal hormone analysis. The final aim was to fill a knowledge gap on the behaviour of the species, to provide common ground for future researchers to apply non-invasive methods to research on this species, and to test the effectiveness of contraception on a managed metapopulation.
Abstract:
Background: Echocardiography is the cornerstone of the evaluation of cardiac masses and provides accurate characterization. Nevertheless, the diagnosis of cardiac masses (CM) remains challenging and, to date, no diagnostic algorithm has been validated. Purpose: The aim of our study was to evaluate the diagnostic accuracy of echocardiography, to identify the echocardiographic predictors of malignancy, and to develop and then validate a multiparametric echocardiographic score that could be used to estimate the likelihood of the histological nature of a CM. Materials and methods: The final sample consisted of 273 consecutive patients who had a 2D-echocardiographic evaluation and a histological diagnosis. Logistic regression was performed to evaluate the ability of echocardiographic findings to discriminate benign from malignant masses; a scoring system was then developed and validated in a separate test cohort. Results: Of the 322 patients initially included in the Bologna Cardiac Masses Registry, 13 with a poor acoustic window, 27 without a histological examination, and 9 with extra-cardiac masses were excluded. In the remaining 273 patients, classical 2D echocardiography identified 249 masses, with a diagnostic accuracy of 88%. A weighted score [the Diagnostic Echocardiographic Mass (DEM) Score], ranging from 0 to 9, was obtained from six variables, including infiltration, polylobate mass, and moderate-to-severe pericardial effusion. The AUC for the score was 0.965 (95% CI 0.938-0.993). In a logistic regression analysis using the DEM score as a predictor, the likelihood of a malignant CM increased more than fourfold for each 1-unit increase in the score (OR = 4.468; 95% CI 2.733-7.304). A score < 3 denoted a high probability of a benign diagnosis, while a score ≥ 5 corresponded to a higher risk of malignancy. Conclusion: 2D echocardiography provides high diagnostic accuracy in identifying cardiac masses, and our multiparametric echocardiographic score could be useful to predict the histological nature of cardiac masses.
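As an illustration of how the reported odds ratio translates into probabilities (the slope below follows from the published OR = 4.468 per score unit; the intercept is a hypothetical placeholder, since the abstract does not report it):

```python
# Illustrative logistic model built around the published odds ratio.
import math

BETA_1 = math.log(4.468)   # log-odds increase per 1-unit DEM score (from the OR)
BETA_0 = -6.0              # hypothetical intercept, NOT reported in the abstract

def malignancy_probability(dem_score: int) -> float:
    """Logistic model: P(malignant | DEM score)."""
    logit = BETA_0 + BETA_1 * dem_score
    return 1.0 / (1.0 + math.exp(-logit))

# The relative odds between two scores depend only on the published OR,
# not on the unknown intercept:
odds_ratio_per_2_units = math.exp(2 * BETA_1)  # = 4.468**2 ≈ 19.96
```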
Abstract:
Although there is broad agreement on the need to transition to a fairer agro-food system, the potential of consumers in shaping a fair food system has often been overlooked. There is no unique definition of the concept of fairness from the consumer's perspective. In addition, no scales in the academic literature address fairness in its broad sense: the existing scales focus on specific and limited aspects that provide only a partial picture of the concept. The lack of a true and trustworthy measurement of the notion has been a significant barrier to understanding fairness in agro-food systems from the individual-differences perspective, which helps explain why some individuals are more likely than others to emphasize the extent to which agro-food chains are fair. An individual consumer's perception of an ethical problem is followed by the perception of various alternatives that might lead to a solution. The current research therefore intends to make two significant contributions by resolving these constraints. First, it advances the literature by providing a new viewpoint from which to understand fairness in the agro-food chain: the research offers a comprehensive conceptualisation of fairness that embraces its different aspects and describes the concept in all its facets and nuances. Second, it provides a valid, reliable, and invariant measurement of individual disposition toward fairness in agro-food chains by rooting the items in the theoretical underpinnings of the fairness literature. Overall, this research provides a comprehensive suite of approaches and tools to enhance the resilience, integrity, and sustainability of agro-food chains.
Abstract:
Cost, performance, and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industries to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of migrating to cache-equipped processors. Caches are perceived as an additional source of complexity with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches; conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) the identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) the elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) the implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of these constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards (as opposed to becoming so only when the system is final) and, by construction, more easily amenable to advanced timing analysis, regardless of the system's scale and complexity.
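A toy model of the layout problem addressed in point (iii) above (the cache geometry, addresses, and function names are illustrative assumptions; this is not the thesis's tooling): with a direct-mapped instruction cache, two functions can evict each other, and hence introduce jitter, exactly when their addresses map to the same cache sets:

```python
# Toy model: map code blocks onto the sets of a direct-mapped instruction
# cache to expose layout-induced conflicts.
CACHE_SIZE = 16 * 1024   # bytes, hypothetical
LINE_SIZE = 32           # bytes
N_SETS = CACHE_SIZE // LINE_SIZE

def sets_occupied(base_addr: int, size: int) -> set[int]:
    """Cache sets touched by a code block placed at base_addr."""
    first = base_addr // LINE_SIZE
    last = (base_addr + size - 1) // LINE_SIZE
    return {line % N_SETS for line in range(first, last + 1)}

def conflicts(layout: dict[str, tuple[int, int]]) -> dict[tuple[str, str], int]:
    """Number of shared cache sets for each pair of functions.

    layout maps function name -> (base address, size in bytes). Functions
    sharing sets can evict each other, producing execution-time jitter."""
    names = list(layout)
    out = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = sets_occupied(*layout[a]) & sets_occupied(*layout[b])
            if shared:
                out[(a, b)] = len(shared)
    return out

# Example: two hot functions that alias in cache under a naive layout.
layout = {"isr_handler": (0x0000, 4096), "ctrl_loop": (0x4000, 4096)}
print(conflicts(layout))  # both map onto the same 128 sets -> conflict
```

A layout optimiser of the kind described above can use such a conflict map to relocate functions so that frequently interleaved code no longer aliases in cache.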
Abstract:
Ancient pavements are composed of a variety of preparatory or foundation layers constituting the substrate, and of a layer of tesserae, pebbles, or marble slabs forming the surface of the floor. In other cases, the surface consists of a beaten and polished mortar layer. The term mosaic is associated with the presence of tesserae or pebbles, while the more general term pavement is used in all cases. As past and modern excavations of ancient pavements have demonstrated, not all pavements display the substrate stratigraphy described in the ancient literary sources. In fact, the number and thickness of the preparatory layers, as well as the nature and properties of their constituent materials, often vary among pavements located in different sites, in different buildings within the same site, or even within the same building. For this reason, an investigation that takes into account the whole structure of the pavement is important when studying the archaeological context of the site where it is placed, when designing materials to be used for its maintenance and restoration, when documenting it, and when presenting it to the public. Five case studies, represented by archaeological sites containing floor mosaics and other kinds of pavements dated to the Hellenistic and Roman periods, have been investigated by means of in situ and laboratory analyses. The results indicate that the characteristics of the studied pavements, namely the number and thickness of the preparatory layers and the properties of the mortars constituting them, vary according to the ancient use of the room where the pavements are placed and to the type of surface upon which they were built. The study contributes to the understanding of the function and technology of the pavements' substrate and to the characterization of its constituent materials. Furthermore, the research underlines the importance of investigating the whole structure of the pavement, including the foundation surface, when interpreting the archaeological context where it is located. A series of practical applications of the results of the research is suggested, concerning the design of repair mortars for pavements, the documentation of ancient pavements in conservation practice, and their presentation to the public in situ and in museums.
Abstract:
In recent years, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency, and low costs of design, realization, and maintenance. This growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of the mechanical and electronic technology typical of mechatronics is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottled water or soda, or buy boxed products such as food or cigarettes. A further indication of the domain's complexity is that the consortium of machine producers has estimated that around 350 types of manufacturing machines exist. A large number of manufacturing machine industries are present in Italy, notably packaging machine manufacturers; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS, from a functional and behavioural point of view, is still to be attributed to the design choices made in the definition of the mechanical structure and of the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activities inherent in the automation of the machine cycles, the supervisory system is called to perform other main functions: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support to machine maintenance operations. The facilities that designers can directly find on the market, in terms of software component libraries, provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers model and structure their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives, and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured" way. No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control: in logic control, discrete-event dynamics replace time-driven dynamics, hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability, and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been receiving this approach, as testified by the IEC 61131-3 and IEC 61499 standards, which have been considered in commercial products only recently. On the other hand, many contributions in the scientific and technical literature have already been proposed to establish a suitable modelling framework for industrial automation. Recent years have also seen considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems; as far as logic control design is concerned, Model-Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability, and safety: the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery, and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, complex systems such as AMS contain, together with reliable mechanical elements, an increasing number of electronic devices, which are more vulnerable by their own nature. The problem of diagnosis and fault isolation in a generic dynamical system consists in the design of a processing unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis, and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture that help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, to achieve better reusability and modularity of the control logic. Chapter 5 presents a new approach, based on Discrete Event Systems, to the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems that should help the reader understand some crucial points of Chapter 5; Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4, and 5; Appendix C reports some component models used in Chapter 5 for formal verification.
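To give a flavour of the Discrete Event Systems machinery invoked above (a toy example, unrelated to the actual models of Appendix C), a plant can be modelled as a finite automaton, and a basic verification question, such as whether a fault state is reachable and therefore requires recovery logic, can be answered by state-space exploration:

```python
# Minimal Discrete Event System sketch: a plant modelled as a finite
# automaton, with a reachability check as a toy formal-verification query.
from collections import deque

# state -> {event: next_state}; a tiny actuator with a fault transition
TRANSITIONS = {
    "idle":   {"start": "moving"},
    "moving": {"stop": "idle", "fault": "failed"},
    "failed": {"reset": "idle"},
}

def reachable(initial: str) -> set[str]:
    """Breadth-first exploration of the reachable state space."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in TRANSITIONS.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Verification query: can the plant reach the 'failed' state, i.e. must the
# supervisor include a diagnosis/recovery path for it?
print("failed" in reachable("idle"))  # True -> recovery logic is required
```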
Abstract:
Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace, and today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors: memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely constrain the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and to validate design choices. In this thesis we focus on these aspects: a flexible and accurate virtual platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations focusing on the instruction caching architecture and on a hybrid HW/SW synchronization mechanism. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the ultra-low-power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation while mitigating thermal bottlenecks. At the same time, the physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in the NTC regime: memory operation, in particular, becomes unreliable and can compromise system correctness. In the present work, a novel hybrid memory architecture is devised to overcome these reliability issues and, at the same time, to improve energy efficiency by means of aggressive voltage scaling whenever workload requirements allow it. Variability is another great drawback of near-threshold operation: the greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We therefore introduce a variation-tolerant extension of the baseline many-core architecture: by means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
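A back-of-the-envelope illustration of the near-threshold energy claim (with purely indicative numbers, not taken from the thesis): since the dynamic switching energy per operation scales as

E_{dyn} \approx \alpha \, C_{eff} \, V_{dd}^{2},

lowering the supply voltage from a nominal super-threshold value of about 1.1 V to a near-threshold value of about 0.4 V reduces dynamic energy by roughly (1.1/0.4)^2 \approx 7.6\times; the remaining gains, as well as the leakage and operating-frequency penalties that partially offset them, depend on the technology node and on the workload.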
Abstract:
This thesis focuses on the modelling and optimization of the dry grinding process for automotive gear production. An FEM model was implemented with the aim of predicting process temperatures and preventing grinding thermal defects on the material surface. In particular, the model was conceived to facilitate the choice of the grinding parameters during the design and execution of the dry-hard finishing process developed and patented by the company Samputensili Machine Tools (EMAG Group) for automotive gears. The proposed model allows analysing the influence of the technological parameters, including the grinding wheel specifications. Automotive gears finished by the dry-hard finishing process are expected to reach the same quality target as gears finished through the conventional wet grinding process, with the advantage of reducing production costs and environmental pollution. However, grinding involves very high values of specific pressure and of heat absorbed by the material; removing the lubricant therefore increases the risk of thermal defects. An incorrect choice of the process parameter set could cause grinding burns, which inevitably affect the mechanical performance of the ground component. A modelling phase of the process can therefore enhance the mechanical characteristics of the components and avoid waste during production. A hierarchical FEM model was implemented to predict dry grinding temperatures, built as the interconnection of a microscopic and a macroscopic approach: a microscopic single-grain grinding model was linked to a macroscopic thermal model to predict the dry grinding process temperatures and thus to forecast the thermal cycle induced by the process parameters and by the choice of the grinding wheel specification. Good agreement between the model and the experiments was achieved, making dry-hard finishing an efficient and reliable technology to implement in the automotive gear industry.
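In schematic terms (standard grinding-thermal-model relations, not necessarily the exact formulation used in the thesis), the macroscopic thermal model's key inputs are the geometric contact length and the heat flux partitioned to the workpiece,

l_c \approx \sqrt{a_e \, d_{eq}}, \qquad q_w = \frac{R_w \, P}{b \, l_c},

where a_e is the depth of cut, d_{eq} the equivalent wheel diameter, P the grinding power, b the grinding width, and R_w the fraction of the total heat entering the workpiece; removing the coolant raises the effective heat partition to the workpiece and hence the surface temperature, which is why an accurate thermal prediction is critical in dry-hard finishing.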
Abstract:
Thanks to the development and combination of molecular markers for the genetic traceability of sunflower varieties and of a gas chromatographic method for the determination of the fatty acid (FA) composition of sunflower oil, it was possible to implement an experimental method for verifying both the traceability and the variety of the organic sunflower marketed by Agricola Grains S.p.A. The experimental activity focused on two objectives: the implementation of molecular markers for the routine control of raw material deliveries for oil extraction, and the improvement and validation of a gas chromatographic method for the determination of the FA composition of sunflower oil. With regard to variety verification and traceability, the marker systems evaluated were twelve SSR markers arranged in two multiplex sets, and SCAR markers for the verification of cytoplasmic male sterility (Pet1) and fertility. In addition, two objectives were pursued in order to enable routine application in the industrial field: the development of a suitable protocol for DNA extraction from single seeds, and the implementation of a semi-automatic capillary electrophoresis system for the analysis of marker fragments. A new GC/FID analytical method for the determination of fatty acid methyl esters (FAME) in sunflower achenes was developed and validated to improve the quality and efficiency of the analytical workflow in the control of raw and refined materials entering the Agricola Grains S.p.A. production chain. The analytical performance parameters validated for the newly implemented method are: linearity of response, limit of quantification, specificity, precision, intra-laboratory precision, robustness, and bias. These parameters are used to compare the newly developed method with the reference methods (Commission Regulation No. 2568/91 and Commission Implementing Regulation No. 2015/1833). Using the combination of the analytical methods mentioned above, the documentary traceability of the product can be confirmed experimentally, providing relevant information for subsequent marketing.
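As an illustration of two of the validation parameters listed above (the calibration data are made up, and the LOQ formula is the common ICH-style definition based on the residual standard deviation, which may differ from the procedure prescribed by the regulations):

```python
# Illustrative computation of linearity of response (R^2) and limit of
# quantification (LOQ = 10*sigma/slope) for a GC/FID calibration line.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # standard concentrations
area = np.array([51.0, 99.0, 204.0, 398.0, 801.0])  # FID peak areas (dummy)

slope, intercept = np.polyfit(conc, area, 1)         # least-squares line
pred = slope * conc + intercept
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot                    # linearity of response

residual_sd = np.sqrt(ss_res / (len(conc) - 2))      # sigma of the regression
loq = 10.0 * residual_sd / slope                     # limit of quantification

print(f"R^2 = {r_squared:.4f}, LOQ = {loq:.3f} (concentration units)")
```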