16 results for Electronic mail systems - Standards

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

The field of "computer security" is often considered something in between Art and Science. This is partly due to the lack of widely agreed and standardized methodologies for evaluating the degree of security of a system. This dissertation contributes to this area by investigating the security testing strategies most commonly applied today and by proposing an enhanced methodology that can be applied to different threat scenarios with the same degree of effectiveness. Security testing methodologies are the first step towards standardized security evaluation processes and towards understanding how security threats evolve over time. This dissertation analyzes some of the most widely used methodologies, identifying differences and commonalities useful for comparing them and assessing their quality. It then proposes a new, enhanced methodology built by retaining the best of every analyzed methodology. The designed methodology is tested on different systems with very effective results, which is the main evidence that it can be applied in practical cases. Most of the dissertation discusses and demonstrates how the presented testing methodology can be applied to such different systems, and even used to evade security measures by inverting its goals and scope. Real cases are rarely found in methodology documents; by contrast, this dissertation presents real and practical cases, offering technical details about how to apply the methodology. Electronic voting systems are the first field test considered, with Pvote and Scantegrity as the two tested electronic voting systems. The usability and effectiveness of the designed methodology for electronic voting systems is demonstrated through this field-case analysis. Furthermore, reputation systems and antivirus engines have also been analyzed, with similar results. The dissertation concludes by presenting general guidelines for building a coordination-based approach to electronic voting systems that improves security without decreasing system modularity.
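
A minimal illustration of how such a comparison could be organized (the methodology names, criteria and weights below are hypothetical, not the ones analyzed in the dissertation) is a weighted scoring matrix:

    # Hypothetical weighted scoring of security testing methodologies.
    # Criteria, weights and scores are illustrative only.
    CRITERIA = {"coverage": 0.4, "repeatability": 0.3, "documentation": 0.3}

    scores = {
        "Methodology A": {"coverage": 4, "repeatability": 5, "documentation": 3},
        "Methodology B": {"coverage": 5, "repeatability": 3, "documentation": 4},
        "Methodology C": {"coverage": 3, "repeatability": 4, "documentation": 4},
    }

    def weighted_score(s):
        return sum(CRITERIA[c] * v for c, v in s.items())

    # Rank methodologies by overall quality score
    for name, s in sorted(scores.items(), key=lambda kv: -weighted_score(kv[1])):
        print(f"{name}: {weighted_score(s):.1f}")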

Relevance:

30.00%

Publisher:

Abstract:

The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest towards power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines that have been issued. The quality of the voltage provided by utilities, and influenced by customers at the various points of a network, emerged as an issue only in recent years, in particular as a consequence of energy market liberalization. Traditionally, the quality of the delivered energy has been associated mostly with its continuity, so reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the "quality indicators" most commonly perceived by customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can also be used to improve system reliability. Given the vast range of power quality degrading phenomena that can occur in distribution networks, the study focuses on electromagnetic transients affecting line voltages. The outcome of this study has been the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system, and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art concerning methods to detect and locate faults in distribution networks is then presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way the performance of the location procedure is tested, first under ideal and then under realistic operating conditions. Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numeric procedure. The last chapter describes a device, designed and realized during the PhD activity, intended to replace the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study was carried out to provide an alternative to the transducer in use that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
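
A minimal sketch of the double-ended travelling-wave principle on which such fault location typically relies (station names, line length and propagation speed below are illustrative assumptions, not data from the thesis):

    # Double-ended travelling-wave fault location: two time-synchronized
    # remote stations timestamp the arrival of the same transient.
    def fault_distance(t_a, t_b, line_length_km, v_km_per_s=2.9e5):
        """Distance of the fault from station A [km].

        t_a, t_b     -- transient arrival times at stations A and B [s]
        v_km_per_s   -- wave propagation speed (close to light speed)
        """
        return (line_length_km + v_km_per_s * (t_a - t_b)) / 2.0

    # Example: 30 km line, transient observed 20 us earlier at A than at B
    d = fault_distance(t_a=0.0, t_b=20e-6, line_length_km=30.0)
    print(f"estimated fault position: {d:.2f} km from station A")  # 12.10 km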

Relevance:

30.00%

Publisher:

Abstract:

Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing, and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a central point of the scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and still belong to the first wave of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating methodologies: a meta-model specifies the concepts, rules and relationships used to define a methodology. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated, and those responsible for making all this happen. In turn, specifying the work products to be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to model in a natural way all the aspects related to multi-agent systems. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is at least clear that all non-agent elements of a multi-agent system are typically considered part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions - entities of the environment encapsulating some functions - and topology abstractions - entities of the environment that represent its (either logical or physical) spatial structure. In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for managing the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which designers can use to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology, in which environment abstractions and layering principles are exploited for engineering multi-agent systems.
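
As an informal illustration (this is not SODA's actual notation), environment abstractions, topology abstractions and layers can be rendered as first-class design entities alongside agents:

    # Illustrative sketch of the design entities discussed above; names
    # and the example system are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        goals: list = field(default_factory=list)

    @dataclass
    class EnvironmentAbstraction:   # environment entity encapsulating a function
        name: str
        function: str

    @dataclass
    class TopologyAbstraction:      # logical or physical spatial structure
        name: str
        contains: list = field(default_factory=list)

    @dataclass
    class Layer:                    # one level of abstraction over the system
        level: int
        entities: list = field(default_factory=list)

    room = TopologyAbstraction("room-1", contains=["whiteboard"])
    board = EnvironmentAbstraction("whiteboard", "shared tuple space")
    scribe = Agent("scribe", goals=["record decisions"])
    system = [Layer(0, [room]), Layer(1, [board, scribe])]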

Relevance:

30.00%

Publisher:

Abstract:

Synchronization is a key issue in any communication system, but it becomes fundamental in navigation systems, which are entirely based on estimating the time delay of the signals coming from the satellites. Thus, even if synchronization has been a well-known topic for many years, the introduction of new modulations and new physical layer techniques in modern standards makes traditional synchronization strategies completely ineffective. For this reason, the design of advanced and innovative synchronization techniques for modern communication systems, like DVB-SH, DVB-T2, DVB-RCS, WiMAX and LTE, and for modern navigation systems, like Galileo, has been the topic of this activity. Recent years have seen the consolidation of two different trends: the introduction of Orthogonal Frequency Division Multiplexing (OFDM) in communication systems, and of the Binary Offset Carrier (BOC) modulation in modern Global Navigation Satellite Systems (GNSS). Thus, particular attention has been given to the investigation of synchronization algorithms in these areas.
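
A minimal sketch of one classical pre-FFT technique in this family, cyclic-prefix correlation for joint symbol timing and fractional frequency offset estimation (parameters and signals are illustrative, not the algorithms developed during the activity):

    import numpy as np

    # OFDM symbol with cyclic prefix, received with an unknown delay.
    N, CP = 64, 16                                    # FFT size, CP length
    sym = (np.random.randn(N) + 1j * np.random.randn(N)) / np.sqrt(2)
    tx = np.concatenate([sym[-CP:], sym])             # prepend cyclic prefix
    rx = np.concatenate([np.zeros(30, complex), tx])  # 30-sample delay

    # Correlate each candidate window with its copy N samples later.
    metric = [abs(np.vdot(rx[d:d + CP], rx[d + N:d + N + CP]))
              for d in range(len(rx) - N - CP + 1)]
    d_hat = int(np.argmax(metric))                    # timing estimate (30)
    phase = np.angle(np.vdot(rx[d_hat:d_hat + CP], rx[d_hat + N:d_hat + N + CP]))
    cfo_hat = phase / (2 * np.pi)                     # CFO in subcarrier spacings
    print(d_hat, cfo_hat)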

Relevance:

30.00%

Publisher:

Abstract:

In recent years, an ever increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products/services generated, productivity, efficiency, and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy boxed products such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machines. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS, from a functional and behavioural point of view, is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the activity inherent to the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; directing the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support to machine maintenance operations. The kinds of facilities that designers can find directly on the market, in terms of software component libraries, provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured" way. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has only lately been adopting this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years a considerable growth has been observed in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should deal not only with the nominal behaviour, but also with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, in complex systems fault occurrences increase. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices, which are more vulnerable by their own nature, are present alongside reliable mechanical elements. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that in its final implementation it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control, fault diagnosis and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software engineering paradigm applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. Chapter 5 presents a new approach, based on Discrete Event Systems, to the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, conclusive remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
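
As an informal illustration of the discrete-event view (the states and events below are hypothetical and do not reproduce the Generalized Actuator or Generalized Device of the thesis), a logic control component can be modelled as an automaton whose admissible behaviours are checked by construction:

    # A tiny discrete-event automaton for a logic control component;
    # illegal event/state pairs are rejected, so properties such as
    # "no 'start' while faulted" hold by construction.
    TRANS = {
        ("idle",    "start"): "running",
        ("running", "stop"):  "idle",
        ("running", "fault"): "faulted",
        ("faulted", "reset"): "idle",
    }

    def run(events, state="idle"):
        for e in events:
            if (state, e) not in TRANS:
                raise ValueError(f"event '{e}' not allowed in state '{state}'")
            state = TRANS[(state, e)]
        return state

    print(run(["start", "fault", "reset", "start", "stop"]))  # -> idle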

Relevance:

30.00%

Publisher:

Abstract:

My PhD research work focused on the investigation of Electrochemically Generated Luminescence (ECL) in several different homogeneous and heterogeneous systems. ECL is a redox-induced emission: a process whereby species generated at electrodes undergo a high-energy electron transfer reaction to form excited states that emit light. Since its first applications, the ECL technique has become a very powerful analytical tool and has been widely used in biosensor transduction. ECL presents intrinsically low noise and high sensitivity; moreover, the electrochemical generation of the excited state avoids scattering from a light source: for all these characteristics, it is an elective technique for ultrasensitive immunoassay detection. The majority of ECL systems involve species in solution, where the emission occurs in the diffusion layer near the electrode surface. Over the past few years, however, intense research has focused on ECL generated from species constrained on the electrode surface. The aim of my work is to study the behavior of ECL-generating molecular systems upon the progressive increase of their spatial constraints, that is, passing from isolated species in solution, to fluorophores embedded within a polymeric film and, finally, to patterned surfaces bearing "one-dimensional" emitting spots. In order to describe these trends, I use different "dimensions" to indicate the different classes of compounds. My thesis was mostly developed in the electrochemistry group of Bologna under the supervision of Prof. Francesco Paolucci and Dr. Massimo Marcaccio. With their help, and thanks to their long experience in the molecular and supramolecular ECL fields and in surface investigations using scanning probe microscopy techniques, I was able to obtain the results described herein. Moreover, during my research work I established a new collaboration with the Nanobiotechnology group of Prof. Robert Forster (Dublin City University), where I spent a research period. Prof. Forster has broad experience in the biomedical field; in particular, he focuses his research on film-surface biosensors based on ECL transduction. This thesis can be divided into three sections, as follows: (i) the first section describes homogeneous molecular and supramolecular ECL-active systems, either organic or inorganic species (i.e., corannulene, dendrimers and an iridium metal complex). The driving force for this kind of study is the search for new luminophores that display, on the one hand, higher ECL efficiencies and, on the other, simple mechanisms for modulating the intensity and energy of their emission, in view of their effective use in bioconjugation applications. (ii) The second section reports the investigation of some heterogeneous ECL systems. Redox polymers comprising inorganic luminophores are described. In this context, a new conducting platform based on carbon nanotubes was developed, aimed at accomplishing both the binding of a biological molecule and its electronic wiring to the electrode. This is an essential step for the application of ECL in the field of biosensors. (iii) In the third section, different patterns were produced on the electrode surface using Scanning Electrochemical Microscopy. I developed a new method for locally functionalizing an inert surface and reacting this surface with a luminescent probe. In this way, I successfully obtained a locally ECL-active platform for multi-array applications.
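
A minimal worked example of the classical energy-sufficiency check for annihilation ECL (the potentials and excited-state energy below are approximate textbook values for the Ru(bpy)3(2+) system, used for illustration only):

    # Annihilation ECL is "energy-sufficient" if the enthalpy released by
    # the ion-annihilation electron transfer exceeds the emission energy.
    e_ox, e_red = 1.2, -1.4     # oxidation / reduction potentials [V]
    t_delta_s = 0.16            # entropic correction term [eV]
    e_emission = 2.1            # excited-state energy from emission [eV]

    delta_h = (e_ox - e_red) - t_delta_s      # available enthalpy [eV]
    verdict = "energy-sufficient" if delta_h > e_emission else "energy-deficient"
    print(f"{verdict}: {delta_h:.2f} eV available vs {e_emission:.2f} eV needed")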

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the design of advanced OFDM systems. Both waveform and receiver design have been treated. The main scope of the thesis is to study, create and propose ideas and novel design solutions able to cope with the weaknesses and crucial aspects of modern OFDM systems. Starting from the transmitter side, the problem represented by low resilience to non-linear distortion has been assessed, and a novel technique has been proposed that considerably reduces the Peak-to-Average Power Ratio (PAPR), yielding a quasi-constant signal envelope in the time domain (PAPR close to 1 dB). The proposed technique, named Rotation Invariant Subcarrier Mapping (RISM), is a novel scheme for subcarrier data mapping in which the symbols belonging to the modulation alphabet are not anchored, but maintain some degrees of freedom. In other words, a bit tuple is not mapped onto a single point; rather, it is mapped onto a geometrical locus which is totally or partially rotation invariant. The final positions of the transmitted complex symbols are chosen by an iterative optimization process that minimizes the PAPR of the resulting OFDM symbol. Numerical results confirm that RISM makes OFDM usable even in severely non-linear channels. Another well-known problem that has been tackled is the vulnerability to synchronization errors. Indeed, in OFDM systems an accurate recovery of carrier frequency and symbol timing is crucial for the proper demodulation of the received packets. In general, timing and frequency synchronization is performed in two separate phases, called PRE-FFT and POST-FFT synchronization. Regarding the PRE-FFT phase, a novel joint symbol timing and carrier frequency synchronization algorithm has been presented. The proposed algorithm is characterized by very low hardware complexity and, at the same time, guarantees very good performance in both AWGN and multipath channels. Regarding the POST-FFT phase, a novel approach to both pilot structure and receiver design has been presented. In particular, a novel pilot pattern has been introduced in order to minimize the occurrence of overlaps between two pattern-shifted replicas. This makes it possible to replace conventional pilots with nulls in the frequency domain, introducing the so-called Silent Pilots. As a result, the optimal receiver turns out to be very robust against severe Rayleigh fading multipath and is characterized by low complexity. The performance of this approach has been evaluated analytically and numerically. Comparing the proposed approach with state-of-the-art alternatives, in both AWGN and multipath fading channels, considerable performance improvements have been obtained. The crucial problem of channel estimation has been thoroughly investigated, with particular emphasis on the decimation of the Channel Impulse Response (CIR) through the selection of the Most Significant Samples (MSSs). In this context our contribution is twofold: on the theoretical side, we derived lower bounds on the estimation mean-square error (MSE) performance for any MSS selection strategy; on the receiver design side, we proposed novel MSS selection strategies which have been shown to approach these MSE lower bounds and to outperform state-of-the-art alternatives. Finally, the possibility of using Single Carrier Frequency Division Multiple Access (SC-FDMA) in the broadband satellite return channel has been assessed. Notably, SC-FDMA is able to improve the physical layer spectral efficiency with respect to the single carrier systems used so far in the Return Channel Satellite (RCS) standards. However, it requires strict synchronization and is also sensitive to the phase noise of local radio frequency oscillators. For this reason, an effective pilot tone arrangement within the SC-FDMA frame and a novel Joint Multi-User (JMU) estimation method for SC-FDMA have been proposed. As shown by numerical results, the proposed scheme manages to satisfy strict synchronization requirements and to guarantee proper demodulation of the received signal.
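
A minimal sketch of the PAPR figure of merit that RISM minimizes (plain QPSK subcarrier mapping is used here for illustration; the RISM optimization itself is not reproduced):

    import numpy as np

    # PAPR of a plain QPSK-mapped OFDM symbol, typically around 10 dB;
    # RISM is reported to bring this close to 1 dB.
    N = 256
    bits = np.random.randint(0, 2, (N, 2))
    qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
    x = np.fft.ifft(qpsk) * np.sqrt(N)        # time-domain OFDM symbol

    papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
    print(f"PAPR = {papr_db:.2f} dB")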

Relevance:

30.00%

Publisher:

Abstract:

In recent years, in parallel with the expansion of the organic sector, there has been growing interest in alternative models for guaranteeing the integrity and genuineness of organic products. Groups of small farmers around the world have begun to develop alternative approaches to address the problems connected with third-party certification. These practices are known as Participatory Guarantee Systems (PGS). Such models: (i) are based on IFOAM's organic certification standards, (ii) concern the producers of a rural community as a whole, (iii) involve the inclusion of a wide variety of actors, and (iv) aim to minimize bureaucracy and costs by simplifying verification procedures and incorporating an element of environmental and social education for both producers and consumers. The objectives of this research work are: to describe how participatory guarantee systems work; to indicate the advantages of their adoption in developing and developed countries; to illustrate the case of the Rede Ecovida de Agroecologia (Brazil); and to offer a reflection concerning consumers and their trust in the PGS model. The theoretical framework is based on Convention Theory. On the basis of this framework, a consumer questionnaire was constructed with the aim of testing the appropriateness of the theoretical hypotheses. The final results concern the estimation of consumers' current level of knowledge, trust and willingness to buy PGS products in the areas considered. On the basis of this research it will be possible to adapt and export the empirical model to other countries with different economies, in order to understand the potential field of application of participatory guarantee systems.

Relevance:

30.00%

Publisher:

Abstract:

Power electronic converters are extensively adopted for the solution of timely issues, such as power quality improvement in industrial plants, energy management in hybrid electrical systems, and control of electrical generators for renewables. Besides nonlinearity, these systems are typically characterized by hard constraints on the control inputs and, sometimes, on the state variables. In this respect, control laws able to handle input saturation are crucial for formally characterizing the stability and performance properties of these systems. From a practical viewpoint, proper saturation management makes it possible to extend the transient and steady-state operating ranges of the systems, improving their reliability and availability. The main topic of this thesis concerns saturated control methodologies, based on modern approaches, applied to power electronics and electromechanical systems. The objective pursued is to provide formal results under any saturation scenario, overcoming the drawbacks of the classic solutions commonly applied to cope with the saturation of power converters, and enhancing performance. For this purpose two main approaches are exploited and extended to deal with power electronic applications: modern anti-windup strategies, which provide formal results and systematic design rules for the anti-windup compensator devoted to handling control saturation; and "one step" saturated feedback design techniques, relying on a suitable characterization of the saturation nonlinearity and on less conservative extensions of standard absolute stability theory results. The first part of the thesis presents and develops a novel general anti-windup scheme, which is then specifically applied to a class of power converters adopted for power quality enhancement in industrial plants. In the second part, a polytopic differential inclusion representation of the saturation nonlinearity is presented and extended to deal with a class of multiple-input power converters used to manage hybrid electrical energy sources. The third part concerns the design of adaptive observers for the robust estimation of the parameters required for high performance control of power systems.
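
As a minimal illustration of the saturation problem and of its classical remedy, which the schemes developed in the thesis generalize with formal guarantees (gains, limits and sample time below are arbitrary), consider a PI controller with back-calculation anti-windup:

    # PI controller with input saturation and back-calculation anti-windup:
    # when the output saturates, the integrator is bled off instead of
    # winding up. Gains, limits and sample time are illustrative.
    def make_pi(kp, ki, kaw, u_min, u_max, dt):
        state = {"i": 0.0}
        def step(error):
            u_unsat = kp * error + state["i"]
            u = min(max(u_unsat, u_min), u_max)              # saturation
            state["i"] += dt * (ki * error + kaw * (u - u_unsat))
            return u
        return step

    pi = make_pi(kp=2.0, ki=5.0, kaw=10.0, u_min=-1.0, u_max=1.0, dt=1e-3)
    u = pi(0.8)   # large error: output saturates, integrator is restrained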

Relevance:

30.00%

Publisher:

Abstract:

The promising developments in routine nanofabrication, and the increasing knowledge of the working principles of new classes of highly sensitive, label-free and potentially cost-effective bio-nanosensors for the detection of molecules in liquid environments, have rapidly increased the possibility of developing portable sensor devices. Thanks to the intrinsic ability of these biosensors to detect, monitor and study events at the nanoscale, such devices could have a great impact on many application fields, such as health care, the environment and food production. Moreover, there is a growing demand for low-cost, compact readout structures able to perform accurate preliminary tests on biosensors and/or to perform routine tests under controlled experimental conditions, without requiring skilled personnel and bulky laboratory instruments. This thesis focuses on analysing, designing and testing novel implementations of bio-nanosensors in layered hybrid systems, in which microfluidic devices and microelectronic systems are fused in compact printed circuit board (PCB) technology. In particular, the manuscript presents hybrid systems in two validating cases, using nanopore and nanowire technology, demonstrating new features not covered by state-of-the-art technologies and based on the use of two custom integrated circuits (ICs). As far as the nanopore interface system is concerned, an automatic setup has been developed for the concurrent formation of bilayer lipid membranes, combined with a custom parallel readout electronic system, creating a complete portable platform for nanopore or ion channel studies. As for the nanowire readout hybrid interface, two systems have been developed that perform parallel, real-time, complex impedance measurements based on the lock-in technique, as well as impedance spectroscopy measurements. This makes it possible to experimentally investigate the enrichment of the information provided by the bio-nanosensors by concurrently acquiring impedance magnitude and phase, thus probing the capacitive contributions of bioanalytical interactions on the biosensor surface.
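
A minimal sketch of the digital lock-in principle underlying such impedance readout (sampling rate, excitation tone and signal below are synthetic, for illustration):

    import numpy as np

    # Recover magnitude and phase of a noisy response at the excitation
    # frequency by correlating with quadrature references.
    fs, f0, T = 100e3, 1e3, 0.1                # sample rate, tone, duration
    t = np.arange(0, T, 1 / fs)
    resp = 0.5 * np.sin(2 * np.pi * f0 * t + 0.3) + 0.05 * np.random.randn(t.size)

    x = 2 * np.mean(resp * np.sin(2 * np.pi * f0 * t))   # in-phase component
    y = 2 * np.mean(resp * np.cos(2 * np.pi * f0 * t))   # quadrature component
    print(f"magnitude = {np.hypot(x, y):.3f}, phase = {np.arctan2(y, x):.3f} rad")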

Relevance:

30.00%

Publisher:

Abstract:

Agri-food supply chains extend beyond national boundaries, partially facilitated by a policy environment that encourages more liberal international trade. Rising concentration within the downstream sector has driven a shift towards "buyer-driven" global value chains (GVCs) extending internationally, with global sourcing and the emergence of multinational key economic players that compete with increased emphasis on product quality attributes. Agri-food systems are thus increasingly governed by a range of inter-related public and private standards, both of which are becoming de facto mandatory, especially in supply chains for high-value and quality-differentiated agri-food products; they tend to strongly affect upstream agricultural practices and firms' internal organization and strategic behaviour, and to shape the organization of the food chain. Notably, increasing attention has been given to the impact of SPS measures on agri-food trade, and particularly on developing countries' export performance. Food and agricultural trade is the vital link in the mutual dependency between the global trade system and developing countries, which derive a substantial portion of their income from food and agricultural trade. In Morocco, fruits and vegetables (especially fresh) are the primary agricultural exports. Because of its labor intensity, this sector (especially citrus and tomato) is particularly important in terms of income and employment generation, especially for the female laborers hired in the farms and packing houses. Hence, the emergence of agricultural and agri-food product safety issues, and the subsequent tightening of market requirements, have challenged these mutual gains due to the lack of technical and financial capacities of most developing countries.

Relevance:

30.00%

Publisher:

Abstract:

With the increasing importance that nanotechnologies have in everyday life, it is not difficult to realize that a single molecule, if properly designed, can also be a device able to perform useful functions: such a chemical species is called a chemosensor, that is, a molecule of abiotic origin that signals the presence of matter or energy. Signal transduction is the mechanism by which an interaction of a sensor with an analyte yields a measurable form of energy. When designing a chemosensor, we need to take into account a "communication requirement" between its three components: the receptor unit, responsible for selective analyte binding; the spacer, which controls the geometry of the system and modulates the electronic interaction between the receptor and the signalling unit; and the signalling unit itself, whose physico-chemical properties change upon complexation. A luminescent chemosensor communicates a variation of the physico-chemical properties of the receptor unit with a luminescence output signal. This thesis work consists in the characterization of new molecular and nanoparticle-based systems which can be used as sensitive materials for the construction of new optical transduction devices able to provide information about the concentration of analytes in solution. In particular, two directions were pursued. The first is to continue the development of new chemosensors, which is the first step for the construction of reliable and efficient devices; in particular, the work focuses on chemosensors for metal ions for biomedical and environmental applications. The second is to study more efficient and complex organized systems, such as derivatized silica nanoparticles. These systems can potentially have higher sensitivity than molecular systems, and present many advantages, such as the possibility of ratiometric measurement, larger Stokes shifts and a higher signal-to-noise ratio.
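
A minimal numeric illustration of the ratiometric advantage (the band responses below are invented for illustration): the ratio of an analyte-sensitive emission band to an insensitive one cancels drifts in excitation intensity:

    # Ratiometric readout: the band ratio is insensitive to lamp drift.
    def emissions(conc, lamp=1.0):
        band_a = lamp * (1.0 + 4.0 * conc)   # analyte-sensitive band
        band_b = lamp * 2.0                  # analyte-insensitive band
        return band_a, band_b

    for lamp in (0.8, 1.0, 1.2):             # +/-20% excitation drift
        a, b = emissions(conc=0.5, lamp=lamp)
        print(f"lamp={lamp:.1f}  raw={a:.2f}  ratio={a / b:.2f}")  # ratio stable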

Relevance:

30.00%

Publisher:

Abstract:

Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors: memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely curtail the potential computation capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and to validate design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology, however, severely limit the performance and reliability of such designs. Reliability becomes a major obstacle when operating in NTC: memory operation in particular becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and, at the same time, to improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation: the greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture: by means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
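
As a rough numeric illustration of the near-threshold argument (capacitance and voltages below are illustrative; the tenfold figure quoted above also includes effects beyond the simple dynamic-energy scaling shown here):

    # Dynamic energy per operation scales roughly as C * V^2, so scaling
    # the supply from 1.1 V down to ~0.4 V yields a large per-op saving.
    C = 1.0e-9                                    # switched capacitance [F]

    def energy_per_op(vdd):
        return C * vdd ** 2                       # dynamic energy [J]

    e_nom, e_ntc = energy_per_op(1.1), energy_per_op(0.4)
    print(f"super-threshold: {e_nom:.2e} J, near-threshold: {e_ntc:.2e} J")
    print(f"improvement: {e_nom / e_ntc:.1f}x")   # ~7.6x from V^2 alone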

Relevance:

30.00%

Publisher:

Abstract:

The main objective of the research is to reconstruct the state of the art in the field of eHealth and of the Electronic Health Record, with particular attention to the issues of personal data protection and interoperability. To this end, binding and non-binding European Union documents were examined, as well as selected European and national projects (such as "Smart Open Services for European Patients" (EU); "Elektronische Gesundheitsakte" (Austria); "MedCom" (Denmark); and, for Italy, "Infrastruttura tecnologica del Fascicolo Sanitario Elettronico", "OpenInFSE: Realizzazione di un'infrastruttura operativa a supporto dell'interoperabilità delle soluzioni territoriali di fascicolo sanitario elettronico nel contesto del sistema pubblico di connettività", "Evoluzione e interoperabilità tecnologica del Fascicolo Sanitario Elettronico", and "IPSE - Sperimentazione di un sistema per l'interoperabilità europea e nazionale delle soluzioni di Fascicolo Sanitario Elettronico: componenti Patient Summary e ePrescription"). The legal and technical analyses show the urgent need to define models that encourage the use of health data and implement effective strategies for the secondary use of digital health data, such as Open Data and Linked Open Data. Legal and technological harmonization is seen as a strategic aspect for reducing both the conflicts in personal data protection existing among the Member States and the lack of interoperability between the European information systems on Electronic Health Records. To this end, three guidelines have been identified: (1) regulatory harmonization, (2) harmonization of rules, (3) harmonization of information system design. The principles of Privacy by Design ("proactive" and "win-win"), as well as Semantic Web standards, are considered key to achieving this change.