22 results for Systems evaluation


Relevance: 40.00%

Abstract:

The growing international concern about human exposure to magnetic fields generated by electric power lines has unavoidably led to the imposition of legal limits. Respecting these limits implies being able to calculate the generated magnetic field easily and accurately, also in complex configurations; the twisting of phase conductors is such a case. The consolidated exact and approximated theory for a single-circuit twisted three-phase power cable line is reported, along with the proposal of an innovative simplified formula obtained by means of a heuristic procedure. This formula, although dramatically simpler, is proven to be a good approximation of the analytical formula and at the same time much more accurate than the approximated formula found in the literature. The double-circuit twisted three-phase power cable line has been studied following different approaches of increasing complexity and accuracy; in this framework, the effectiveness of the above-mentioned innovative formula is also examined. The experimental verification of the correctness of the twisted double-circuit theoretical analysis has permitted its extension to multiple-circuit twisted three-phase power cable lines. In addition, appropriate 2D and, in particular, 3D numerical codes have been created to simulate real existing overhead power lines and calculate the magnetic field in their vicinity. Finally, an innovative "smart" measurement and evaluation system for the magnetic field is proposed, described and validated. It deals with the experimentally based evaluation of the total magnetic field B generated by multiple sources in complex three-dimensional arrangements, carried out by measuring the three Cartesian field components and correlating them with the source currents via multilinear regression techniques. The ultimate goal is to verify that the magnetic induction intensity is within the prescribed limits.
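A minimal sketch of the multilinear-regression step described above, assuming (hypothetically) that simultaneous readings of the source currents and of one Cartesian field component are available as arrays; all names and values are placeholders, not data from the thesis:

```python
import numpy as np

# Hypothetical data set: 200 simultaneous readings of 3 source currents
# and of the B_x component measured at the probe location.
rng = np.random.default_rng(1)
currents = rng.random((200, 3))                # I_1..I_3, ampere
bx = currents @ np.array([0.8, -0.2, 0.1])     # synthetic B_x, microtesla
bx += 0.01 * rng.standard_normal(200)          # measurement noise

# Least-squares fit of B_x = sum_j a_j I_j + a_0 (offset for background field)
A = np.column_stack([currents, np.ones(len(currents))])
coeffs, *_ = np.linalg.lstsq(A, bx, rcond=None)
print("fitted coefficients:", coeffs)

# Repeating the fit for B_y and B_z yields the full coefficient set; the
# total field for any current configuration is then |B| = sqrt(Bx^2+By^2+Bz^2),
# which can be checked against the prescribed exposure limits.
```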

Relevance: 40.00%

Abstract:

Traditional cell culture models have limitations in extrapolating the functional mechanisms that underlie microbial virulence strategies; indeed, during infection, pathogens adapt to different tissue-specific environmental factors. The development of in vitro models resembling human tissue physiology might allow the replacement of inaccurate or aberrant animal models. Three-dimensional (3D) cell culture systems are more reliable and more predictive models that can be used for the meaningful dissection of host–pathogen interactions. The lung and gut mucosae often represent the first site of exposure to pathogens and provide a physical barrier against their entry. Within this context, the tracheobronchial and small-intestine tracts were modelled by a tissue-engineering approach. The main work focused on the development and extensive characterization of a human organotypic airway model, based on a mechanically supported co-culture of normal primary cells. The regained morphological features, the retrieved environmental factors and the presence of specific epithelial subsets resembled the native tissue organization. In addition, the respiratory model enabled the modular insertion of further cell types of interest, such as innate immune cells or multipotent stromal cells, showing a functional ability to differentially release pertinent cytokines. Furthermore, this model responded by imitating known events occurring during infection by non-typeable H. influenzae. Epithelial organoid models mimicking the small-intestine tract were used for an exploratory analysis of tissue toxicity. Further experiments led to the detection of a cell population targeted by C. difficile Toxin A and suggested a role of the bacterial virulence machinery in the impairment of epithelial homeostasis. The described cell-centered strategy can afford critical insights into the evaluation of host defence and pathogenic mechanisms, and the application of these two models may provide an informative step toward a more coherent definition of the relevant molecular interactions happening during infection.

Relevance: 30.00%

Abstract:

The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality.

Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities, and influenced by customers, at the various points of a network has emerged as a concern only in recent years, in particular as a consequence of energy market liberalization. Traditionally, the quality of the delivered energy has been associated mostly with its continuity, so reliability was the main characteristic to be ensured for power systems. Nowadays, the number and duration of interruptions are the "quality indicators" commonly perceived by most customers; for this reason, a short section is dedicated also to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability too.

Given the vast scenario of power quality degrading phenomena that can occur in distribution networks, the study has focused on electromagnetic transients affecting line voltages. The outcome of this study has been the design and realization of a distributed measurement system which continuously monitors the phase signals in different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before the intervention of the protection equipment. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to minimize both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover the abnormal condition and/or the damage.

The part of the thesis presenting the results of this study and activity is structured as follows. Chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation; it then presents the state of the art of methods to detect and locate faults in distribution networks, and finally focuses on the particular technique adopted for this purpose in the thesis and on the methods developed on that basis. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, together with the results obtained case by case. In this way the performance of the location procedure is tested first under ideal and then under realistic operating conditions. Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test; this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device designed and realized during the PhD activity to replace the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. This study has been carried out with the aim of providing an alternative to the transducer in use that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
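The located-fault computation can be illustrated with a standard two-terminal travel-time scheme (a common approach to transient-based fault location; the exact method of the thesis may differ, and all values below are placeholders):

```python
# Two-ended fault location from transient arrival times registered by two
# time-synchronized remote stations A and B at the line ends.
V = 2.9e8       # assumed propagation speed of the transient wave, m/s
L = 12_000.0    # monitored line length between stations A and B, m

def fault_distance_from_a(t_a: float, t_b: float) -> float:
    """Distance of the fault from station A: the transient front reaches
    A after d/V seconds and B after (L - d)/V seconds, hence
    d = (L + V * (t_a - t_b)) / 2."""
    return 0.5 * (L + V * (t_a - t_b))

# Example: the front is seen 20 microseconds earlier at A than at B,
# so the fault lies 3100 m from A.
print(fault_distance_from_a(1.000010, 1.000030))
```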

Relevance: 30.00%

Abstract:

Today, third-generation networks are consolidated realities, and user expectations for new applications and services are becoming higher and higher. Therefore, new systems and technologies are necessary to meet market needs and user requirements; this has driven the development of fourth-generation networks. "Wireless networks for the fourth generation" is the expression used to describe the next step in wireless communications. There is no formal definition of what these fourth-generation networks are; however, we can say that the next-generation networks will be based on the coexistence of heterogeneous networks, on integration with the existing radio access networks (e.g. GPRS, UMTS, WiFi, ...) and, in particular, on new emerging architectures that are gaining more and more relevance, such as Wireless Ad Hoc and Sensor Networks (WASN). Thanks to their characteristics, fourth-generation wireless systems will be able to offer custom-made solutions and applications personalized according to user requirements; they will offer all types of services at an affordable cost, and solutions characterized by flexibility, scalability and reconfigurability. This PhD work has focused on WASNs: autoconfiguring networks which are not based on a fixed infrastructure but are infrastructure-less, where devices must automatically generate the network in the initial phase and maintain it through reconfiguration procedures (if node mobility, energy drain, etc. cause disconnections). The main part of the PhD activity has been an analytical study of connectivity models for wireless ad hoc and sensor networks; a smaller part of the work was experimental. Both the theoretical and the experimental activities have had a common aim: the performance evaluation of WASNs. Concerning the theoretical analysis, the objective of the connectivity studies has been the evaluation of models for interference estimation. Interference is in fact the most important cause of performance degradation in WASNs; as a consequence, it is very important to find an accurate model that allows its investigation, and the aim has been to obtain a model as realistic and general as possible, in particular for the evaluation of the interference coming from bounded interfering areas (e.g. a WiFi hot spot, a wireless-covered research laboratory, ...). On the other hand, the experimental activity has led to throughput and Packet Error Rate measurements on a real IEEE 802.15.4 Wireless Sensor Network.
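As an illustration of the kind of quantity such a model targets, here is a Monte Carlo sketch of the mean aggregate interference collected from a bounded circular interfering area; the path-loss model and all parameter values are assumed placeholders, not results from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mean interference power at the origin from nodes uniformly scattered
# over a bounded disk (e.g. a WiFi hot spot of radius R whose centre
# lies at distance D), under a simple power-law path loss.
ALPHA = 3.5          # path-loss exponent (assumed)
P_TX = 1e-3          # transmit power per interferer, W (assumed)
R, D = 20.0, 100.0   # hot-spot radius and distance of its centre, m
N_NODES, N_TRIALS = 10, 10_000

totals = []
for _ in range(N_TRIALS):
    # uniform points in the disk via polar sampling
    r = R * np.sqrt(rng.random(N_NODES))
    phi = 2 * np.pi * rng.random(N_NODES)
    x, y = D + r * np.cos(phi), r * np.sin(phi)
    dist = np.hypot(x, y)
    totals.append(np.sum(P_TX * dist ** (-ALPHA)))

print(f"mean aggregate interference: {np.mean(totals):.3e} W")
```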

Relevance: 30.00%

Abstract:

The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects are Networks on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as STMicroelectronics, Samsung and Philips, and also universities such as Bologna University, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this thesis we propose an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed simulation-based analysis of the Spidergon NoC, an STMicroelectronics solution for SoC interconnects; the Spidergon NoC differs from many classical solutions inherited from the parallel computing world, and we analyze in detail its topology and routing algorithms (see the sketch below), further proposing Equalized, a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance;
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an STMicroelectronics proprietary transport-level protocol that the author of this thesis helped to develop;
• a comprehensive simulation-based comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip; our solution is based on relay-station repeaters and allows the power and area demands of NoC interconnects to be reduced while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption: we propose to replace complex and slow virtual-channel-based routers with multiple, flexible and small Multi Plane ones, which reduces the area and power dissipation of any NoC while also increasing its performance, especially when resources are reduced.
This thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department of Columbia University in the City of New York.
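For reference, the Spidergon topology connects each of the N nodes (N even) to its two ring neighbours and to the diametrically opposite node; a minimal sketch of the neighbour computation (the routing algorithms analysed in the thesis build on these three links):

```python
def spidergon_neighbors(i: int, n: int) -> tuple[int, int, int]:
    """Neighbors of node i in an n-node Spidergon (n even): clockwise
    and counter-clockwise ring links plus the 'across' link."""
    assert n % 2 == 0, "Spidergon is defined for an even number of nodes"
    return (i + 1) % n, (i - 1) % n, (i + n // 2) % n

# e.g. in a 16-node Spidergon, node 3 talks to nodes 4, 2 and 11
print(spidergon_neighbors(3, 16))
```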

Relevance: 30.00%

Abstract:

The present PhD thesis summarizes the three-year study on the neutronic investigation of a new-concept nuclear reactor, aimed at the optimization and sustainable management of nuclear fuel in a possible European scenario. A new-generation nuclear reactor for the nuclear renaissance is indeed desired by today's industrialized world, both to address the energy question arising from the continuously growing energy demand together with the corresponding reduction of oil availability, and to address the environmental question with a sustainable energy source free from long-lived radioisotopes and therefore from geological repositories. Among the Generation IV candidate typologies, the Lead Fast Reactor concept has been pursued, being the one top-rated in sustainability. The European Lead-cooled SYstem (ELSY) has been investigated first. The neutronic analysis of the ELSY core has been performed via deterministic analysis by means of the ERANOS code, in order to retrieve a stable configuration for the overall design of the reactor. Further analyses have been carried out by means of the Monte Carlo general-purpose transport code MCNP, in order to check the former analysis and to define an exact model of the system. An innovative system of absorbers has been conceptualized and designed both for the reactivity compensation and regulation of the core over the cycle swing, and for safety, in order to guarantee the cold shutdown of the system in case of accident. Aiming at the sustainability of nuclear energy, the steady-state nuclear equilibrium has been investigated and generalized into the definition of the "extended" equilibrium state. On this basis, the Adiabatic Reactor Theory has been developed, together with a New Paradigm for Nuclear Power: in order to design a reactor that does not exchange anything valuable with the environment (hence the term "adiabatic"), in the sense of both plutonium and minor actinides, it is indeed required to invert the logical design scheme of nuclear cores, starting from the definition of the equilibrium composition of the fuel and subordinating the whole core design to it. The New Paradigm has then been applied to the core design of an Adiabatic Lead Fast Reactor (ALFR) complying with the ELSY overall system layout. A complete core characterization has been done in order to assess criticality and power flattening; a preliminary evaluation of the main safety parameters has also been done to verify the viability of the system. Burn-up calculations have then been performed in order to investigate the operating cycle of the Adiabatic Lead Fast Reactor; the fuel performances have been extracted and inserted into a more general analysis for a European scenario. The present fleet of nuclear reactors has been modeled and its evolution simulated by means of the COSI code, in order to investigate the material fluxes to be managed in the European region. Different plausible scenarios have been identified to forecast the evolution of European nuclear energy production, including one involving the introduction of Adiabatic Lead Fast Reactors, and compared in order to better analyze the advantages introduced by the adoption of new-concept reactors.
Finally, since both ELSY and the ALFR represent new-concept systems based upon innovative solutions, the neutronic design of a demonstrator reactor has been carried out: such a system is intended to prove the viability of the technology to be implemented in the First-of-a-Kind industrial power plant, with the aim of validating the general strategy to the largest possible extent. It was therefore chosen to base the DEMO design upon a compromise between the demonstration of developed technology and the testing of emerging technology, in order to significantly serve the purpose of reducing the uncertainties about construction and licensing, both by validating the main ELSY/ALFR features and performances and by qualifying numerical codes and tools.

Relevance: 30.00%

Abstract:

Broad consensus has been reached within the Education and Cognitive Psychology research communities on the need to center the learning process on experimentation and the concrete application of knowledge, rather than on a bare transfer of notions. Several advantages arise from this educational approach, ranging from the reinforcement of student learning, to the increased opportunity for students to gain greater insight into the studied topics, up to the possibility for learners to acquire practical skills and long-lasting proficiency. This is especially true in Engineering education, where integrating conceptual knowledge and practical skills assumes strategic importance. In this scenario, learners are called to play a primary role: they are actively involved in the construction of their own knowledge instead of passively receiving it. As a result, traditional, teacher-centered learning environments should be replaced by novel learner-centered solutions. Information and Communication Technologies enable the development of innovative solutions that provide suitable answers to the need for experimentation supports in educational contexts. Virtual Laboratories, Adaptive Web-Based Educational Systems and Computer-Supported Collaborative Learning environments can significantly foster different learner-centered instructional strategies, offering the opportunity to enhance personalization, individualization and cooperation. More specifically, they allow students to explore different kinds of materials, to access and compare several information sources, to face real or realistic problems and to work on authentic and multi-faceted case studies. In addition, they encourage cooperation among peers and provide support through coached and scaffolded activities aimed at fostering reflection and meta-cognitive reasoning. This dissertation guides readers within this research field, presenting both the theoretical and the applicative results of research aimed at designing an open, flexible, learner-centered virtual lab for supporting students in learning Information Security.

Relevance: 30.00%

Abstract:

The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship and practices that conserve resources in a manner that allow growth and development to be sustained for the long-term without degrading the environment are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post construction properties of traditional HMA. Lowering the production temperature reduce the fuel usage and the production of emissions therefore and that improve conditions for workers and supports the sustainable development. Even the crumb-rubber modifier (CRM), with shredded automobile tires and used in the United States since the mid 1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is not only relevant in an environmental aspect but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project is aimed to demonstrate the dual value of these Asphalt Mixes in regards to the environmental and mechanical performance and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design but it cannot be the only step. The eco-compatible approach should be extended also to the design method and material characterization because only with these phases is it possible to exploit the maximum potential properties of the used materials. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (Mix design) and mechanical (Permanent deformation and Fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to correctly use the material. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under the different traffic and environmental conditions, was the application of choice. In particular this study focus on the CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to the surface cracking and to the rutting respectively. It works in increments of time and, using the output from one increment, recursively, as input to the next increment, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation with defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as surface layer of 60 mm thickness. 
The performance of the pavement was compared to the performance of the same pavement structure where different kinds of asphalt concrete were used as surface layer. In comparison to a conventional asphalt concrete, three eco-friendly materials, two warm mix asphalt and a rubberized asphalt concrete, were analyzed. The First Two Chapters summarize the necessary steps aimed to satisfy the sustainable pavement design procedure. In Chapter I the problem of asphalt pavement eco-compatible design was introduced. The low environmental impact materials such as the Warm Mix Asphalt and the Rubberized Asphalt Concrete were described in detail. In addition the value of a rational asphalt pavement design method was discussed. Chapter II underlines the importance of a deep laboratory characterization based on appropriate materials selection and performance evaluation. In Chapter III, CalME is introduced trough a specific explanation of the different equipped design approaches and specifically explaining the I-R procedure. In Chapter IV, the experimental program is presented with a explanation of test laboratory devices adopted. The Fatigue and Rutting performances of the study mixes are shown respectively in Chapter V and VI. Through these laboratory test data the CalME I-R models parameters for Master Curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the asphalt pavement structures simulations with different surface layers were reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rutting depth in each bound layer were analyzed.
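The incremental-recursive idea can be sketched as follows; the damage law, the modulus update and all parameter values below are simplified placeholders, not the actual CalME models:

```python
# Generic incremental-recursive fatigue loop in the spirit of CalME's I-R
# procedure: each time increment uses the moduli left by the previous one.
E0 = 6000.0             # undamaged surface-layer modulus, MPa (assumed)
N_INCREMENTS = 240      # e.g. monthly increments over a 20-year life
TRAFFIC_PER_INC = 5e4   # load repetitions per increment (assumed)

def strain_from_modulus(e: float) -> float:
    """Placeholder structural response: tensile strain grows as the layer
    softens; a real run would call a layered-elastic analysis."""
    return 200e-6 * (E0 / e) ** 0.5

def allowable_reps(strain: float) -> float:
    """Placeholder fatigue law N_f = k * strain**-b (assumed k and b)."""
    return 1e-6 * strain ** -3.5

damage, E = 0.0, E0
for _ in range(N_INCREMENTS):
    eps = strain_from_modulus(E)                   # response, current state
    damage = min(damage + TRAFFIC_PER_INC / allowable_reps(eps), 1.0)
    E = E0 * (1.0 - 0.6 * damage)                  # recursive modulus update
print(f"fatigue damage at end of design life: {damage:.2f}")
```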

Relevance: 30.00%

Abstract:

To date, the hospital radiological workflow is completing a transition from analog to digital technology. Since X-ray digital detection technologies have become mature, hospitals are taking advantage of the natural turnover of devices to replace conventional screen-film devices with digital ones. The transition process is complex and involves not just the replacement of equipment but also new arrangements for image transmission, display (and reporting) and storage. This work is focused on the characterization of 2D digital detectors with attention to specific clinical applications; the system features linked to image quality are analyzed to assess the clinical performance, the conversion efficiency and the minimum dose necessary to obtain an acceptable image. The first section overviews digital detector technologies, focusing on recent and promising technological developments. The second section contains a description of the characterization methods considered in this thesis, categorized as physical, psychophysical and clinical; theory, models and procedures are described as well. The third section contains a set of characterizations performed on new equipment that represents some of the most advanced technology available to date. The fourth section deals with some procedures and schemes employed for quality assurance programs.
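The conversion efficiency mentioned above is commonly quantified by the detective quantum efficiency (DQE); a standard formulation used in the physical characterization of digital detectors (not necessarily the exact expression adopted in this thesis) is

$$\mathrm{DQE}(f)=\frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}=\frac{\mathrm{MTF}^{2}(f)}{\overline{q}\;\mathrm{NNPS}(f)},$$

where MTF(f) is the modulation transfer function, NNPS(f) the noise power spectrum normalized by the squared mean signal, and $\overline{q}$ the incident photon fluence (photons per unit area).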

Relevance: 30.00%

Abstract:

This thesis presents the outcomes of my PhD course in telecommunications engineering. The focus of my research has been on Global Navigation Satellite Systems (GNSS) and in particular on the design of aiding schemes, operating both at the position and at the physical level, and on the evaluation of their feasibility and advantages. Assistance techniques at the position level are considered to enhance receiver availability in challenging scenarios where satellite visibility is limited. Novel positioning techniques relying on peer-to-peer interaction and the exchange of information are thus introduced. More specifically, two different techniques are proposed: the Pseudorange Sharing Algorithm (PSA), based on the exchange of GNSS data, which makes it possible to obtain coarse positioning where the user has scarce satellite visibility, and the Hybrid approach, which also improves the accuracy of the positioning solution. At the physical level, aiding schemes are investigated to improve the receiver's ability to synchronize with satellite signals. An innovative code acquisition strategy for dual-band receivers, the Cross-Band Aiding (CBA) technique, is introduced to speed up initial synchronization by exploiting the exchange of time references between the two bands. In addition, vector configurations for code tracking are analyzed and their feedback generation process thoroughly investigated.
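As background for the aiding schemes above, here is a minimal single-epoch least-squares fix from pseudoranges, the standard building block that techniques such as the PSA extend; the satellite geometry and all values are synthetic placeholders:

```python
import numpy as np

def ls_fix(sat_pos, pr, iters=8):
    """Gauss-Newton estimate of (x, y, z, c*dt) from pseudoranges;
    the receiver clock bias is expressed in metres."""
    x = np.zeros(4)
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)     # geometric ranges
        H = np.hstack([(x[:3] - sat_pos) / rho[:, None],  # line-of-sight rows
                       np.ones((len(pr), 1))])            # clock-bias column
        dx, *_ = np.linalg.lstsq(H, pr - (rho + x[3]), rcond=None)
        x += dx
    return x

# Synthetic check: 4 satellites, known position and clock bias
sats = np.array([[15e6, 10e6, 21e6], [-12e6, 18e6, 19e6],
                 [20e6, -8e6, 18e6], [-5e6, -17e6, 22e6]])
truth = np.array([1e6, 2e6, 3e6, 30.0])
pr = np.linalg.norm(sats - truth[:3], axis=1) + truth[3]
print(ls_fix(sats, pr))   # recovers truth up to numerical precision
```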

Relevance: 30.00%

Abstract:

The identification of people by measuring traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis focuses on improving fingerprint recognition systems by considering three important problems: fingerprint enhancement, fingerprint orientation extraction and the automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint: if the fingerprint is very noisy, a reliable set of features cannot be detected. A new fingerprint enhancement method, both iterative and contextual, is proposed. This approach detects high-quality regions in fingerprints, selectively applies contextual filtering and iteratively expands, like wildfire, toward low-quality ones. A precise estimation of the orientation field greatly simplifies the estimation of other fingerprint features (singular points, minutiae) and improves the performance of a fingerprint recognition system. Fingerprint orientation extraction is improved along two directions. First, after the introduction of a new taxonomy of fingerprint orientation extraction methods, several variants of baseline methods are implemented and, pointing out the role of pre- and post-processing, we show how to improve the extraction. Second, the introduction of a new hybrid orientation extraction method, which follows an adaptive scheme, significantly improves the orientation extraction in noisy fingerprints. Scientific papers typically propose recognition systems that integrate many modules, and therefore an automatic evaluation of fingerprint algorithms is needed to isolate the contributions that determine an actual progress in the state of the art. The lack of a publicly available framework to compare fingerprint orientation extraction algorithms motivates the introduction of a new benchmark area called FOE (including fingerprints and manually marked orientation ground truth), along with fingerprint matching benchmarks, in the FVC-onGoing framework. The success of this framework is documented by relevant statistics: more than 1450 algorithms submitted and two international competitions.
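As a concrete example of the baseline family mentioned above, here is the classic gradient-based orientation extraction with doubled-angle averaging (a minimal sketch, not the thesis implementation; block size is an assumed parameter):

```python
import numpy as np
from scipy import ndimage

def orientation_field(img: np.ndarray, block: int = 16) -> np.ndarray:
    """Gradient-based ridge orientation per pixel, in radians modulo pi.
    Doubling the gradient angles before averaging removes the
    0/180-degree ambiguity of orientations."""
    gy, gx = np.gradient(img.astype(float))
    gxx = ndimage.uniform_filter(gx * gx, block)   # local gradient moments
    gyy = ndimage.uniform_filter(gy * gy, block)
    gxy = ndimage.uniform_filter(gx * gy, block)
    # mean gradient direction (doubled angle), rotated 90 degrees to get
    # the ridge orientation
    return 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2
```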

Relevance: 30.00%

Abstract:

It has been proved that naphthalene diimide (NDI) derivatives display anticancer properties as intercalators and G-quadruplex-binding ligands, leading to DNA damage, senescence and down-regulation of oncogene expression. This thesis deals with the design and synthesis of disubstituted and tetrasubstituted NDI derivatives endowed with anticancer activity, interacting with DNA together with other targets implicated in cancer development. The disubstituted NDI compounds have been designed to provide potential multitarget-directed ligands (MTDLs), i.e. molecules able to interact simultaneously with several of the different targets involved in this pathology. The most active compound displayed antiproliferative activity in the submicromolar range, especially against colon and prostate cancer cell lines, together with the ability to bind duplex and quadruplex DNA, to inhibit Taq polymerase and telomerase, to trigger caspase activation by a possible oxidative mechanism, to downregulate the ERK 2 protein and to inhibit ERK phosphorylation, without acting directly on microtubules or tubulin. The tetrasubstituted NDI compounds have been designed as G-quadruplex-binding ligands endowed with anticancer activity. In order to improve the cellular uptake of the lead compound, the N-methylpiperazine moiety has been replaced with different aromatic systems and methoxypropyl groups. The most interesting compound was 1d, which was able to interact with G-quadruplexes both in the telomeric and in the HSP90 promoter region; it has been co-crystallized with the human telomeric G-quadruplex, to directly verify its ability to bind this kind of structure and to investigate its binding mode. All the morpholino-substituted compounds show antiproliferative activity at submicromolar values, mainly in pancreatic and lung cancer cell lines, and they show an improved biological profile in comparison with that of the lead compound. In conclusion, both of these studies may represent a promising starting point for the development of new molecules useful for the treatment of cancer, underlining the versatility of the NDI scaffold.

Relevance: 30.00%

Abstract:

The continuous advancement and enhancement of wireless systems are enabling new compelling scenarios where mobile services can adapt to the current execution context, represented by the computational resources available at the local device, the current physical location, the people in physical proximity, and so forth. Such services, called context-aware, require the timely delivery of all relevant information describing the current context, and this introduces several unsolved complexities, spanning from low-level context data transmission up to context data storage and replication in the mobile system. In addition, to ensure correct and scalable context provisioning, it is crucial to integrate and interoperate with different wireless technologies (WiFi, Bluetooth, etc.) and modes (infrastructure-based and ad hoc), and to use decentralized solutions to store and replicate context data on mobile devices. These challenges call for novel middleware solutions, here called Context Data Distribution Infrastructures (CDDIs), capable of delivering relevant context data to mobile devices while hiding all the issues introduced by data distribution in heterogeneous and large-scale mobile settings. This dissertation thoroughly analyzes CDDIs for mobile systems, with the main goal of achieving a holistic approach to the design of this type of middleware. We discuss the main functions needed by context data distribution in large mobile systems, and we advocate the precise definition and clean enforcement of quality-based contracts between context consumers and the CDDI to reconfigure the main middleware components at runtime (a minimal sketch of this idea follows below). We present the design and implementation of our proposals, both in simulation-based and in real-world scenarios, along with an extensive evaluation that confirms the technical soundness of the proposed CDDI solutions. Finally, we consider three highly heterogeneous scenarios, namely disaster areas, smart campuses and smart cities, to better highlight the wide technical validity of our analysis and solutions under different network deployments and quality constraints.
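A minimal, hypothetical rendering of such a quality-based contract; the names and fields are illustrative, not the dissertation's actual API:

```python
from dataclasses import dataclass

@dataclass
class QualityContract:
    """What a context consumer demands from the CDDI (illustrative)."""
    max_staleness_s: float       # how old a delivered context datum may be
    min_coverage: float          # fraction of relevant sources represented
    max_delivery_delay_s: float  # end-to-end distribution delay bound

def needs_reconfiguration(c: QualityContract, staleness: float,
                          coverage: float, delay: float) -> bool:
    """True when the delivered quality violates the contract, signalling
    the middleware to reconfigure its distribution components at runtime."""
    return (staleness > c.max_staleness_s
            or coverage < c.min_coverage
            or delay > c.max_delivery_delay_s)
```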

Relevance: 30.00%

Abstract:

Photovoltaic (PV) solar panels generally produce electricity in the 6% to 16% efficiency range, the rest being dissipated in thermal losses. To recover this amount, hybrid photovoltaic thermal systems (PVT) have been devised: devices that simultaneously convert solar energy into electricity and heat. It is thus interesting to study the PVT system globally, from different points of view, in order to evaluate the advantages and disadvantages of this technology and its possible uses. In particular, in Chapter II the numerical optimization of the PVT absorber by a genetic algorithm is carried out, analyzing different internal channel profiles in order to find the right compromise between performance and technical and economic feasibility. In Chapter III, thanks to a mobile structure built in the university lab, the electrical and thermal output power of PVT panels is compared experimentally with that of separate photovoltaic and solar thermal production. By collecting a large amount of experimental data under different seasonal conditions (ambient temperature, irradiation, wind, ...), the aim of this mobile structure has been to evaluate the average increase or decrease in both thermal and electrical efficiency obtained over the year with respect to the separate productions. In Chapter IV, new equation-based models of PVT and solar thermal panels in steady-state conditions are developed with the software Dymola, which uses the Modelica language. This permits, in a simpler way than previous system modelling software, modelling and evaluating different concepts of the PVT panel structure before prototyping and measuring it. Chapter V concerns the definition of the PVT boundary conditions within an HVAC system; this was done through yearly simulations with the software Polysun, in order to finally assess the best solar-assisted integrated structure by means of the F_save (solar energy savings) factor. Finally, Chapter VI presents the conclusions and perspectives of this PhD work.
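A back-of-the-envelope sketch of the comparison carried out with the mobile structure; all efficiency values are assumed placeholders, not measured results from the thesis:

```python
# Hybrid PVT output versus dedicated panels from the same aperture area.
G = 1000.0    # irradiance, W/m^2 (assumed)
AREA = 1.6    # panel aperture area, m^2 (assumed)

ETA_EL_PVT, ETA_TH_PVT = 0.13, 0.45   # hybrid panel efficiencies (assumed)
ETA_EL_PV, ETA_TH_ST = 0.15, 0.60     # dedicated PV / thermal (assumed)

print(f"PVT    : {ETA_EL_PVT * G * AREA:5.0f} W electric "
      f"+ {ETA_TH_PVT * G * AREA:5.0f} W thermal")
print(f"PV     : {ETA_EL_PV * G * AREA:5.0f} W electric")
print(f"Thermal: {ETA_TH_ST * G * AREA:5.0f} W thermal")
# The same square metre cannot host both dedicated panels at once; the
# experimental campaign quantifies this trade-off across the seasons.
```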

Relevance: 30.00%

Abstract:

During the last decade peach and nectarine fruit have lost considerable market share, due to increased consumer dissatisfaction with quality at retail markets. This is mainly due to the harvesting of too-immature fruit and to high ripening heterogeneity. The main problem is that the traditionally used maturity indexes are not able to objectively detect the fruit maturity stage, nor the variability present in the field, leading to difficult post-harvest management of the product and to high fruit losses. To assess fruit ripening more precisely, other techniques and devices can be used. Recently, a new non-destructive maturity index based on vis-NIR technology, the Index of Absorbance Difference (IAD), which correlates with fruit degreening and ethylene production, was introduced, and the IAD was used to study peach and nectarine fruit ripening from "field to fork". In order to choose the best techniques to improve fruit quality, a detailed description of the tree structure, of the fruit distribution and of the ripening evolution on the tree was undertaken. More in detail, an architectural model (PlantToon®) was used to describe the tree structure, and the IAD was applied to characterize the maturity stage of each fruit. Their combined use provided an objective and precise evaluation of fruit ripening variability in relation to different training systems, crop load, fruit exposure and internal temperature. Based on simple field assessments of fruit maturity (as IAD) and growth, a model for the early prediction of harvest date and yield was developed and validated. The relationship between the non-destructive maturity index IAD and fruit shelf-life was also confirmed. Finally, the obtained results were validated by consumer tests: fruit sorted into different maturity classes obtained different consumer acceptance. The improved knowledge led to an innovative management of peach and nectarine fruit, from "field to market".
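For reference, the IAD implemented in vis-NIR DA-meter-type instruments is commonly reported as the difference between the absorbance near the chlorophyll-a peak and a reference wavelength just outside it (the exact wavelengths depend on the instrument; those below are the commonly cited values):

$$I_{AD} = A_{670} - A_{720},$$

where $A_{\lambda}$ is the absorbance at wavelength $\lambda$ (in nm). As chlorophyll degrades during ripening, $A_{670}$ falls and the index decreases, which is why the IAD tracks fruit degreening.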