6 results for Best-case scenario
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Constructing ontology networks typically occurs at design time at the hands of knowledge engineers who assemble their components statically. There are, however, use cases where ontology networks need to be assembled upon request and processed at runtime, without altering the stored ontologies and without tampering with one another. These are what we call "virtual [ontology] networks", and keeping track of how an ontology changes in each virtual network is called "multiplexing". Issues may arise from the connectivity of ontology networks. In many cases, simple flat import schemes will not work, because many ontology managers can cause property assertions to be erroneously interpreted as annotations and ignored by reasoners. Also, multiple virtual networks should optimize their cumulative memory footprint, and where they cannot, this should occur for very limited periods of time. We claim that these problems should be handled by the software that serves these ontology networks, rather than by ontology engineering methodologies. We propose a method that spreads multiple virtual networks across a 3-tier structure and can reduce the number of erroneously interpreted axioms, under certain raw statement distributions across the ontologies. We assumed OWL as the core language handled by semantic applications in the framework at hand, due to the greater availability of reasoners and rule engines. We also verified that, in common OWL ontology management software, OWL axiom interpretation occurs in the worst-case scenario of a pre-order visit. To measure the effectiveness and space-efficiency of our solution, a Java and RESTful implementation was produced within an Apache project. We verified that a 3-tier structure can accommodate reasonably complex ontology networks better, in terms of the expressivity of OWL axiom interpretation, than flat-tree import schemes can. We measured both the memory overhead of the additional components we put on top of traditional ontology networks, and the framework's caching capabilities.
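A minimal probe of the axiom-interpretation issue described above can be sketched with the standard OWL API (the network IRI below is hypothetical; this is an illustration, not the framework produced in the thesis): after resolving an import closure, count how many statements were materialized as object property assertions versus annotation assertions. A flat import scheme that parses a property's triples before encountering its declaration typically inflates the second count.

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.AxiomType;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyManager;

public class AxiomInterpretationProbe {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        // Load the root of an ontology network; imports are resolved transitively.
        OWLOntology root = manager.loadOntologyFromOntologyDocument(
                IRI.create("http://example.org/network/root.owl")); // hypothetical IRI
        int asObjectAssertions = 0;
        int asAnnotations = 0;
        for (OWLOntology o : root.getImportsClosure()) {
            // Statements on undeclared properties often end up in the second bucket.
            asObjectAssertions += o.getAxiomCount(AxiomType.OBJECT_PROPERTY_ASSERTION);
            asAnnotations += o.getAxiomCount(AxiomType.ANNOTATION_ASSERTION);
        }
        System.out.println("object property assertions: " + asObjectAssertions);
        System.out.println("annotation assertions:      " + asAnnotations);
    }
}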
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services that have the potential of improving people's quality of life in a variety of domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality-of-service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and we present Quasit, its prototype implementation, offering a scalable and extensible platform that can be used by researchers to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
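The kind of application-level, per-flow quality declaration argued for above can be illustrated with a small sketch; the class and method names below are hypothetical and do not reflect Quasit's actual API, they only show how a flow could carry its own delivery, latency, and loss-tolerance requirements down to the middleware.

// Hypothetical illustration of a per-flow QoS declaration handed to a stream
// processing middleware; names are invented for this sketch.
public final class FlowQosPolicy {
    public enum DeliveryGuarantee { BEST_EFFORT, AT_LEAST_ONCE, EXACTLY_ONCE }

    private final DeliveryGuarantee delivery;   // per-flow delivery semantics
    private final long maxLatencyMillis;        // end-to-end latency bound for the flow
    private final double minDeliveredFraction;  // fraction of tuples that must survive failures

    public FlowQosPolicy(DeliveryGuarantee delivery, long maxLatencyMillis, double minDeliveredFraction) {
        this.delivery = delivery;
        this.maxLatencyMillis = maxLatencyMillis;
        this.minDeliveredFraction = minDeliveredFraction;
    }

    // Weaker guarantee in the spirit of partial fault tolerance (e.g. LAAR-style
    // dynamic replication): bounded loss is tolerated in exchange for lower cost.
    public static FlowQosPolicy partiallyFaultTolerant(long maxLatencyMillis, double minDeliveredFraction) {
        return new FlowQosPolicy(DeliveryGuarantee.BEST_EFFORT, maxLatencyMillis, minDeliveredFraction);
    }

    public DeliveryGuarantee delivery() { return delivery; }
    public long maxLatencyMillis() { return maxLatencyMillis; }
    public double minDeliveredFraction() { return minDeliveredFraction; }
}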
Abstract:
Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular, I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was thought to be optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was thought to be optimized to compare the properties of low-luminosity sources to those of higher-luminosity ones and, thus, was also used to test emission mechanism models; finally, the XMM-Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample of objects with which to define the average properties of Seyfert galaxies.
Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (between ~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies and in particular the photon index (Γ).
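For reference, the photon index Γ quoted above is the slope of the power-law photon spectrum conventionally fitted to the hard X-ray continuum of Seyfert galaxies:

\[
N(E) = A\,E^{-\Gamma} \quad \mathrm{photons\;cm^{-2}\,s^{-1}\,keV^{-1}},
\]

so that a larger Γ corresponds to a softer (steeper) spectrum.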
Abstract:
The present PhD thesis summarizes a three-year study on the neutronic investigation of a new-concept nuclear reactor aimed at the optimization and sustainable management of nuclear fuel in a possible European scenario. A new-generation nuclear reactor for the nuclear renaissance is indeed desired by today's industrialized world, both to address the energy question arising from continuously growing demand and shrinking oil availability, and to address the environmental question of a sustainable energy source free from long-lived radioisotopes and, therefore, from geological repositories. Among the Generation IV candidate typologies, the Lead Fast Reactor concept has been pursued, being the one rated highest in sustainability. The European Lead-cooled SYstem (ELSY) was investigated first. The neutronic analysis of the ELSY core has been performed via deterministic analysis by means of the ERANOS code, in order to retrieve a stable configuration for the overall design of the reactor. Further analyses have been carried out by means of the Monte Carlo general-purpose transport code MCNP, in order to cross-check the former and to define an exact model of the system. An innovative system of absorbers has been conceptualized and designed both for the reactivity compensation and regulation of the core over the cycle swing, and for safety, in order to guarantee the cold shutdown of the system in case of accident. Aiming at the sustainability of nuclear energy, the steady-state nuclear equilibrium has been investigated and generalized into the definition of the "extended" equilibrium state. On this basis, the Adiabatic Reactor Theory has been developed, together with a New Paradigm for Nuclear Power: in order to design a reactor that does not exchange anything valuable with the environment (hence the term "adiabatic"), in the sense of both plutonium and minor actinides, the logical design scheme of nuclear cores must indeed be reversed, starting from the definition of the equilibrium composition of the fuel and subordinating the whole core design to it. The New Paradigm has then been applied to the core design of an Adiabatic Lead Fast Reactor (ALFR) complying with the ELSY overall system layout. A complete core characterization has been done in order to assess criticality and power flattening; a preliminary evaluation of the main safety parameters has also been done to verify the viability of the system. Burn-up calculations have then been performed in order to investigate the operating cycle of the Adiabatic Lead Fast Reactor; the fuel performances have therefore been extracted and inserted into a more general analysis for a European scenario. The present nuclear reactor fleet has been modeled and its evolution simulated by means of the COSI code, in order to investigate the material fluxes to be managed in the European region. Different plausible scenarios have been identified to forecast the evolution of European nuclear energy production, including one involving the introduction of Adiabatic Lead Fast Reactors, and compared in order to better analyze the advantages introduced by the adoption of new-concept reactors.
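The equilibrium idea behind the adiabatic design can be stated compactly as follows (a sketch of the general concept, not the thesis's exact formalism): over one cycle the fuel must reproduce its own composition once fission products are removed and only a fertile feed is added, with burn-up governed by Bateman-type transmutation and decay equations:

\[
\frac{d\mathbf{N}}{dt} = \mathbb{A}(\phi)\,\mathbf{N}(t),
\qquad
\mathcal{R}\!\left[\mathbf{N}_{\mathrm{eq}}(T)\right] + \mathbf{F} = \mathbf{N}_{\mathrm{eq}}(0),
\]

where \(\mathbf{N}\) is the isotopic vector, \(\mathbb{A}(\phi)\) the burn-up (transmutation and decay) matrix for flux \(\phi\), \(T\) the cycle length, \(\mathcal{R}\) the reprocessing operator that removes fission products while fully recycling plutonium and minor actinides, and \(\mathbf{F}\) the fertile-only (natural or depleted uranium) feed.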
Finally, since both ELSY and the ALFR represent new-concept systems based on innovative solutions, the neutronic design of a demonstrator reactor has been carried out: such a system is intended to prove the viability of the technology to be implemented in the First-of-a-Kind industrial power plant, with the aim of validating the general strategy to the largest possible extent. It was therefore chosen to base the DEMO design on a compromise between the demonstration of developed technology and the testing of emerging technology, in order to reduce uncertainties about construction and licensing, both by validating the main ELSY/ALFR features and performance and by qualifying numerical codes and tools.
Abstract:
Different tools have been used to set up and adopt the model for the fulfillment of the objective of this research.
1. The Model
The base model used is the Analytic Hierarchy Process (AHP), adapted with the aim of performing a benefit-cost analysis. The AHP, developed by Thomas Saaty, is a multi-criteria decision-making technique which decomposes a complex problem into a hierarchy. It is used to derive ratio scales from both discrete and continuous paired comparisons in multilevel hierarchic structures. These comparisons may be taken from actual measurements or from a fundamental scale that reflects the relative strength of preferences and feelings.
2. Tools and methods
2.1. The Expert Choice software. The software Expert Choice is a tool that allows each operator to easily implement the AHP model at every stage of the problem.
2.2. Personal interviews at the farms. For this research, the EMAS-certified farms of the Emilia-Romagna region were identified. The information was provided by the EMAS centre in Vienna. Personal interviews were carried out at each farm in order to obtain a complete and realistic judgment for each criterion of the hierarchy.
2.3. Questionnaire. A supporting questionnaire was also delivered and used for the interviews.
3. Elaboration of the data
After data collection, the data were elaborated using the Expert Choice software.
4. Results of the analysis
The result of the figures above (see the other document) is a series of numbers which are fractions of unity. Each has to be interpreted as the relative contribution of that element to the fulfillment of the corresponding objective. Calculating the benefit/cost ratio for each alternative (worked out below) gives the following:
Alternative one: implement EMAS. Benefits ratio: 0.877; costs ratio: 0.815; benefit/cost ratio: 0.877/0.815 = 1.08.
Alternative two: do not implement EMAS. Benefits ratio: 0.123; costs ratio: 0.185; benefit/cost ratio: 0.123/0.185 = 0.66.
As stated above, the alternative with the highest ratio is the best solution for the organization. This means that the research carried out and the model implemented suggest that EMAS adoption is the best alternative for the agricultural sector. It has to be noted, however, that the ratio is 1.08, a relatively low positive value. This shows the fragility of the conclusion and suggests a careful examination of the benefits and costs of each farm before adopting the scheme. On the other hand, the result should be taken into consideration by policy makers in order to strengthen their interventions regarding adoption of the scheme in the agricultural sector.
According to the AHP elaboration of judgments, the main considerations on benefits are the following:
- Legal compliance seems to be the most important benefit for the agricultural sector, since its rank is 0.471.
- The next two most important benefits are improved internal organization (ranking 0.230), followed by competitive advantage (ranking 0.221), mostly due to the sub-element improved image (ranking 0.743).
- Finally, even though incentives are not ranked among the most important elements, the financial ones seem to have been decisive in the decision-making process.
According to the AHP elaboration of judgments, the main considerations on costs are the following:
- External costs seem to be far more important than internal ones (ranking 0.857 versus 0.143), suggesting that EMAS consultancy and verification costs remain the biggest obstacle.
- The implementation of the EMS is the most challenging element among the internal costs (ranking 0.750).
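Restating the benefit-cost comparison above:

\[
\frac{B_{\mathrm{EMAS}}}{C_{\mathrm{EMAS}}} = \frac{0.877}{0.815} \approx 1.08,
\qquad
\frac{B_{\mathrm{no\;EMAS}}}{C_{\mathrm{no\;EMAS}}} = \frac{0.123}{0.185} \approx 0.66,
\]

so implementing EMAS is preferred, but only by a narrow margin.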
Abstract:
In the era of the Internet of Everything, a user with a handheld or wearable device equipped with sensing capability has become a producer as well as a consumer of information and services. The more powerful these devices get, the more likely it is that they will generate and share content locally, leading to the presence of distributed information sources and a diminishing role for centralized servers. In current practice, we rely on infrastructure acting as an intermediary that provides access to the data. However, infrastructure-based connectivity might not always be available, or might not be the best alternative. Moreover, it is often the case that the data, and the processes acting upon them, are of local scope. Queries about a nearby object, an information source, a process, an experience, an ability, etc. could be answered locally without reliance on infrastructure-based platforms. The data might have limited temporal validity and be bound to a geographical area and/or to the social context in which the user is immersed. In this envisioned scenario, users could interact locally without the need for a central authority; hence the case for an infrastructure-less, provider-less platform. The data are owned by the users and consulted locally, as opposed to the current approach of making them globally available and persistent forever. From a technical viewpoint, this network resembles a Delay/Disruption Tolerant Network, in which consumers and producers might be spatially and temporally decoupled and exchange information with each other in an ad hoc fashion. To this end, we propose some novel data gathering and dissemination strategies for use in urban-wide environments which do not rely on strict infrastructure mediation. While preserving the general aspects of our study and without loss of generality, we focus our attention on practical application scenarios which help us capture the characteristics of opportunistic communication networks.
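The store-carry-forward exchange underlying such infrastructure-less dissemination can be sketched as follows; this is a generic summary-vector exchange shown only to illustrate the communication model, not the specific gathering and dissemination strategies proposed in the thesis.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Generic store-carry-forward sketch: on each opportunistic contact, two nodes
// exchange summaries of the message IDs they carry and pull what they miss.
public class OpportunisticNode {
    private final Map<String, byte[]> buffer = new HashMap<>(); // messageId -> payload

    public void publish(String messageId, byte[] payload) {
        buffer.put(messageId, payload); // locally generated content stays on the node
    }

    public Set<String> summaryVector() {
        return new HashSet<>(buffer.keySet()); // copy of the IDs this node carries
    }

    // Called when this node comes into radio range of a peer.
    public void onContact(OpportunisticNode peer) {
        for (String id : peer.summaryVector()) {
            if (!buffer.containsKey(id)) {
                buffer.put(id, peer.buffer.get(id)); // pull messages this node misses
            }
        }
        for (String id : summaryVector()) {
            if (!peer.buffer.containsKey(id)) {
                peer.buffer.put(id, buffer.get(id)); // push messages the peer misses
            }
        }
    }
}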