9 results for Information by segment

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

80.00%

Abstract:

Traditional logic gates are rapidly reaching the limits of miniaturization, and the overheating of these components is no longer negligible. A new physical approach to computation, "molecular quantum cellular automata", was proposed by Prof. C. S. Lent. The quantum-dot cellular automata (QCA) approach offers an attractive alternative to diode or transistor devices: its units encode binary information in two polarizations without current flow. The units of QCA theory are called QCA cells and can be realized in several ways; molecules can act as QCA cells at room temperature. In collaboration with STMicroelectronics, the Electrochemistry group of Prof. Paolucci, and the Nanotechnology laboratory in Lecce, we synthesized and studied with many techniques surface-active chiral bis-ferrocenes, conveniently designed to act as prototypical units for molecular computing devices. In studying the chemistry of ferrocene thoroughly, we found the opportunity to promote substitution reactions of ferrocenyl alcohols with various nucleophiles without the aid of Lewis acids as catalysts. The only interaction between water and the two reagents is involved in the formation of a carbocation, which is the true reactive species. We have generalized this concept to other benzyl alcohols that generate stabilized carbocations; carbocations described on Mayr's scale were fundamental for our research. Finally, we used these alcohols to alkylate aldehydes enantioselectively via organocatalysis.

Relevance:

80.00%

Abstract:

This thesis addresses the problem of localization, and analyzes its crucial aspects, within the context of cooperative WSNs. The three main issues discussed in the following are: network synchronization, position estimation, and tracking. Time synchronization is a fundamental requirement for every network. In this context, a new approach based on estimation theory is proposed to evaluate the ultimate performance limit in network time synchronization. In particular, the lower bound on the variance of the average synchronization error in a fully connected network is derived by taking into account the statistical characterization of the Message Delivering Time (MDT). Sensor network localization algorithms estimate the locations of sensors with initially unknown positions by using knowledge of the absolute positions of a few sensors and inter-sensor measurements such as distance and bearing measurements. Concerning the position estimation problem, two main contributions are given. The first is a new Semidefinite Programming (SDP) framework to analyze and solve the problem of flip ambiguity that afflicts range-based network localization algorithms with incomplete ranging information. The occurrence of flip-ambiguous nodes and of errors due to flip ambiguity is studied, and with this information a new SDP formulation of the localization problem is built. Finally, a flip-ambiguity-robust network localization algorithm is derived and its performance is studied by Monte Carlo simulations. The second contribution in the field of position estimation concerns multihop networks. A multihop network is a network with a low degree of connectivity, in which any given pair of nodes has to rely on one or more intermediate nodes (hops) in order to communicate. Two new distance-based source localization algorithms, highly robust to the distance overestimates typically present in multihop networks, are presented and studied. The last part of this thesis discusses a new low-complexity tracking algorithm, inspired by Fano's sequential decoding algorithm, for the position tracking of a user in a WLAN-based indoor localization system.
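
To make the range-based setting concrete, here is a minimal, hypothetical Python sketch (not the thesis's SDP or multihop algorithms) of position estimation from noisy anchor distances via non-linear least squares; the anchor layout, noise level, and true node position are invented for illustration. A poor initial guess or a weak anchor geometry can pull such an estimator toward a mirrored solution, which is exactly the flip-ambiguity problem that the SDP formulation targets.

```python
# Illustrative sketch: basic range-based position estimation by non-linear
# least squares. All coordinates and noise levels below are hypothetical.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])

# Noisy range measurements from the unknown node to each anchor
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0.0, 0.1, len(anchors))

def residuals(p):
    """Difference between modelled and measured anchor distances."""
    return np.linalg.norm(anchors - p, axis=1) - ranges

# A bad starting point with weak/incomplete ranging can converge to a
# flipped (mirrored) position -- the ambiguity the thesis addresses via SDP.
estimate = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print("estimated position:", estimate)
```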

Relevance:

80.00%

Abstract:

This book is dedicated to the Law and Economics analysis of the civil liability of securities underwriters for the damage caused by material misstatements of corporate information by securities issuers. It seeks to answer a series of important questions. Who are the underwriters, and what is their main role in a securities offering? Why is there a need for legal intervention in the underwriting market? What is so special about civil liability as an enforcement tool? How is civil liability used in the real world, and does it really reach its goals? Finally, is there a need for a change and, if so, by what means?

Relevance:

80.00%

Abstract:

In the last few years, a new generation of Business Intelligence (BI) tools called BI 2.0 has emerged to meet the new and ambitious requirements of business users. BI 2.0 not only introduces brand new topics, but in some cases re-examines past challenges from new perspectives, depending on market changes and needs. In this context, the term pervasive BI has gained increasing interest as an innovative and forward-looking perspective. This thesis investigates three different aspects of pervasive BI: personalization, timeliness, and integration. Personalization refers to the capacity of BI tools to customize the query result according to the user who takes advantage of it, facilitating the consumption of BI information by different types of users (e.g., front-line employees, suppliers, customers, or business partners). In this direction, the thesis proposes a model for On-Line Analytical Processing (OLAP) query personalization that reduces the query result to the information most relevant to the specific user. Timeliness refers to the timely provision of business information for decision-making. In this direction, this thesis defines a new Data Warehouse (DW) methodology, Four-Wheel-Drive (4WD), that combines traditional development approaches with agile methods; the aim is to accelerate project development and reduce software costs, so as to decrease the number of DW project failures and favour the penetration of BI tools even in small and medium companies. Integration refers to the ability of BI tools to let users access information wherever it can be found, using the device they prefer. To this end, this thesis proposes the Business Intelligence Network (BIN), a peer-to-peer data warehousing architecture in which a user can formulate an OLAP query on their own system and retrieve relevant information from both the local system and the DWs of the network, preserving autonomy and independence.
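
As an illustration only (the thesis defines its own OLAP personalization model), the following hypothetical Python sketch conveys the general idea of reducing a query result to the cells most relevant to a user profile; the fact table, profile weights, and scoring rule are all invented.

```python
# Illustrative sketch: rank the cells of a query result by a simple
# user-preference score and keep only the top ones. Data are hypothetical.
import pandas as pd

# Result of a hypothetical OLAP query: sales by region and product category
result = pd.DataFrame({
    "region":   ["North", "North", "South", "South", "East"],
    "category": ["Food", "Tech", "Food", "Tech", "Food"],
    "sales":    [120, 340, 90, 410, 75],
})

# A user profile: weights expressing how much each dimension value matters
profile = {"region": {"North": 1.0, "South": 0.2}, "category": {"Tech": 1.0}}

def relevance(row):
    """Sum the profile weights matched by this cell (0 if nothing matches)."""
    return sum(profile.get(dim, {}).get(row[dim], 0.0) for dim in ("region", "category"))

result["relevance"] = result.apply(relevance, axis=1)
top_k = result.sort_values("relevance", ascending=False).head(3)
print(top_k)
```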

Relevance:

80.00%

Abstract:

Through a case study centred on Marie-Anne Paulze-Lavoisier (1758-1836), this thesis deals with practices of annotation and, more generally, of recording information on paper between the 1770s and the 1830s. Known today as the wife and collaborator of the chemist Antoine-Laurent Lavoisier (1743-1794), whose theories she promoted through her work as a translator and illustrator, Paulze-Lavoisier is presented here in the role of "secrétaire", that is, the figure charged with putting observations of various kinds into writing. The focus is, in particular, on Lavoisier's Registres de laboratoire, 14 petit in-folio notebooks containing the accounts of experiments conducted in Paris between 1772 and 1788, a significant part of which is in her hand. So far studied within a history of scientific thought interested in the "ideas" they contain, the Registres are reread here in the light of a "material" approach, attentive to the practices and conditions of their compilation and of their reuse over time. The thesis thus investigates the functions of these notebooks with respect to experimental practice, while reconstructing the features and evolution of the secretarial role entrusted to Paulze-Lavoisier. The focus on Paulze-Lavoisier makes it possible, on the one hand, to raise new questions about the relationship between note-taking, gender, and sociability, as well as, at another level, about the so-called "invisibility" of assistants; on the other hand, it provides the material for a long history of the Registres, from the first marks she made in them in the early 1770s to the manipulations they underwent, at her wish, in the first decades of the nineteenth century.

Relevance:

80.00%

Abstract:

On May 25, 2018, the EU introduced the General Data Protection Regulation (GDPR), which offers EU citizens protection for their personal information by requiring companies to explain clearly how people's information is used. To comply with the new law, European and non-European companies interacting with EU citizens undertook a massive data re-permission-request campaign. However, while on the one hand the EU regulator was particularly specific in defining the conditions for obtaining access to customers' data, on the other hand it did not specify how the communication between firms and consumers should be designed. This left firms free to develop their re-permission emails as they liked, plausibly coupling the informative nature of these privacy-related communications with other persuasive techniques to maximize data disclosure. Consequently, we took advantage of this colossal wave of simultaneous requests to provide insights into two issues. First, we investigate how companies across industries and countries chose to frame their requests. Second, we investigate which factors influenced the selection of alternative re-permission formats. To achieve these goals, we examine the content of a sample of 1506 re-permission emails sent by 1396 firms worldwide, and we identify the dominant "themes" characterizing these emails. We then relate these themes to both the expected benefits firms may derive from data usage and the possible risks they may face from not being completely compliant with the spirit of the law. Our results show that: (1) most firms enriched their re-permission messages with persuasive arguments aimed at increasing consumers' likelihood of relinquishing their data; (2) the use of persuasion is the outcome of a difficult tradeoff between costs and benefits; (3) most companies acted in their self-interest and "gamed the system". Our results have important implications for policymakers, managers, and customers of the online sector.

Relevance:

80.00%

Abstract:

Today we live in an age where the internet and artificial intelligence allow us to search for information through impressive amounts of data, opening up revolutionary new ways to make sense of reality and understand our world. However, there is still room for improvement in exploiting the full potential of large amounts of explainable information by distilling it automatically into intuitive and user-centred explanations. For instance, different people (or artificial agents) may search for and request different types of information in a different order, so it is unlikely that a short explanation can suffice for all needs in the most generic case. Moreover, dumping a large portion of explainable information into a one-size-fits-all representation may also be sub-optimal, as the needed information may be scarce and dispersed across hundreds of pages. The aim of this work is to investigate how to automatically generate (user-centred) explanations from heterogeneous and large collections of data, with a focus on the concept of explanation in a broad sense, as a critical artefact for intelligence, regardless of whether it is human or robotic. Our approach builds on and extends Achinstein's philosophical theory of explanations, where explaining is an illocutionary (i.e., broad but relevant) act of usefully answering questions. Specifically, we provide the theoretical foundations of Explanatory Artificial Intelligence (YAI), formally defining a user-centred explanatory tool and the space of all possible explanations, or explanatory space, generated by it. We present empirical results in support of our theory, showcasing the implementation of YAI tools and strategies for assessing explainability. To justify and evaluate the proposed theories and models, we considered case studies at the intersection of artificial intelligence and law, particularly European legislation. Our tools helped produce better explanations of software documentation and legal texts for humans, and of complex regulations for reinforcement learning agents.
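
As a rough illustration of the idea (not the formal YAI definitions), the hypothetical Python sketch below models an explanatory space as a graph whose nodes hold pieces of explainable information and whose edges are follow-up questions, so that a user-centred explanation is simply the sequence of answers produced by the questions a particular user actually asks; all content and question labels are invented.

```python
# Illustrative sketch: an "explanatory space" as a question-labelled graph.
# Node texts and question labels below are hypothetical.
explanatory_space = {
    "overview": {
        "text": "The system denies loan applications with a score below 0.4.",
        "questions": {"Why this threshold?": "threshold",
                      "How is the score computed?": "score"},
    },
    "threshold": {
        "text": "The 0.4 threshold was chosen to keep the default rate under 2%.",
        "questions": {"How is the score computed?": "score"},
    },
    "score": {
        "text": "The score is a logistic model over income, debt and payment history.",
        "questions": {},
    },
}

def explain(start, questions_asked):
    """Follow the user's questions through the space, collecting answers."""
    node, answers = start, [explanatory_space[start]["text"]]
    for q in questions_asked:
        node = explanatory_space[node]["questions"].get(q)
        if node is None:
            break
        answers.append(explanatory_space[node]["text"])
    return answers

print(explain("overview", ["Why this threshold?", "How is the score computed?"]))
```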

Relevance:

40.00%

Abstract:

In this study, a new, fully non-linear approach to Local Earthquake Tomography is presented. Local Earthquake Tomography (LET) is a non-linear inversion problem that allows the joint determination of earthquake parameters and velocity structure from the arrival times of waves generated by local sources. Since the early developments of seismic tomography, several inversion methods have been developed to solve this problem in a linearized way. In the framework of Monte Carlo sampling, we developed a new code based on the Reversible Jump Markov chain Monte Carlo (RJ-MCMC) sampling method. It is a trans-dimensional approach in which the number of unknowns, and thus the model parameterization, is treated as one of the unknowns. I show that our new code overcomes major limitations of linearized tomography, opening a new perspective in seismic imaging. Synthetic tests demonstrate that our algorithm is able to produce a robust and reliable tomography without the need for subjective a priori assumptions about starting models and parameterization. Moreover, it provides a more accurate estimate of the uncertainties on the model parameters. It is therefore very suitable for investigating the velocity structure in regions that lack accurate a priori information. Synthetic tests also reveal that the absence of regularization constraints allows more information to be extracted from the observed data, and that the velocity structure can be detected even in regions where the density of rays is low and standard linearized codes fail. I also present high-resolution Vp and Vp/Vs models of two widely investigated regions: the Parkfield segment of the San Andreas Fault (California, USA) and the area around the Alto Tiberina fault (Umbria-Marche, Italy). In both cases, the models obtained with our code show a substantial improvement in the data fit compared with models obtained from the same data sets with linearized inversion codes.
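
For readers unfamiliar with trans-dimensional sampling, the following hypothetical Python sketch (not the thesis code) applies a stripped-down reversible-jump scheme to a toy 1-D piecewise-constant velocity profile: birth and death moves change the number of cells and, under the simplifying assumption of uniform priors with new velocities drawn from their prior, the acceptance test reduces to a likelihood ratio. The synthetic data, priors, and proposal widths are all invented for illustration.

```python
# Illustrative sketch: minimal 1-D trans-dimensional (reversible-jump) MCMC
# for a piecewise-constant velocity profile. Everything here is a toy setup.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" travel-time data along a vertical profile
depths = np.linspace(0.0, 10.0, 40)                  # receiver depths (km)
true_v = np.where(depths < 4.0, 3.0, 5.5)            # true velocity (km/s)
t_obs = np.cumsum(np.gradient(depths) / true_v)      # crude travel times
sigma = 0.05
t_obs += rng.normal(0.0, sigma, t_obs.size)

V_MIN, V_MAX = 2.0, 7.0       # uniform prior on cell velocities
K_MIN, K_MAX = 1, 20          # uniform prior on number of cells

def forward(nodes_z, nodes_v):
    """Predict travel times for a model defined by 1-D Voronoi nodes."""
    idx = np.abs(depths[:, None] - nodes_z[None, :]).argmin(axis=1)
    return np.cumsum(np.gradient(depths) / nodes_v[idx])

def log_like(nodes_z, nodes_v):
    r = t_obs - forward(nodes_z, nodes_v)
    return -0.5 * np.sum((r / sigma) ** 2)

# Start from a single-cell model
z, v = np.array([5.0]), np.array([4.0])
ll = log_like(z, v)
n_cells_samples = []

for it in range(20000):
    move = rng.choice(["perturb", "birth", "death"])
    if move == "perturb":                              # fixed-dimension update
        j = rng.integers(z.size)
        v_new = v.copy()
        v_new[j] += rng.normal(0.0, 0.2)
        if not (V_MIN <= v_new[j] <= V_MAX):
            continue
        ll_new = log_like(z, v_new)
        if np.log(rng.random()) < ll_new - ll:
            v, ll = v_new, ll_new
    elif move == "birth" and z.size < K_MAX:           # add a cell
        z_new = np.append(z, rng.uniform(depths[0], depths[-1]))
        v_new = np.append(v, rng.uniform(V_MIN, V_MAX))   # value drawn from prior
        ll_new = log_like(z_new, v_new)
        # Simplified acceptance: with uniform priors and the new velocity drawn
        # from its prior, proposal and prior terms cancel (likelihood ratio only).
        if np.log(rng.random()) < ll_new - ll:
            z, v, ll = z_new, v_new, ll_new
    elif move == "death" and z.size > K_MIN:           # remove a cell
        j = rng.integers(z.size)
        z_new, v_new = np.delete(z, j), np.delete(v, j)
        ll_new = log_like(z_new, v_new)
        if np.log(rng.random()) < ll_new - ll:
            z, v, ll = z_new, v_new, ll_new
    if it > 5000 and it % 10 == 0:                     # record after burn-in
        n_cells_samples.append(z.size)

print("posterior mean number of cells:", np.mean(n_cells_samples))
```

The point of the sketch is that the dimension of the model (here, the number of velocity cells) is itself sampled, so no parameterization has to be fixed in advance, which is the property the abstract highlights.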