8 results for User profiles
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The aim of this research is to compare two metropolitan contexts, Valencia and Bologna, with respect to their work-placement support practices addressed to disadvantaged groups, in particular people with psychotropic substance dependence problems. The study proposes a comparison across several cross-cutting themes (type of actions deployed, territorial organisation and governance, user profiles, social inclusion, involvement of the productive sector) and highlights the elements that allow us to identify and point out both transferable good practices and planning guidelines. It starts from the premise that enabling (capacitating) a person means, first of all, offering them adequate opportunities for choice, in Sen's sense and as explained by Nussbaum herself, but above all accompanying and supporting them along the path of labour-market and, in parallel, social inclusion. The need that emerged is for motivational and guidance support following a socio-educational approach capable of giving the person an integrated, individualised response, one that acts on autonomy, self-esteem and the elaboration of their own life and work experiences, as well as on contextual elements such as housing and friendship and family networks, which are often compromised. The distinctive element that makes it possible to act in this direction is the collaboration among the different services and the co-design of the pathway with the users themselves. Work placement is a very complex topic that brings several aspects into play: social change and the transformations of work; the emergence of new vulnerable groups and the risk of worsening exclusion for the "traditional" vulnerable groups; the importance of work in building identity and recognition; the impact of active labour-market policies on disadvantaged groups and the concepts of capacitation and activation; the role of social capital and the emergence of new forms of welfare; the network of actors involved in the placement process and the issue of territorial governance.
Abstract:
Matita (Italian for "pencil") is a new interactive theorem prover under development at the University of Bologna. When compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script for storing textual commands for the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script-management technique underlying the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of whom this thesis' author is one. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to the topic of user interaction with theorem provers, to which this thesis' author was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below.
Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Since mathematical notation is ambiguous, parsing formulae typeset as mathematicians like to write them down on paper is a challenging task; a challenge neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in familiar mathematical notation.
Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics together. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management. Such interfaces do not allow the execution point to be positioned inside complex tacticals, thus introducing a trade-off between the usefulness of structuring scripts and a tedious big-step execution behavior during script replaying. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner.
Extensible yet meaningful notation. Proof assistant users often need to create new mathematical notation in order to ease the use of new concepts. The framework used in Matita for dealing with extensible notation both accounts for high-quality bidimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae through their rendered forms is also possible.
Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which can independently try to complete open sub-goals of the proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can interactively or automatically apply them to the current proof. Another innovative aspect of Matita, only touched on marginally in this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
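To make the disambiguation idea concrete, the following Python fragment is a purely illustrative sketch, not Matita's actual implementation: candidate interpretations of each ambiguous symbol are enumerated and only the combinations accepted by a (stub) type checker are kept. The candidate tables and the `type_checks` function are hypothetical.

```python
from itertools import product

# Hypothetical candidate interpretations for ambiguous symbols:
# "+" may denote natural, integer, or vector addition; "0" the
# corresponding neutral element.
CANDIDATES = {
    "+": ["nat_plus", "int_plus", "vec_plus"],
    "0": ["nat_zero", "int_zero", "vec_zero"],
}

def type_checks(interpretation: dict) -> bool:
    """Stub for the refiner/type checker: accept only interpretations
    whose choices live in the same carrier (a crude consistency test)."""
    carriers = {choice.split("_")[0] for choice in interpretation.values()}
    return len(carriers) == 1

def disambiguate(symbols):
    """Enumerate all interpretations of the ambiguous symbols and
    return those accepted by the type checker."""
    keys = list(symbols)
    results = []
    for combo in product(*(CANDIDATES[k] for k in keys)):
        interpretation = dict(zip(keys, combo))
        if type_checks(interpretation):
            results.append(interpretation)
    return results

if __name__ == "__main__":
    # For the formula "0 + x", both symbols must be resolved coherently.
    for interp in disambiguate(["+", "0"]):
        print(interp)
```

In the real engine the pruning is interleaved with typing rather than done after a full enumeration, which is what keeps it efficient; the sketch only shows the enumerate-and-filter principle.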
Abstract:
Interactive theorem provers (ITPs for short) are tools whose final aim is to certify proofs written by human beings. To reach that objective they have to fill the gap between the high-level language used by humans for communicating and reasoning about mathematics and the lower-level language that a machine is able to "understand" and process. The user perceives this gap in terms of missing features or inefficiencies. The developer tries to accommodate the user's requests without increasing the already high complexity of these applications. We believe that satisfactory solutions can only come from a strong synergy between users and developers. We devoted most of our PhD to designing and developing the Matita interactive theorem prover. The software was born in the Computer Science Department of the University of Bologna as the result of composing together all the technologies developed by the HELM team (to which we belong) for the MoWGLI project. The MoWGLI project aimed at giving accessibility through the web to the libraries of formalised mathematics of various interactive theorem provers, taking Coq as the main test case. The motivations for giving life to a new ITP are:
• to study the architecture of these tools, with the aim of understanding the source of their complexity;
• to exploit such knowledge to experiment with new solutions that, for backward compatibility reasons, would be hard (if not impossible) to test on a widely used system like Coq.
Matita is based on the Curry-Howard isomorphism, adopting the Calculus of Inductive Constructions (CIC) as its logical foundation. Proof objects are thus, to some extent, compatible with the ones produced with the Coq ITP, which is itself able to import and process the ones generated using Matita. Although the systems have a lot in common, they share no code at all, and even most of the algorithmic solutions are different. The thesis is composed of two parts in which we respectively describe our experience as a user and as a developer of interactive provers. In particular, the first part is based on two different formalisation experiences:
• our internship in the Mathematical Components team (INRIA), which is formalising the finite group theory required to attack the Feit-Thompson Theorem. To tackle this result, which gives an effective classification of finite groups of odd order, the team adopts the SSReflect Coq extension, developed by Georges Gonthier for the proof of the Four Colour Theorem;
• our collaboration on the D.A.M.A. project, whose goal is the formalisation of abstract measure theory in Matita, leading to a constructive proof of Lebesgue's Dominated Convergence Theorem.
The most notable issues we faced, analysed in this part of the thesis, are the following: the difficulties arising when using "black box" automation in large formalisations; the impossibility for a user (especially a newcomer) to master the context of a library of already formalised results; the uncomfortable big-step execution of proof commands historically adopted in ITPs; the difficult encoding of mathematical structures with a notion of inheritance in a type theory without subtyping like CIC.
In the second part of the manuscript many of these issues are analysed through the lens of an ITP developer, describing the solutions we adopted in the implementation of Matita to address them: integrated searching facilities to assist the user in handling large libraries of formalised results; a small-step execution semantics for proof commands; a flexible implementation of coercive subtyping allowing multiple inheritance with shared substructures; and automatic tactics, integrated with the searching facilities, that generate proof commands (and not only proof objects, which are usually kept hidden from the user), one of which is specifically designed to be user-driven.
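The small-step execution idea can be illustrated with a minimal Python sketch (purely illustrative, not Matita's actual implementation or semantics): the script is a nested structure of atomic commands, and the interpreter advances the execution point one atomic command at a time, even inside a compound block. All class and command names are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Atomic:
    """A single proof command, e.g. one tactic application."""
    name: str

@dataclass
class Block:
    """A compound construct (e.g. a tactical) grouping several commands."""
    items: List["Command"]

Command = Union[Atomic, Block]

def flatten(cmd: Command) -> List[Atomic]:
    """Small-step view: expose every atomic command as a separate step."""
    if isinstance(cmd, Atomic):
        return [cmd]
    steps: List[Atomic] = []
    for item in cmd.items:
        steps.extend(flatten(item))
    return steps

class ScriptPlayer:
    """Execution point that can be moved forward one atomic step at a time."""
    def __init__(self, script: List[Command]):
        self.steps = [s for cmd in script for s in flatten(cmd)]
        self.point = 0  # index of the next step to execute

    def step_forward(self) -> str:
        step = self.steps[self.point]
        self.point += 1
        return step.name

if __name__ == "__main__":
    # Big-step execution would run the whole Block at once; here we can
    # stop between 'case1' and 'case2' even though they sit inside it.
    script = [Atomic("intro"), Block([Atomic("case1"), Atomic("case2")])]
    player = ScriptPlayer(script)
    print(player.step_forward())  # intro
    print(player.step_forward())  # case1
```

The point of the design is that the user interface never has to treat a structured tactical as an indivisible unit during replay, which is what causes the uncomfortable big-step behaviour mentioned above.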
Abstract:
Nuclear Magnetic Resonance (NMR) is a branch of spectroscopy based on the fact that many atomic nuclei may be oriented by a strong magnetic field and will absorb radiofrequency radiation at characteristic frequencies. The parameters that can be measured on the resulting spectral lines (line positions, intensities, line widths, multiplicities, and transients in time-dependent experiments) can be interpreted in terms of molecular structure, conformation, molecular motion and other rate processes. In this way, high-resolution (HR) NMR allows qualitative and quantitative analysis of samples in solution, in order to determine the structure of molecules in solution and beyond. In the past, high-field NMR spectroscopy was mainly concerned with the elucidation of chemical structure in solution, but today it is emerging as a powerful exploratory tool for probing biochemical and physical processes. It represents a versatile tool for the analysis of foods. In the literature, many NMR studies have been reported on different types of food, such as wine, olive oil, coffee, fruit juices, milk, meat, egg, starch granules, flour, etc., using different NMR techniques. Traditionally, univariate analytical methods have been used to explore spectroscopic data. These methods measure or select a single descriptive variable from the whole spectrum and, in the end, only this variable is analyzed. This univariate approach, applied to HR-NMR data, leads to several problems, due especially to the complexity of an NMR spectrum. In fact, the latter is composed of different signals belonging to different molecules, while the same molecule can also give rise to several, generally strongly correlated, signals. Univariate methods, in this case, take into account only one or a few variables, causing a loss of information. Thus, when dealing with complex samples like foodstuffs, univariate analysis of spectral data is not powerful enough. Spectra need to be considered in their entirety and, to analyse them, the whole data matrix must be taken into consideration: chemometric methods are designed to treat such multivariate data. Multivariate data analysis is used for a number of distinct purposes, and its aims can be divided into three main groups:
• data description (explorative data-structure modelling of any generic n-dimensional data matrix, e.g. PCA);
• regression and prediction (PLS);
• classification and prediction of class membership for new samples (LDA, PLS-DA and ECVA).
The aim of this PhD thesis was to verify the possibility of identifying and classifying plants or foodstuffs into different classes, based on the concerted variation in metabolite levels detected by NMR spectra, using multivariate data analysis as a tool to interpret the NMR information. It is important to underline that the results obtained are useful for pointing out the metabolic consequences of a specific modification to foodstuffs, avoiding the use of a targeted analysis for the different metabolites. The data analysis is performed by applying chemometric multivariate techniques to the dataset of acquired NMR spectra. The research work presented in this thesis is the result of a three-year PhD study.
This thesis reports the main results obtained from two main activities: A1) evaluation of a data pre-processing system in order to minimize unwanted sources of variation due to different instrumental set-ups, manual spectra processing and sample preparation artefacts; A2) application of multivariate chemometric models in data analysis.
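A minimal, purely illustrative Python sketch of the kind of multivariate workflow described above (not the thesis' actual pipeline): each row of the data matrix is a binned NMR spectrum, PCA is used for exploratory data description and a PLS-DA-style classification is obtained by regressing dummy-coded class labels with scikit-learn's PLSRegression. The data, class labels and parameter choices are hypothetical.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

# Hypothetical data: 40 samples x 200 spectral bins, two classes (0/1).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))
y = np.repeat([0, 1], 20)

# Column-wise scaling, as commonly done before chemometric modelling.
X_scaled = StandardScaler().fit_transform(X)

# Exploratory data description: project the spectra onto two principal components.
scores = PCA(n_components=2).fit_transform(X_scaled)
print("PCA scores shape:", scores.shape)

# PLS-DA: regress a dummy-coded class variable on the spectra.
Y = np.column_stack([(y == 0).astype(float), (y == 1).astype(float)])
pls = PLSRegression(n_components=2).fit(X_scaled, Y)
predicted = pls.predict(X_scaled).argmax(axis=1)
print("training accuracy:", (predicted == y).mean())
```

In practice the model would of course be validated on held-out samples (or by cross-validation) rather than on the training set, and the pre-processing of activity A1 would precede the scaling step.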
Abstract:
This PhD thesis was entirely developed at the Telescopio Nazionale Galileo (TNG, Roque de los Muchachos, La Palma, Canary Islands) with the aim of designing, developing and implementing a new Graphical User Interface (GUI) for the Near Infrared Camera Spectrometer (NICS) installed at the Nasmyth A focus of the telescope. The idea of a new GUI for NICS arose from the need to optimize the astronomers' work through a set of powerful tools not present in the existing GUI, such as the possibility of automatically moving an object onto the slit, or of performing a very preliminary image analysis and spectrum extraction. The new GUI also provides a wide and versatile image display, an automatic procedure to detect astronomical objects, and a facility for automatic image crosstalk correction. In order to test the overall functioning of the new GUI for NICS, and to provide some information on the atmospheric extinction at the TNG site, two telluric standard stars, Hip031303 and Hip031567, were observed spectroscopically during engineering time. The NICS set-up used was as follows: Large Field (0.25''/pixel) mode, 0.5'' slit, and spectral dispersion through the AMICI prism (R~100) and the higher-resolution (R~1000) JH and HK grisms.
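As a purely illustrative sketch of how an "automatically move the object onto the slit" feature could be approached (the actual NICS GUI implementation is not described here), the following Python fragment detects the brightest source in a frame by thresholding and flux-weighted centroiding and then converts the pixel offset to a hypothetical slit position into an offset in arcseconds; all function names, thresholds and coordinates are hypothetical.

```python
import numpy as np

def detect_brightest_source(image: np.ndarray, nsigma: float = 5.0):
    """Return the flux-weighted centroid (x, y) of the above-threshold
    pixels, using a crude estimate of the background and its noise."""
    background = np.median(image)
    noise = np.std(image)
    mask = image > background + nsigma * noise
    if not mask.any():
        raise ValueError("no source above threshold")
    ys, xs = np.nonzero(mask)
    weights = image[ys, xs] - background
    return (np.average(xs, weights=weights), np.average(ys, weights=weights))

def offset_to_slit(source_xy, slit_xy, pixel_scale=0.25):
    """Offset (arcsec) needed to move the detected source onto the slit,
    assuming the 0.25''/pixel Large Field scale."""
    dx = (slit_xy[0] - source_xy[0]) * pixel_scale
    dy = (slit_xy[1] - source_xy[1]) * pixel_scale
    return dx, dy

if __name__ == "__main__":
    # Hypothetical 100x100 frame with a fake star near pixel (x=70, y=40).
    img = np.random.default_rng(1).normal(100.0, 5.0, size=(100, 100))
    img[38:43, 68:73] += 500.0
    star = detect_brightest_source(img)
    print("telescope offset (arcsec):", offset_to_slit(star, slit_xy=(50, 50)))
```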
Abstract:
The activity of the Ph.D. student Juri Luca De Coi involved the research field of policy languages and can be divided into three parts. The first part of the Ph.D. work investigated the state of the art in policy languages, ending up with: (i) identifying the requirements up-to-date policy languages have to fulfill; (ii) defining a policy language able to fulfill such requirements (namely, the Protune policy language); and (iii) implementing an infrastructure able to enforce policies expressed in the Protune policy language. The second part of the Ph.D. work focused on simplifying the activity of defining policies and ended up with: (i) identifying a subset of the controlled natural language ACE in which to express Protune policies; (ii) implementing a mapping between ACE policies and Protune policies; and (iii) adapting the ACE Editor to guide users step by step in defining ACE policies. The third part of the Ph.D. work tested the feasibility of the chosen approach by applying it to meaningful real-world problems, among which: (i) the development of a security layer on top of RDF stores; and (ii) efficient policy-aware access to metadata stores. The research activity was performed in close collaboration with the Leibniz Universität Hannover and further European partners within the REWERSE, TENCompetence and OKKAM projects.
Abstract:
This doctoral thesis focuses on ground-based measurements of stratospheric nitric acid (HNO3) concentrations obtained by means of the Ground-Based Millimeter-wave Spectrometer (GBMS). Pressure-broadened HNO3 emission spectra are analyzed using a new inversion algorithm developed as part of this thesis work, and the retrieved vertical profiles are extensively compared to satellite-based data. This comparison effort plays a key role in establishing a long-term (1991-2010), global data record of stratospheric HNO3, with an expected impact on studies concerning ozone decline and recovery. The first part of this work focuses on the development of an ad hoc version of the Optimal Estimation Method (Rodgers, 2000) for retrieving HNO3 profiles from the spectra observed by the GBMS. I also performed a comparison between HNO3 vertical profiles retrieved with the OEM and those obtained with the old iterative Matrix Inversion method. Results show no significant differences in retrieved profiles and error estimates, with the OEM, however, providing additional information needed to better characterize the retrievals. A final section of this first part of the work is dedicated to a brief review of the application of the OEM to other trace gases observed by the GBMS, namely O3 and N2O. The second part of this study deals with the validation of HNO3 profiles obtained with the new inversion method. The first step was the validation of GBMS measurements of tropospheric opacity, which are a necessary ingredient in the calibration of any GBMS spectrum. This was achieved by means of comparisons among correlative measurements of water vapor column content (or Precipitable Water Vapor, PWV) since, in the spectral region observed by the GBMS, the tropospheric opacity is almost entirely due to water vapor absorption. In particular, I compared GBMS PWV measurements collected during the primary field campaign of the ECOWAR project (Bhawar et al., 2008) with simultaneous PWV observations obtained with Vaisala RS92k radiosondes, a Raman lidar, and an IR Fourier transform spectrometer. I found that GBMS PWV measurements are in good agreement with the other three data sets, exhibiting a mean difference between observations of ~9%. After this initial validation, GBMS HNO3 retrievals were compared to two sets of satellite data produced by the two NASA/JPL Microwave Limb Sounder (MLS) experiments (aboard the Upper Atmosphere Research Satellite (UARS) from 1991 to 1999, and on the Earth Observing System (EOS) Aura mission from 2004 to date). This part of my thesis falls within GOZCARDS (Global Ozone Chemistry and Related Trace gas Data Records for the Stratosphere), a multi-year project aimed at developing a long-term data record of stratospheric constituents relevant to the issues of ozone decline and expected recovery. This data record will be based mainly on satellite-derived measurements, but ground-based observations will be pivotal for assessing offsets between satellite data sets. Since the GBMS has been operated for more than 15 years, its nitric acid data record offers a unique opportunity for cross-calibrating HNO3 measurements from the two MLS experiments. I compare GBMS HNO3 measurements obtained from the Italian Alpine station of Testa Grigia (45.9° N, 7.7° E, elev. 3500 m) during the period February 2004 - March 2007, and from Thule Air Base, Greenland (76.5° N, 68.8° W), during the 2008/09 polar winter, with Aura MLS observations.
A similar intercomparison is made between UARS MLS HNO3 measurements and those carried out with the GBMS at South Pole, Antarctica (90° S), during most of 1993 and 1995. I assess systematic differences between GBMS and both UARS and Aura HNO3 data sets at seven potential temperature levels. Results show that, except for the measurements carried out at Thule, ground-based and satellite data sets are consistent within the errors at all potential temperature levels.
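For reference, the core retrieval equation of the Optimal Estimation Method (Rodgers, 2000), in its standard linear form, is

\[
\hat{x} \;=\; x_a \;+\; \left( K^{T} S_{\epsilon}^{-1} K + S_{a}^{-1} \right)^{-1} K^{T} S_{\epsilon}^{-1} \left( y - K x_a \right)
\]

where \(y\) is the measured spectrum, \(x_a\) the a priori HNO3 profile, \(K\) the weighting-function (Jacobian) matrix, \(S_{\epsilon}\) the measurement-noise covariance, \(S_a\) the a priori covariance, and \(\hat{x}\) the retrieved profile. This is only the textbook formulation; the ad hoc GBMS implementation described in the thesis (e.g. its iteration scheme and error characterization) may differ in its details.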
Abstract:
Biomedical analyses are becoming increasingly complex, with respect to both the type of data to be produced and the procedures to be executed. This trend is expected to continue in the future. The development of information and protocol management systems that can sustain this challenge is therefore becoming an essential enabling factor for all actors in the field. The use of custom-built solutions that require the biology domain expert to acquire or procure software engineering expertise in the development of the laboratory infrastructure is not fully satisfactory, because it incurs undesirable mutual knowledge dependencies between the two camps. We propose instead an infrastructure concept that enables domain experts to express laboratory protocols using proper domain knowledge, free from the interference and mediation of software implementation artefacts. In the system that we propose, this is made possible by basing the modelling language on an authoritative domain-specific ontology and then using modern model-driven architecture technology to transform the user models into software artefacts ready for execution on a multi-agent execution platform specialised for biomedical laboratories.
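As a purely illustrative sketch of the model-driven idea described above (not the actual infrastructure, whose ontology, modelling language and agent platform are not detailed here), the following Python fragment shows a declarative protocol model, expressed only in domain terms, being transformed into executable task objects; all class names, step names and parameters are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A protocol expressed purely in domain terms (the "user model").
protocol_model = [
    {"step": "centrifuge", "params": {"speed_rpm": 3000, "minutes": 10}},
    {"step": "incubate", "params": {"temperature_c": 37, "minutes": 30}},
]

@dataclass
class Task:
    """Executable artefact produced by the model-to-code transformation."""
    name: str
    action: Callable[[], None]

def make_task(step: Dict) -> Task:
    """Transform one domain-level step into an executable Task.
    In a real system this mapping would be driven by the domain ontology."""
    def action(step=step):
        print(f"executing {step['step']} with {step['params']}")
    return Task(name=step["step"], action=action)

def transform(model: List[Dict]) -> List[Task]:
    """Model-driven transformation: user model -> executable artefacts."""
    return [make_task(step) for step in model]

if __name__ == "__main__":
    for task in transform(protocol_model):
        task.action()
```

The separation of concerns is the point: the domain expert only edits the declarative model at the top, while the transformation and the execution platform remain the responsibility of the software layer.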