9 results for Learning from one Example
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Although errors can foster learning, they may also be perceived as something to avoid when they are associated with negative consequences (e.g., receiving a bad grade or being mocked by classmates). Such adverse perceptions may trigger negative emotions and error-avoidance attitudes, limiting the possibility of using errors for learning. These student reactions may be influenced by the relational and cultural aspects of errors that characterise the learning environment. Accordingly, the main aim of this research was to investigate whether relational and cultural characteristics associated with errors affect the psychological mechanisms triggered by making mistakes. In the theoretical part, we described the role of errors in learning using an integrated multilevel approach (i.e., psychological, relational, and cultural levels of analysis). We then presented three studies that analysed how cultural and relational error-related variables affect psychological aspects. Each study adopted a specific empirical methodology (qualitative, experimental, and correlational, respectively) and investigated a different sample (teachers, primary school pupils, and middle school students). Findings of study one (cultural level) highlighted that errors acquire different meanings, which are associated with different teacher error-handling strategies (e.g., supporting or penalising errors). Study two (relational level) demonstrated that teachers' supportive error-handling strategies promote students' perceptions of being in a positive error climate. Findings of study three (relational and psychological levels) showed that a positive error climate fosters students' adaptive reactions towards errors and their learning outcomes. Overall, our findings indicated that several variables influence students' learning-from-errors process and that teachers play an important role in conveying specific meanings of errors during learning activities, dealing with students' mistakes supportively, and establishing an error-friendly classroom environment.
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and dealing with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizable training set and notable computational effort. Methods for cross-domain text categorization have been proposed, allowing a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving parameter tuning. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model that distinguishes the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvement, but models generated from one domain prove to be effectively reusable in a different one.
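The abstract names nearest centroid classification with iterative adaptation; the concrete details below (TF-IDF features, cosine similarity, the adapt_centroids helper and its parameters) are illustrative assumptions, not the author's exact algorithm. A minimal sketch in Python:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def adapt_centroids(source_docs, source_labels, target_docs, n_iter=10):
    """Cross-domain nearest-centroid categorization (illustrative sketch)."""
    vec = TfidfVectorizer(stop_words="english")
    X_src = vec.fit_transform(source_docs).toarray()
    X_tgt = vec.transform(target_docs).toarray()

    labels = sorted(set(source_labels))
    # Initial category profiles: mean TF-IDF vector of each source category.
    centroids = np.array([X_src[np.array(source_labels) == c].mean(axis=0)
                          for c in labels])

    for _ in range(n_iter):
        # Assign each target document to its most similar profile...
        assign = cosine_similarity(X_tgt, centroids).argmax(axis=1)
        # ...then rebuild the profiles from those assignments, gradually
        # adapting the known-domain centroids to the unknown domain.
        centroids = np.array([X_tgt[assign == k].mean(axis=0)
                              if (assign == k).any() else centroids[k]
                              for k in range(len(labels))])
    return [labels[k] for k in assign]
```

The appeal of such a scheme, as the abstract notes, is that it has few, easily tuned parameters (here essentially only the number of iterations) and runs quickly compared to heavier statistical machinery.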
Abstract:
Although the debate over what data science is has a long history and has not yet reached complete consensus, data science can be summarized as the process of learning from data. Guided by this vision, this thesis presents two independent data science projects developed in the scope of multidisciplinary applied research. The first part analyzes fluorescence microscopy images typically produced in life science experiments, where the objective is to count how many marked neuronal cells are present in each image. Aiming to automate the task to support research in the area, we propose a neural network architecture tuned specifically for this use case, cell ResUnet (c-ResUnet), and discuss the impact of alternative training strategies in overcoming particular challenges of our data. The approach provides good results in terms of both detection and counting, with performance comparable to that of human operators. As a meaningful addition, we release the pre-trained model and the Fluorescent Neuronal Cells dataset, which collects pixel-level annotations of where neuronal cells are located. In this way, we hope to help future research in the area and foster innovative methodologies for tackling similar problems. The second part deals with the problem of distributed data management in the context of LHC experiments, with a focus on supporting ATLAS operations concerning data transfer failures. In particular, we analyze error messages produced by failed transfers and propose a machine learning pipeline that leverages the word2vec language model and K-means clustering. This yields groups of similar errors that are presented to human operators as suggestions of potential issues to investigate. The approach is demonstrated on one full day of data, showing a promising ability to capture message content and provide meaningful groupings, in line with incidents previously reported by human operators.
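The second pipeline is described only at a high level; a minimal sketch of a word2vec-plus-K-means grouping of error messages, with every parameter value and the message-embedding choice (averaging word vectors) assumed for illustration:

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

def cluster_error_messages(messages, n_clusters=20):
    """Group similar transfer-error messages (illustrative sketch)."""
    # Naive tokenization; a real pipeline would normalize hosts, paths, IDs.
    tokenized = [m.lower().split() or ["<empty>"] for m in messages]

    # Learn word embeddings from the error-message corpus itself.
    w2v = Word2Vec(tokenized, vector_size=100, window=5, min_count=1)

    # Represent each message as the mean of its word vectors.
    vecs = np.array([np.mean([w2v.wv[t] for t in toks], axis=0)
                     for toks in tokenized])

    # K-means yields groups that operators can inspect one example at a time.
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vecs)
```

Each resulting cluster can then be summarized (e.g., by its most frequent message) and surfaced to operators as a candidate issue to investigate.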
Abstract:
Reinforcement Learning (RL) provides a powerful framework for sequential decision-making problems in which the transition dynamics are unknown or too complex to be represented. The RL approach is based on speculating what the best decision to make is, given sample estimates obtained from previous interactions, a recipe that has led to several breakthroughs in various domains, ranging from game playing to robotics. Despite their success, current RL methods hardly generalize from one task to another, and the kind of generalization obtained through unsupervised pre-training in non-sequential problems remains elusive. Unsupervised RL has recently emerged as a way to improve the generalization of RL methods. Like its non-sequential counterpart, the unsupervised RL framework comprises two phases: an unsupervised pre-training phase, in which the agent interacts with the environment without external feedback, and a supervised fine-tuning phase, in which the agent aims to efficiently solve a task in the same environment by exploiting the knowledge acquired during pre-training. In this thesis, we study unsupervised RL via state entropy maximization, in which the agent uses the unsupervised interactions to pre-train a policy that maximizes the entropy of its induced state distribution. First, we provide a theoretical characterization of the learning problem by considering a convex RL formulation that subsumes state entropy maximization. Our analysis shows that maximizing the state entropy in finite trials is inherently harder than RL. Then, we study the state entropy maximization problem from an optimization perspective. In particular, we show that the primal formulation of the corresponding optimization problem can be (approximately) addressed through tractable linear programs. Finally, we provide the first practical methodologies for state entropy maximization in complex domains, both when the pre-training takes place in a single environment and when it spans multiple environments.
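In standard notation (a reconstruction from the abstract's description, not necessarily the thesis's exact formulation), writing $d^{\pi}$ for the state distribution induced by policy $\pi$, the pre-training objective is

```latex
\max_{\pi}\; H\!\left(d^{\pi}\right)
  \;=\; \max_{\pi}\; -\sum_{s \in \mathcal{S}} d^{\pi}(s)\,\log d^{\pi}(s) .
```

This is a special case of convex RL, in which a general concave utility of $d^{\pi}$ replaces the linear objective $\sum_{s} d^{\pi}(s)\,r(s)$ of standard reward maximization.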
Abstract:
The rational construction of the house. The writings and projects of Giuseppe Pagano
Description, themes and research objectives
The research aims to analyse the architecture of Giuseppe Pagano, which focuses on the theme of dwelling, through the reading of three of his house projects. On the one hand, these projects represent “minor” works not thoroughly known to Pagano’s contemporary critics; on the other, they exemplify a particular methodological approach, which serves the author to explore a theme closely linked to his theoretical thought. The house project is a key to Pagano’s research, given its ties to the socio-cultural and political conditions in which the architect was working, so that it becomes a mirror of his specific theoretical path, always in a state of becoming. Pagano understands architecture as a “servant of the human being”, subject to a “utilitarian slavery”, since it is a clear, essential and “modest” answer to specific human needs, free from aprioristic aesthetic and formal choices. It is a rational architecture sensu stricto; it constitutes a perfect synthesis between cause and effect and between function and form. The house needs to accommodate these principles because it is closely intertwined with human needs and intimately linked to a specific place, climatic conditions, and technical and economic possibilities. Besides, unlike his public masterpieces such as the Palazzo Gualino, the Istituto di Fisica and the Università Commerciale Bocconi, the house projects are representative of a precise design will, expressed in a more authentic way, partially freed from political influences and dogmatic preoccupations and, therefore, far from the attempt to pursue a specific expressive language. I believe that the house project better represents the “ingenuity”, freshness and “sincerity” that Pagano identifies with minor architecture, thereby revealing a more authentic expression of his understanding of the project. The thesis, by tracing Pagano’s theoretical research through the analysis of some of his designed and built works, therefore attempts to identify a specific methodological approach to his projects, which, developed over time, achieves a certain clarity in the 1930s. In fact, this methodological approach becomes more evident in his last projects, mainly regarding the house and urban space. These reflect the attempt to respond to new social needs and, at the same time, express a freer idea of built architecture, closely linked to the place and to the human being who dwells in it. The three chosen projects (Villa Colli, the Casa a struttura d’acciaio and Villa Caraccio) confront Pagano with different places, different clients and different economic and technical conditions, which, given the author’s biography, correspond to important historical and political circumstances. This is why the projects appear to be distant works, both linguistically and conceptually, to the point that one could define them as “eclectic”. However, I argue that this eclecticism is actually an added value of Pagano’s architectural work, stemming from the use of a method which, taking as its basis the postulate of rational architecture as the essence and logic of building, finds specific variations depending on the multiple variables to be addressed by the project.
This is the methodological heritage that Pagano learns from tradition, especially that of rural residential architecture, defined by Pagano as a “dictionary of the building logic of man” and as an “a-stylistic background”. For Pagano, this traditional architecture is a clear expression of the relationship between a theme and its development, an architectural “fact” resolved with purely technical and utilitarian aims and with a spontaneous development far from any aprioristic theoretical principle. Architecture, therefore, cannot be an invention for Pagano, and the personal contribution of each architect has to take into account his or her close relationship with the specific historical context, the place and the new building methods. These are basic principles in the methodological approach that drives a great deal of his research and that also allows his thought to be modern. I argue that both ongoing and new collaborations with younger protagonists of the culture and architecture of the period are significant for the development of his methodology. These encounters represent the will to spread his own understanding of the “new architecture” as well as a way of self-renewal, by confronting himself with new themes and realities and by learning from his collaborators.
Thesis outline
The thesis is divided into two principal parts, each articulated in four chapters, attempting to offer a new reading of Pagano’s theory and work by emphasising the central themes of the research. The first chapter is an introduction to the thesis and to the theme of the rational house, as understood and developed in its typological and technical aspects by Pagano and by other protagonists of Italian rationalism in the 1930s. Here the attention is on two different aspects that, according to Pagano, define the house project: on the one hand, the typological renewal, aimed at defining a “standard form” as a clear and essential answer to certain needs and variables of the project, leading to different formal expressions; on the other, the building itself, understood as a technique to “produce” architecture, where new technologies and new materials are not merely tools but essential elements of the architectural work. In this way the villa becomes distinct from the theme of the common house or from that of the minimal house, through rules for the choice of materials and techniques that differ each time depending on the theme under exploration and on the contingency of place. Also visible is the rigorous rationalism that distinguishes the author’s appropriation of certain themes of rural architecture. The pages of «Casabella» and the events of the contemporary Triennali form the preliminary material for this chapter, given that they are primary sources for identifying the projects and writings produced by Pagano and contemporary architects on this theme. These writings and projects, when compared, reconstruct the evolution of the idea of the rational house and, specifically, of Pagano’s personal research. The second part concerns the reading of three of Pagano’s house projects as a built verification of his theories. This section constitutes the central part of the thesis, since it is aimed at detecting a specific methodological approach that shows the theoretical and ideological evolution expressed in the vast edited literature.
The three chosen projects explore the theme of the house through the various research themes that the author proposes, themes that find continuity in the affirmation of a specific rationalism focussed on concepts such as essentiality, utility, functionality and building honesty. These concepts guide Pagano’s thought and activity, also reflecting a social and cultural period. The projects span from the theme of the modern villa, Villa Colli, which, inspired by the architecture of Northern Europe, anticipates Pagano’s specific rationalism based on rigour, simplicity and essentiality, to the theme of the common house, the Casa a struttura d’acciaio, la casa del domani, which reflects on the definition of new living spaces and, moreover, on new concepts of standardisation, economic efficiency and new materials responding to the changing needs of modern society. Finally, the third project, Villa Caraccio, returns to the theme of the villa, revisiting it from new perspectives. These perspectives find, in the open-plan solution, in the openness to nature and landscape and in the revisiting of local materials and building systems, that idea of the freed house which clearly expresses a new theoretical thought.
Methodology
It needs to be noted that, due to the lack of an official archive of Pagano’s work, the analysis of his work has been difficult; this explains the necessity of reading the articles and drawings published in the pages of «Casabella» and «Domus». For the projects of Villa Colli and the Casa a struttura d’acciaio, parts of the original drawings have been consulted. These drawings are unpublished and are kept in the private archives of Pagano’s collaborators. The consultation of these documents has permitted the analysis of the cited works, which have been subject to a more complete reading following the different proposed solutions, making it possible to understand the design process. The projects are analysed through the method of comparison and critical reading, which, specifically, means graphical elaborations and analytical schemes, mostly reconstructed on the basis of the original projects but, where possible, also on a photographic investigation. The focus is on the project theme which, beginning with a specific dwelling typology, finds variations because of the historico-political context in which Pagano is embedded and which partially shapes his research and theoretical thought, then translated into the built work. The analysis of each work follows, beginning, where possible, with a reconstruction of the evolution of the project as elaborated on the basis of the original documents, and ending with an analysis of the constructive and compositional principles. This second phase employs a methodology proposed by Pagano himself in his article Piante di ville, which, as its title suggests, focuses on the plan as the essential tool to identify the “true practical and poetic qualities of the construction” (Pagano, «Costruzioni-Casabella», 1940, p. 2). The reading of the projects is integrated with constructive analyses related to the technical aspects of the house, which, in the case of the Casa a struttura d’acciaio, play an important role in the project, while in Villa Colli and Villa Caraccio they are principally linked to the choice of materials for the construction of the different architectural elements. These are nonetheless key factors in the composition of the work.
Future work could extend this reading to other house projects, deepening a line of research that could be completed with the consultation of archival materials that are missing at present. Finally, in the appendix I present a critical selection of Pagano’s writings, which recall the themes discussed and embodied by the three projects. The texts have been selected among the articles published in «Casabella» and in other journals, completing the reading of the project work, which cannot be detached from his theoretical thought. Moving from theory to project, we follow a path that leads us to define and deepen the central theme of the thesis: rational building as the principal feature of Pagano’s architectural research, paraphrased in multiple ways in his designed and built works.
Abstract:
During the PhD program in chemistry at the University of Bologna, the environmental sustainability of selected industrial processes was studied through the application of the LCA methodology. The efforts were focused on processes under development, in order to assess their environmental impacts and guide their transfer to an industrial scale. Processes that could meet the principles of Green Chemistry were selected, and their environmental benefits were evaluated through a holistic approach. The use of renewable sources was assessed through the study of terephthalic acid production from biomass (which showed that only the use of waste can provide an environmental benefit) and of a new process for biogas upgrading (whose potential is to act as a carbon capture technology). Furthermore, the basis for a new methodology for predicting the environmental impact of ionic liquids has been laid. It has already shown promise in identifying impact trends, but further research is needed to obtain a more reliable and usable model. In the context of a sustainable development that is not merely sector-specific, the environmental performance of some processes linked to the primary production sector has also been evaluated. The impacts of some organic farming practices in wine production were analysed, the Cereal Unit parameter was proposed as a functional unit for comparing different crop rotations, and the carbon footprint of school canteen meals was calculated. The results of the analyses confirm that sustainability in the industrial production sector should be assessed from a life cycle perspective, in order to consider all the flows involved in the different phases. In particular, environmental assessments should adopt a cradle-to-gate approach, to avoid shifting the environmental burden from one phase to another.
Abstract:
Since the first subdivisions of the brain into macro-regions, it has been assumed a priori that, given the heterogeneity of neurons, different areas host specific functions and process unique information in order to generate behaviour. Moreover, the various sensory inputs coming from different sources (eye, skin, proprioception) flow from one macro-area to another, being constantly computed and updated. Therefore, especially for non-contiguous cortical areas, one does not expect to find the same information. From this point of view, it would seem inconceivable that the motor and parietal cortices, diversified by the information they encode and by their anatomical position in the brain, could show very similar neural dynamics. With the present thesis, by analyzing the population activity of parietal areas V6A and PEc with machine learning methods, we argue that such a simplified view of brain organization does not reflect the actual neural processes. We reliably detected a number of neural states that were tightly linked to distinct periods of the task sequence, i.e. the planning and execution of movement and the holding of the target, as already observed in motor cortices. The states before and after the movement could be further segmented into two states related to different stages of movement planning and arm posture processing. Rather unexpectedly, we found that activity during the movement could be parsed into two states of equal duration temporally linked to the acceleration and deceleration phases of the arm. Our findings suggest that, at least during arm reaching in 3D space, the posterior parietal cortex (PPC) shows low-level population neural dynamics remarkably similar to those found in the motor cortices. In addition, the present findings suggest that computational processes in PPC could be better understood if studied using a dynamical systems approach rather than as a mosaic of single units.
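The abstract does not name the specific machine learning method; one common way to segment population activity into discrete neural states is a hidden Markov model, and the sketch below (using hmmlearn, with all parameter values assumed) is offered purely as an illustration of that general approach, not as the authors' pipeline:

```python
from hmmlearn.hmm import GaussianHMM

def segment_neural_states(rates, n_states=6):
    """Segment binned population firing rates into discrete hidden states.

    rates: array of shape (n_timebins, n_neurons).
    Returns one state label per time bin; contiguous runs of a label
    correspond to putative task epochs (planning, movement, holding, ...).
    """
    model = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=100, random_state=0)
    model.fit(rates)             # fit emission and transition parameters
    return model.predict(rates)  # most likely state sequence (Viterbi)
```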
Abstract:
Non-linear effects are responsible for peculiar phenomena in charged-particle dynamics in circular accelerators. Recently, they have been used to propose novel beam manipulations in which the transverse beam distribution is modified in a controlled way, to fulfil the constraints posed by new applications. One example is the resonant beam splitting used at CERN for Multi-Turn Extraction (MTE), to transfer proton beams from the PS to the SPS. The theoretical description of these effects relies on the formulation of particle dynamics in terms of Hamiltonian systems and symplectic maps, and on the theory of adiabatic invariance and resonant separatrix crossing. Close to a resonance, new stable regions and new separatrices appear in phase space. As non-linear effects do not preserve the Courant-Snyder invariant, it is possible for a particle to cross a separatrix, changing the value of its adiabatic invariant. This process opens the path to new beam manipulations. This thesis deals with various effects that can be used to shape the transverse beam dynamics, using 2D and 4D models of particle motion. We show the possibility of splitting a beam using a resonant external exciter, or of combining its action with an MTE-like tune modulation close to a resonance. Non-linear effects can also be used to cool a beam by acting on its transverse distribution. We discuss the case of an annular beam distribution, showing that the emittance can be reduced by modulating the amplitude and frequency of a resonant oscillating dipole. We then consider 4D models where, close to a resonance, the motion in the two transverse planes is coupled. This is exploited to operate on the transverse emittances with a 2D resonance crossing. Depending on the resonance, the result is either an emittance exchange between the two planes or an emittance sharing. These phenomena are described and understood in terms of adiabatic invariance theory.
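For reference, the two invariants at play (standard definitions, not quoted from the thesis): the Courant-Snyder invariant of linear transverse motion, and the action, which adiabatic theory guarantees is approximately conserved under slow modulation,

```latex
\epsilon \;=\; \gamma(s)\,x^{2} + 2\,\alpha(s)\,x\,x' + \beta(s)\,x'^{2},
\qquad
J \;=\; \frac{1}{2\pi}\oint p\,\mathrm{d}q ,
```

where $\alpha$, $\beta$, $\gamma$ are the Twiss parameters. Non-linear terms break the conservation of $\epsilon$, and a separatrix crossing produces a quantifiable jump in $J$; it is precisely this jump that the manipulations above exploit to split beams and to exchange or share emittances.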
Abstract:
FinTech (financial technology) is a double-edged sword, as it brings both benefits and risks. This study appraised the technological nature of FinTech, which increases the complexity of modern financial markets, in order to identify information deficits and their undesirable outcomes. Moreover, as FinTech is still developing, the information regarding, for instance, whether and how to apply regulation may be insufficient for both regulators and the regulated. One-size-fits-all regulation might accordingly be adopted, resulting in adverse selection. Through the lens of both law and economics and law and technology, this study proposed adaptive financial regulation (AFR) of FinTech to solve the underlying pacing problem. AFR is dynamic, enabling regulatory adjustment and learning. Exploring and collecting information through experiments, and learning from those experiments, are the core of AFR. FinTech regulatory sandboxes epitomize AFR. This study chose Taiwan as a case study and found several barriers to adaptive and effective FinTech regulation: unduly emphasizing consumer protection and the innovation entry criterion by improperly limiting entry into sandboxes, ignoring post-sandbox mechanisms, and relying on detailed, specific and prescriptive rules to formulate sandboxes. To overcome these barriers, this study proposed several solutions by looking into the experiences of other jurisdictions. First, a balance must be struck between encouraging innovation and ensuring financial stability and consumer protection. Second, entry to sandboxes should be facilitated by improving the selection criteria. Third, in order to realize regulatory adjustment and learning and to adapt regulation to technology, systematic post-sandbox mechanisms should be established. Fourth, this study recommended “more principles-based sandboxes”: principles rather than rules should be the base on which sandboxes, and FinTech regulation more broadly, are established. Principles provide more flexibility, are easier to adjust and adapt, and are better at avoiding adverse selection.