13 results for Emails categorization
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
According to much evidence, observing objects activates two types of information: structural properties, i.e. the visual information about the structural features of objects, and function knowledge, i.e. the conceptual information about their skilful use. Many studies have focused on the role played by these two kinds of information during object recognition and on their neural underpinnings. However, to the best of our knowledge, no study has yet examined how the activation of this information (structural vs. function) during object manipulation and conceptualization depends on the age of participants and on the level of object familiarity (familiar vs. non-familiar). Therefore, the main aim of this dissertation was to investigate how actions and concepts related to familiar and non-familiar objects may vary across development. To pursue this aim, four studies were carried out. The first study led to the creation of the Familiar and Non-Familiar Stimuli Database, a set of everyday objects classified by Italian pre-schoolers, schoolers, and adults, useful for verifying how object knowledge is modulated by age and frequency of use. A parallel study demonstrated that factors such as sociocultural dynamics may affect the perception of objects. Specifically, data on familiarity, naming, function, use, and frequency of use of the objects in the Familiar and Non-Familiar Stimuli Database were collected with Dutch and Croatian children and adults. The last two studies, on object interaction and language, provide further evidence in support of the literature on affordances and on the link between affordances and the cognitive process of language from a developmental point of view, supporting the perspective of situated cognition and emphasizing the crucial role of human experience.
Abstract:
The management and organization literature has extensively noted the crucial role that improvisation plays in organizations, variously as a learning process (Miner, Bassoff & Moorman, 2001), a creative process (Fisher & Amabile, 2008), a capability (Vera & Crossan, 2005), and a personal disposition (Hmieleski & Corbett, 2006; 2008). My dissertation aims to contribute to the existing literature on improvisation by addressing two general research questions: 1) How does improvisation unfold at an individual level? 2) What are the potential antecedents and consequences of individual proclivity to improvise? The dissertation is based on a mixed methodology that allowed me to address these two questions and enabled a constant interaction between the theoretical framework and the empirical results. The selected empirical field is haute cuisine, and the respondents are the executive chefs of the restaurants awarded by the Michelin Guide in Italy in 2010. The qualitative section of the dissertation is based on the analysis of 26 inductive case studies and offers a multifaceted contribution. First, I describe how improvisation works as both a learning and a creative process. Second, I introduce a new categorization of individual improvisational scenarios (demanded creative improvisation, problem-solving improvisation, and pure creative improvisation). Third, I describe the differences between improvisation and other creative processes detected in the field (experimentation, brainstorming, trial and error through analytical procedure, trial and error, and imagination). The quantitative inquiry is founded on a Structural Equation Model, which allowed me to test simultaneously the relationships between proclivity to improvise and its antecedents and consequences.
In particular, using a newly developed scale to measure individual proclivity to improvise, I test the positive influence of industry experience, self-efficacy, and age on proclivity to improvise and the negative impact of proclivity to improvise on outcome deviation. Theoretical contributions and practical implications of the results are discussed.
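The hypothesized structure (positive antecedents of proclivity to improvise, a negative effect of proclivity on outcome deviation) can be illustrated with a minimal path-analysis sketch, plain least squares standing in for a full Structural Equation Model. All variables, effect sizes, and data below are synthetic illustrations, not the thesis's scale or results.

```python
import numpy as np

# Synthetic data following the hypothesized paths (illustration only).
rng = np.random.default_rng(0)
n = 500
experience = rng.normal(size=n)
self_efficacy = rng.normal(size=n)
age = rng.normal(size=n)

# Positive antecedents of proclivity to improvise...
proclivity = 0.4 * experience + 0.3 * self_efficacy + 0.2 * age + rng.normal(0, 0.5, n)
# ...and a negative effect of proclivity on outcome deviation.
deviation = -0.5 * proclivity + rng.normal(0, 0.5, n)

def ols(y, *xs):
    """Ordinary least squares; returns slope coefficients (intercept dropped)."""
    A = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]

b_antecedents = ols(proclivity, experience, self_efficacy, age)  # expected all > 0
b_deviation = ols(deviation, proclivity)                         # expected < 0
```

A real SEM additionally models latent constructs and estimates all paths jointly; this sketch only recovers the signs of the structural paths from data generated to follow them.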
Abstract:
Visual tracking is the problem of estimating some variables related to a target, given a video sequence depicting the target. Visual tracking is key to the automation of many tasks, such as visual surveillance, autonomous robot or vehicle navigation, and automatic video indexing in multimedia databases. Despite many years of research, long-term tracking of generic targets in real-world scenarios remains unsolved. The main contribution of this thesis is the definition of effective algorithms that can foster a general solution to visual tracking by letting the tracker adapt to mutating working conditions. In particular, we propose to adapt two crucial components of visual trackers: the transition model and the appearance model. The less general but widespread case of tracking from a static camera is also considered, and a novel change detection algorithm robust to sudden illumination changes is proposed. Based on this, a principled adaptive framework to model the interaction between Bayesian change detection and recursive Bayesian trackers is introduced. Finally, the problem of automatic tracker initialization is considered. In particular, a novel solution for the categorization of 3D data is presented. The category recognition algorithm is based on a novel 3D descriptor that is shown to achieve state-of-the-art performance in several surface-matching applications.
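The transition model and recursive Bayesian tracking mentioned above can be made concrete with the simplest such tracker, a constant-velocity Kalman filter. This is a generic textbook sketch, not the thesis's adaptive algorithm: the state layout, noise values, and time step are assumptions chosen for illustration.

```python
import numpy as np

# State: [x, y, vx, vy]; measurement: [x, y]. Values are illustrative.
def make_cv_model(dt=1.0, q=1e-2, r=1.0):
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # transition model
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # measurement model
    Q = q * np.eye(4)                           # process noise covariance
    R = r * np.eye(2)                           # measurement noise covariance
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    """One recursive Bayesian update: predict with F, correct with measurement z."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Starting from a large initial covariance, the filter locks onto a target moving at constant velocity within a few frames. Adapting the transition model online, as the thesis proposes, would amount to letting F (or Q) change with the observed motion rather than staying fixed.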
Abstract:
Abstract:
Atmospheric aerosol particles directly impact air quality and participate in controlling the climate system. Organic Aerosol (OA) in general accounts for a large fraction (10–90%) of the global submicron (PM1) particulate mass. Chemometric methods for source identification are used in many disciplines, but methods relying on the analysis of NMR datasets are rarely used in atmospheric sciences. This thesis provides an original application of NMR-based chemometric methods to atmospheric OA source apportionment. The method was tested on chemical composition databases obtained from samples collected at different environments in Europe, hence exploring the impact of a great diversity of natural and anthropogenic sources. We focused on sources of water-soluble OA (WSOA), for which NMR analysis provides substantial advantages compared to alternative methods. Different factor analysis techniques are applied independently to NMR datasets from nine field campaigns of the project EUCAARI and allowed the identification of recurrent source contributions to WSOA in European background troposphere: 1) Marine SOA; 2) Aliphatic amines from ground sources (agricultural activities, etc.); 3) Biomass burning POA; 4) Biogenic SOA from terpene oxidation; 5) “Aged” SOAs, including humic-like substances (HULIS); 6) Other factors possibly including contributions from Primary Biological Aerosol Particles, and products of cooking activities. Biomass burning POA accounted for more than 50% of WSOC in winter months. Aged SOA associated with HULIS was predominant (> 75%) during the spring-summer, suggesting that secondary sources and transboundary transport become more important in spring and summer. 
Complex aerosol measurements, carried out with several foreign research groups, provided the opportunity to compare source apportionment results obtained by NMR analysis with those provided by the more widespread Aerodyne aerosol mass spectrometer (AMS) techniques, whose OA categorization schemes are becoming a standard for atmospheric chemists. The results emerging from this thesis partly confirm the AMS classification and partly challenge it.
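The factor-analysis step underlying this kind of source apportionment can be sketched with non-negative matrix factorization, one of several techniques applicable to NMR datasets: a samples-by-spectral-bins matrix is decomposed into per-sample source contributions and source profiles. The two "sources" and the mixing weights below are synthetic, not data from the thesis.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Two hypothetical non-negative source profiles over 50 spectral bins.
profiles = np.abs(rng.normal(size=(2, 50)))
# 30 samples built as non-negative mixtures of the two sources.
contributions = np.abs(rng.normal(size=(30, 2)))
X = contributions @ profiles

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=1000)
W = model.fit_transform(X)   # estimated per-sample source contributions
H = model.components_        # estimated source profiles

# With exactly two underlying factors, the reconstruction error is small.
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In a real apportionment, the recovered profiles would then be matched against reference spectra (e.g. biomass burning, marine SOA, HULIS) to label each factor.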
Abstract:
The present thesis addresses several experimental questions regarding the nature of the processes underlying the larger centro-parietal Late Positive Potential (LPP) measured during the viewing of emotional (both pleasant and unpleasant) compared to neutral pictures. During passive viewing, this modulatory difference is significantly reduced with picture repetition, but it does not completely habituate even after massive repetition of the same picture exemplar. In order to investigate the obligatory nature of the affective modulation of the LPP, in Study 1 we introduced a competing task during repetitive exposure to affective pictures. Picture repetition occurred either in a passive viewing context or during a categorization task, in which pictures depicting any means of transportation were presented as targets and repeated pictures (affectively engaging images) served as distractor stimuli. Results indicated that the impact of repetition on the LPP affective modulation was very similar between the passive and task contexts, indicating that the affective processing of visual stimuli reflects an obligatory process that occurs even when participants are engaged in a categorization task. In Study 2 we assessed whether the decrease of the LPP affective modulation persists over time, by presenting on day 2 the same set of pictures that had been massively repeated on day 1. Results indicated that the reduction of the emotional modulation of the LPP to repeated pictures persisted even after a 1-day interval, suggesting a contribution of long-term memory processes to the affective habituation of the LPP. Taken together, the data provide new information regarding the processes underlying the affective modulation of the late positive potential.
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and deals with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a consistent training set and notable computational effort. Methods for cross-domain text categorization have been proposed, allowing a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification.
Results show that classification accuracy still requires improvement, but models generated in one domain prove effectively reusable in a different one.
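The iterative nearest-centroid idea described above can be sketched as follows: category centroids are built from labeled source-domain vectors, then repeatedly re-estimated from the unlabeled target domain's own predictions. The vector representation (e.g. TF-IDF), similarity measure, and stopping rule here are simplified assumptions, not the thesis's exact procedure.

```python
import numpy as np

def nearest_centroid_adapt(X_src, y_src, X_tgt, n_iter=5):
    """Cross-domain nearest-centroid classification with iterative adaptation."""
    labels = np.unique(y_src)
    # Initial category profiles (centroids) from the known source domain.
    centroids = np.vstack([X_src[y_src == c].mean(axis=0) for c in labels])
    for _ in range(n_iter):
        # Assign each target document to its nearest centroid (cosine similarity).
        Xn = X_tgt / np.linalg.norm(X_tgt, axis=1, keepdims=True)
        Cn = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
        pred = (Xn @ Cn.T).argmax(axis=1)
        # Adapt the centroids toward the unknown target domain.
        for i in range(len(labels)):
            members = X_tgt[pred == i]
            if len(members):
                centroids[i] = members.mean(axis=0)
    return labels[pred]
```

Because the centroids drift toward the target domain's own structure, the method can absorb a moderate shift between domains without any labeled target data, which is the core of the cross-domain setting.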
Abstract:
This study concerns teachers’ use of digital technologies in student assessment, and how the learning that is developed through the use of technology in mathematics can be evaluated. Nowadays math teachers use digital technologies in their teaching, but not in student assessment. The activities carried out with technology are seen as ‘extra-curricular’ (by both teachers and students), so students do not learn what they can do in mathematics with digital technologies. I was interested in knowing the reasons why teachers do not use digital technology to assess students’ competencies, and what they would need to be able to design innovative and appropriate tasks for assessing students’ learning through digital technology. This dissertation is built on two main components: teachers and task design. I analyze teachers’ practices involving digital technologies with Ruthven’s Structuring Features of Classroom Practice, and how these practices relate to the types of assessment they use. I study the kinds of assessment tasks teachers design with a DGE (Dynamic Geometry Environment), using Laborde’s categorization of DGE tasks. I consider the competencies teachers aim to assess with these tasks, and how their goals relate to the learning outcomes of the curriculum. This study also develops new directions in finding how to design suitable tasks for student mathematical assessment in a DGE, and it is driven by the desire to know what kinds of questions teachers might be more interested in using. I investigate the kinds of technology-based assessment tasks teachers value, and the type of feedback they give to students. Finally, I point out that the curriculum should include a range of mathematical and technological competencies that involve the use of digital technologies in mathematics, and I evaluate the possibility of taking advantage of technology feedback to allow students to continue learning while they are taking a test.
Abstract:
At the beginning, this Ph.D. project led to an overview of the most common and emerging types of fraud, and possible countermeasures, in the olive oil sector. Furthermore, possible weaknesses in the current conformity check system for olive oil were highlighted. Among these, although organoleptic assessment is a fundamental tool for establishing the quality grade of virgin olive oils (VOOs), the scientific community has pointed out some drawbacks in it. In particular, the application of instrumental screening methods to support the panel test could reduce the work of sensory panels and the cost of this analysis (e.g. for industries, distributors, and public and private control laboratories), permitting an increase in the number and efficiency of controls. On this basis, a research line called “Quantitative Panel Test” is one of the main expected outcomes of the OLEUM project, and it is partially discussed in this doctoral dissertation. In this framework, analytical activities were carried out within this PhD project, aimed at developing and validating analytical protocols for the study of the volatile compound (VOC) profiles of the VOO headspace. Specifically, two chromatographic approaches to determine VOCs, one targeted and one semi-targeted, were investigated in this doctoral thesis. The results obtained will allow the possible establishment of concentration limits and ranges for selected volatile markers, related to fruitiness and defects, with the aim of supporting the panel test in the commercial categorization of VOOs. In parallel, a rapid instrumental screening method based on the analysis of VOCs has been investigated to assist the panel test through a fast pre-classification of VOO samples at a known level of probability, thus increasing the efficiency of quality control.
Abstract:
On May 25, 2018, the EU introduced the General Data Protection Regulation (GDPR), which offers EU citizens a shelter for their personal information by requiring companies to explain clearly how people’s information is used. To comply with the new law, European and non-European companies interacting with EU citizens undertook a massive data re-permission-request campaign. However, while on the one hand the EU Regulator was particularly specific in defining the conditions for obtaining access to customers’ data, on the other hand it did not specify how the communication between firms and consumers should be designed. This left firms free to develop their re-permission emails as they liked, plausibly coupling the informative nature of these privacy-related communications with other persuasive techniques to maximize data disclosure. Consequently, we took advantage of this colossal wave of simultaneous requests to provide insights into two issues. First, we investigate how companies across industries and countries chose to frame their requests. Second, we investigate which factors influenced the selection of alternative re-permission formats. In order to achieve these goals, we examine the content of a sample of 1506 re-permission emails sent by 1396 firms worldwide, and we identify the dominant “themes” characterizing these emails. We then relate these themes to both the expected benefits firms may derive from data usage and the possible risks they may face from not being completely compliant with the spirit of the law. Our results show that: (1) most firms enriched their re-permission messages with persuasive arguments aimed at increasing consumers’ likelihood of relinquishing their data; (2) the use of persuasion is the outcome of a difficult tradeoff between costs and benefits; (3) most companies acted in their self-interest and “gamed the system”. Our results have important implications for policymakers, managers, and customers of the online sector.
Abstract:
The thesis examines the application of State aid control in tax matters through an analysis of the case law of the Court of Justice. The work aims to offer a new analytical perspective, highlighting the interaction between the notion of ‘fiscal aid’ under Article 107(1) TFEU and the national tax categories underlying the application of the “three-step test” devised by the Court to identify the presence of a ‘selective advantage’. Applying this analytical perspective, the research proposes a new categorization of ‘fiscal selectivity’, through which the most controversial issues related to the application of this instrument are addressed. Finally, considering the numerous direct-taxation reform projects currently under examination by the European Institutions, the thesis engages with the “future” of State aid control, identified in its necessary interaction with a harmonized regulatory framework.
Analysis of urban infrastructure for sustainable mobility through instrumented bicycles for students
Abstract:
In Europe almost 80% of the continent’s population lives in cities. It is estimated that by 2030 most European regions containing major cities will have 35–60% more inhabitants than now. This process generates elevated human pressure on the natural environment, especially around large urban agglomerations. Cities can be seen as ecosystems dominated by humans, who redistribute organisms and fluxes; they are the result of co-evolving human and natural systems, emerging from the interactions between people, nature, and infrastructure. Roads have a relevant role in building links between urban components, creating the basis on which the urban ecosystem itself is founded. This thesis is focused on the search for a comprehensive model, framed within the European urban health & wellbeing programme, aimed at evaluating the determinants of health in urban populations. Through bicycles with GPS and sensor kits, specially developed and produced by the University of Bologna for this purpose, it was possible to conduct several direct observations in Bologna that shaped the novelty of the research: the categorization of university student cyclists, the connection between environmental-data awareness and level of cycling, and an early identification of urban attributes able to affect road air quality and level of cycling. The categorization of university student cyclists was defined through GPS analysis and a focused survey, which together permitted the identification of behavioural and technical variables and attitudes towards urban cycling. The statistical relationship between level of cycling, measured as the number of bicycle passages per lane, and pollutant levels was investigated through an inverse regression model, defined and tested in SPSS on the basis of the harvested data. The research project represents a sort of dynamic mobility laboratory on two wheels, permitting the harvesting and study of the detected parameters.
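The inverse regression mentioned above (of the kind offered by SPSS curve estimation, pollutant = b0 + b1 / cycling_level) reduces to linear least squares on the transformed predictor 1/x. The coefficients and data below are synthetic illustrations, not results from the thesis.

```python
import numpy as np

def fit_inverse(x, y):
    """Fit y = b0 + b1/x by linear least squares on the transformed predictor."""
    A = np.column_stack([np.ones_like(x), 1.0 / x])
    (b0, b1), *_ = np.linalg.lstsq(A, y, rcond=None)
    return b0, b1

# Synthetic example: pollutant level falls off as cycling volume rises.
rng = np.random.default_rng(0)
cycling = rng.uniform(10, 200, 100)                      # bicycle passages per lane
pollutant = 20 + 300 / cycling + rng.normal(0, 0.5, 100)  # assumed true relation + noise

b0, b1 = fit_inverse(cycling, pollutant)
```

A positive fitted b1 in this model corresponds to pollutant levels decreasing toward the asymptote b0 as the level of cycling grows, which is the shape of relationship the thesis investigates.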
Abstract:
The central aim of this dissertation is to introduce innovative methods, models, and tools to enhance the overall performance of supply chains responsible for handling perishable products. This concept of improved performance encompasses several critical dimensions, including enhanced efficiency in supply chain operations, product quality, safety, sustainability, waste minimization, and compliance with norms and regulations. The research is structured around three specific research questions that provide a solid foundation for delving into and narrowing down the array of potential solutions. These questions primarily concern enhancing the overall performance of distribution networks for perishable products and optimizing the package hierarchy, extending to unconventional packaging solutions. To address these research questions effectively, a well-defined research framework guides the approach. In particular, the dissertation adheres to an overarching methodology comprising three fundamental aspects. The first aspect centers on the need for systematic data sampling and categorization, including the identification of critical points within food supply chains. The data collected in this context must then be organized within a customized data structure designed to feed both cyber-physical and digital twins, in order to quantify and analyze supply chain failures from a preventive perspective.