13 results for Recall-Precision Curves
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In the study of moduli spaces of semistable Higgs bundles (E, \phi) of rank n on a smooth curve C, a key role is played by the spectral curve X (Hitchin), because an important result by Beauville-Narasimhan-Ramanan allows one to study isomorphism classes of such Higgs bundles in terms of isomorphism classes of rank-1 torsion-free sheaves on X. In this way, the generic fibre of the Hitchin map, which associates to any semistable Higgs bundle the coefficients of the characteristic polynomial of \phi, is isomorphic to the Jacobian of X. Focusing on rank-2 Higgs data, this construction was extended by Barik to the case in which the curve C is reducible and one-nodal, with two smooth components. Such a curve is called of compact type because its Picard group is compact. In this work, we describe and clarify the main points of Barik's construction and give examples, especially concerning generic fibres of the Hitchin map. Following Hausel-Pauly, we consider the case of SL(2,C)-Higgs bundles on a smooth base curve, for which the generic fibre of the Hitchin map is a subvariety of the Jacobian of X, the Prym variety. We recall the description of special loci, called endoscopic loci, for which the associated Prym variety is not connected. Then, letting G be an affine reductive group whose underlying Lie algebra is so(4,C), we consider G-Higgs bundles on a smooth base curve. Starting from the construction by Bradlow-Schaposnik, we discuss the associated endoscopic loci. By adapting these studies to a one-nodal base curve of compact type, we describe the fibres of the SL(2,C)-Hitchin map and of the G-Hitchin map, together with their endoscopic loci. In the Appendix, we give an interpretation of generic spectral curves in terms of families of double covers.
Abstract:
Traceability is often perceived by food industry executives as an additional cost of doing business, one to be avoided if possible. However, a traceability system can in fact meet regulatory requirements, increase food safety and recall performance, and improve both marketing performance and supply chain management. Traceability thus affects the business performance of firms in terms of the costs and benefits determined by traceability practices. These costs and benefits are in turn affected by factors such as the firms' characteristics, the level of traceability adopted and, lastly, the costs and benefits perceived prior to traceability implementation. This thesis was undertaken to understand how these factors combine to determine the resulting costs and benefits. Analysis of the results of a plant-level survey of the Italian ichthyic processing industry revealed that processors generally adopt various levels of traceability, while government support appears to increase the level of traceability as well as the expected and actual costs and benefits. None of the firms' characteristics, with the exception of government support, influences costs or the level of traceability. Only firm size and the level of QMS certification are linked with benefits, while the precision of traceability increases benefits without affecting costs. Finally, traceability practices appear to be driven by requests from "external" stakeholders such as government, authorities and customers rather than by "internal" factors (e.g. improving firm management), while the traceability system does not provide any added value from the market in terms of a price premium or an increase in market share.
Abstract:
Ren and colleagues (2006) found that saccades to visual targets became less accurate when somatosensory information about hand location was added, suggesting that saccades rely mainly on vision. We conducted two kinematic experiments to examine whether or not reaching movements would also show such strong reliance on vision. In Experiment 1, subjects used their dominant right hand to perform reaches, with or without a delay, to an external visual target or to their own left fingertip positioned either by the experimenter or by the participant. Unlike saccades, reaches became more accurate and precise when proprioceptive information was available. In Experiment 2, subjects reached toward external or bodily targets with differing amounts of visual information. Proprioception improved performance only when vision was limited. Our results indicate that reaching movements, unlike saccades, are improved rather than impaired by the addition of somatosensory information.
Abstract:
Precision horticulture and spatial analysis applied to orchards are a growing and evolving part of precision agriculture technology. The aim of this discipline is to reduce production costs by monitoring and analysing orchard-derived information to improve crop performance in an environmentally sound manner. Georeferencing and geostatistical analysis, coupled with point-specific data mining, make it possible to devise and implement management decisions tailored to the single orchard. Potential applications include the opportunity to verify in real time, throughout the season, the effectiveness of cultural practices in achieving production targets in terms of fruit size, number, yield and, in the near future, fruit quality traits. These data will impact not only the pre-harvest phase; their effect will extend to the post-harvest sector of the fruit chain. Chapter 1 provides an updated overview of precision horticulture, while in Chapter 2 a preliminary spatial statistical analysis of the variability in apple orchards, before and after manual thinning, is provided; an interpretation of this variability and of how it can be managed to maximize orchard performance is offered. In Chapter 3, a stratification of spatial data into management classes is undertaken to interpret and manage spatial variation in the orchard. An inverse model approach is also applied to verify whether crop production explains environmental variation. In Chapter 4 an integration of the techniques adopted before is presented, offering a new key for reading the information gathered within the field. The overall goal of this Dissertation was to probe the feasibility, desirability and effectiveness of a precision approach to fruit growing, following the lines of other areas of agriculture that already adopt this management tool. As existing applications of precision horticulture have already shown, crop specificity is an important factor to be accounted for.
This work focused on apple because of its importance in the area where the work was carried out, and worldwide.
Abstract:
«Fiction of frontier»: phenomenology of an open form/voice. Francesco Giustini's PhD dissertation fits into a line of research usually neglected by literary criticism but arousing much interest in recent years: the relationship between Literature and Space. In this context, the specific issue of his work is the category of the Frontier, including its several implications for twentieth-century fiction. The preliminary step, at the beginning of the first section of the dissertation, is a semantic analysis: with precision, Giustini describes the meaning of the word "frontier" as it is declined in a multiplicity of cultural, political and geographical contexts: from the American frontier of the pioneers who headed West, to the exotic frontiers of the world with which imperialist colonization came into contact; from semi-uninhabited areas such as deserts, highlands and virgin forests, to the ethnic frontiers between Indians and white people in South America, and to the internal frontiers of countries, such as those between the provinces and the capital city, or the centre and the outskirts. In the next step, Giustini focuses on a real "myth of the frontier", able to nourish the cultural and literary imagination. Indeed, literature has told the frontier and chosen it as the scenery for many stories; especially in the twentieth century it made the frontier a problematic space in the light of events and changes that transformed the perception of space and our relationship with it. The dissertation therefore proposes a critical category, tracing the hallmarks of a specific literary phenomenon defined as the "Fiction of the Frontier" and present in many literary traditions during the twentieth century.
The term "Fiction" (not "Literature" or "Poetics") does not define a genre but rather a "procedure", focusing on a constant issue highlighted by the texts examined in this work: the strong call to the act of narration and to its oral traditions. The "Fiction of the Frontier" is perceived as an approach to the world, a way of watching and feeling objects, an emotion that is lived and told through the story, a story in which the narrator, through his body and his voice, takes on the role of the witness. The following parts, which are analytic in style, are built on this theoretical and methodological reflection. The second section gives a wide range of examples in which we can find the figure and the myth of the frontier, through textual analyses ranging over several literary traditions: from monographic chapters (García Márquez, Callado, McCarthy) to comparative readings of pairs of texts (Calvino and Vargas Llosa, Buzzati and Coetzee, Arguedas and Rulfo). The selection of texts is arranged so as to underline, in each reading, a particular aspect or form of the frontier. This section is organized into thematic entries recalling actions that can be taken in the ambiguous and liminal space of the frontier (to communicate, to wait, to "trans-culturate", to imagine, to live in, to not live in). In this phenomenology, the frontier comes to light as a physical and concrete element or as a cultural, imaginary, linguistic, ethnic and existential category. Finally, the third section is centred on a more detailed and elaborate analysis of two authors considered fundamental to the comprehension of the "Fiction of the Frontier": Joseph Conrad and João Guimarães Rosa. Even though they are very different and belong to unlike literary traditions, these two authors show many connections, which are brought out by the comparative analysis.
Conrad is perhaps the first author to understand the feeling of the frontier, freeing himself from the adventure romance and from the exotic nineteenth-century tradition. João Guimarães Rosa, in his turn, is the great narrator of the Brazilian and South American frontier, the man of the sertão and of the endless spaces of the centre of Brazil. His production is strongly linked to that of the author of Heart of Darkness.
Abstract:
The Gaia space mission is a major project for the European astronomical community. Challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD Thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra with the spectra obtained by our ground-based observations. Because of both the different observing sites involved and the huge number of frames expected (≃100,000), it is essential to maintain the maximum homogeneity in data quality, acquisition and treatment, and particular care has to be taken to test the capabilities of each telescope/instrument combination (through the "instrument familiarization plan") and to devise methods to keep under control, and where possible correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves.
In this context I defined and tested a semi-automated pipeline which allows for the pre-reduction of imaging SPSS data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light-curve production and analysis.
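The aperture photometry step described above can be illustrated with a minimal pure-NumPy sketch (an illustrative toy, not the thesis pipeline): flux is summed inside a circular aperture around the star, and the per-pixel sky background, estimated as the median in a surrounding annulus, is subtracted.

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
    """Sum the flux in a circular aperture of radius r_ap centred on
    (x0, y0), subtracting the per-pixel sky background estimated as
    the median in the annulus r_in <= r <= r_out."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    in_ap = r <= r_ap
    in_ann = (r >= r_in) & (r <= r_out)
    sky = np.median(image[in_ann])            # background per pixel
    return image[in_ap].sum() - sky * in_ap.sum()

# Synthetic frame: flat sky of 10 counts/px plus a 500-count "star"
img = np.full((64, 64), 10.0)
img[32, 32] += 500.0
print(aperture_photometry(img, 32, 32, r_ap=5, r_in=8, r_out=12))  # → 500.0
```

In a real pipeline the aperture and annulus radii would be tuned to the seeing of each frame; here they are arbitrary illustrative values.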
Abstract:
The aim of this work is to provide a precise and accurate measurement of the 238U(n,gamma) reaction cross-section. This reaction is of fundamental importance for the design calculations of nuclear reactors, since it governs the behaviour of the reactor core. In particular, fast neutron reactors, which are experiencing growing interest for their ability to burn radioactive waste, operate in the high-energy region of the neutron spectrum. In this energy region, inconsistencies of up to 15% are present between the existing measurements, and the most recent evaluations disagree with each other. In addition, the assessment of nuclear data uncertainty performed for innovative reactor systems shows that the uncertainty in the radiative capture cross-section of 238U should be further reduced, to 1-3% in the energy region from 20 eV to 25 keV. To this purpose, identified by the Nuclear Energy Agency as a priority nuclear data need, complementary experiments, one at the GELINA and two at the n_TOF facility, were scheduled within the ANDES project of the 7th Framework Programme of the European Commission. The results of one of the 238U(n,gamma) measurements performed at the n_TOF CERN facility are presented in this work, carried out with a detection system consisting of two liquid scintillators. The very accurate cross-section from this work is compared with the results obtained from the other measurement performed at the n_TOF facility, which exploited a different and complementary detection technique. The excellent agreement between the two data sets indicates that they can contribute to reducing the cross-section uncertainty down to the required 1-3%.
Abstract:
Despite the scientific achievements of the last decades in astrophysics and cosmology, the majority of the Universe's energy content is still unknown. A potential solution to the "missing mass problem" is the existence of dark matter in the form of WIMPs. Due to the very small cross-section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event/tonne/year), thus requiring detectors with a large target mass and a low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross-sections as low as 10^-47 cm^2. To investigate the possibility for such a detector to reach its goal, Monte Carlo simulations are mandatory to estimate the background. To this aim, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, both electromagnetic and nuclear. From the analysis of the simulations, the background level has been found to be fully acceptable for the purposes of the experiment: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated, and the minimum of the WIMP-nucleon cross-section curve has been found at 1.87 x 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked using the Likelihood Ratio method, which confirmed them with an agreement to within less than a factor of two, entirely acceptable considering the intrinsic differences between the two statistical methods. Thus, this PhD thesis shows that the XENON1T detector will be able to reach its design sensitivity, lowering the limits on the WIMP-nucleon cross-section by about 2 orders of magnitude with respect to current experiments.
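The Maximum Gap method mentioned above is Yellin's construction (Yellin 2002); its core is the function C0(x, μ), the probability that the largest gap between observed events is smaller than x when μ events are expected in total. A minimal sketch of that formula (illustrative only, not the analysis code used in the thesis):

```python
import math

def yellin_c0(x, mu):
    """Yellin's C0(x, mu): probability that the maximum gap between
    events is < x, when the total expected number of uniformly
    distributed events is mu. Values of x that exactly divide mu are
    handled only approximately in this sketch."""
    if x >= mu:
        return 1.0
    total = 0.0
    for k in range(int(mu // x) + 1):
        d = mu - k * x
        if d <= 0:   # degenerate boundary term, dropped in this sketch
            continue
        total += ((k * x - mu) ** k * math.exp(-k * x)
                  / math.factorial(k) * (1.0 + k / d))
    return total

# With zero observed events the max gap equals mu, and the 90% CL
# condition C0 = 0.9 reproduces the familiar Poisson limit mu ~ 2.30
print(round(yellin_c0(2.30258, 2.30259), 3))   # → 0.9
```

A signal hypothesis is excluded at 90% CL when C0 of the observed maximum gap exceeds 0.9; scanning this over WIMP masses yields a sensitivity curve of the kind quoted in the abstract.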
Abstract:
The present study aimed to evaluate the productive and physiological behaviour of a 2D multileader apple training system in the Italian environment, investigating both the possibility of increasing yield and the resolution of precision crop load management. Another objective was to find reliable thinning thresholds guaranteeing high yields while matching fruit market requirements. The thesis consists of three studies carried out in a Pink Lady® - Rosy Glow apple orchard trained as a planar multileader system (double guyot). Data on the dimensions of the fruiting leaders (uprights), crop load, fruit quality, flowering and physiology (leaf gas exchanges and fruit growth rate) were collected and analysed. The results showed that uprights depend on one another and mutually support each other during fruit development. However, the fruit load of the individual upright and the distribution of fruit load among uprights on the tree (~ plant crop load) seem to define both the independence of one upright from the others and the effects of single-upright crop load on final fruit quality. Correlations between fruit load and harvest fruit size were found, and from these, reliable thinning thresholds based on different vegetative parameters were obtained. Moreover, it emerged that a random distribution of fruit load among uprights widens those thinning thresholds while keeping fruit quality unaltered. For this reason, uprights proved to be only partially physiologically dependent plant units: when considered and managed as independent, they caused no major problems with final fruit quality and production. This partly confirms the possibility of shifting crop load management to the single upright.
The findings of the present studies, together with the benefits of multileader planar training systems, suggest a high potential of 2D multileader systems to increase the sustainability and profitability of apple production in Italian orchards, while easing the advent of automation in fruit production.
Abstract:
With the advent of new technologies, it is increasingly easy to obtain data of different kinds from ever more accurate sensors that measure the most disparate physical quantities with different methodologies. The collection of data thus becomes progressively more important, taking the form of archiving, cataloguing and online and offline consultation of information. Over time, the amount of data collected can become so large that it contains information that cannot easily be explored manually or with basic statistical techniques. Big Data therefore become the object of more advanced investigation techniques, such as Machine Learning and Deep Learning. In this work, some applications in the world of precision zootechnics and of the heat stress suffered by dairy cows are described. Experimental Italian and German barns were involved in the training and testing of the Random Forest algorithm, obtaining predictions of milk production as a function of the microclimatic conditions of the previous days with satisfactory accuracy. Furthermore, in order to identify an objective method for detecting production drops, a Robust Statistics technique was used and compared with the Wood model, typically adopted as an analytical model of the lactation curve. Its application to some sample lactations, and the results obtained, allow us to be confident about the use of this method in the future.
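The Wood model mentioned above, y(t) = a·t^b·e^(−c·t), is the classical analytical lactation curve; it can be fitted by ordinary least squares after a log transform, since ln y = ln a + b·ln t − c·t is linear in the parameters. A minimal NumPy sketch on synthetic data (illustrative only, not the thesis code; all parameter values are invented):

```python
import numpy as np

def fit_wood(t, y):
    """Fit Wood's lactation curve y(t) = a * t**b * exp(-c * t)
    by linear least squares on ln y = ln a + b * ln t - c * t."""
    A = np.column_stack([np.ones_like(t), np.log(t), -t])
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    ln_a, b, c = coef
    return np.exp(ln_a), b, c

# Synthetic noise-free lactation curve over 300 days in milk
t = np.arange(5.0, 305.0, 5.0)
y = 15.0 * t**0.25 * np.exp(-0.003 * t)       # a=15, b=0.25, c=0.003
a, b, c = fit_wood(t, y)
print(round(a, 3), round(b, 3), round(c, 5))  # → 15.0 0.25 0.003
```

On real milk records the log transform also down-weights large yields, which is one reason robust alternatives, such as the Robust Statistics approach described in the abstract, are attractive for detecting production drops.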
Abstract:
Agricultural techniques have been improved over the centuries to match the growing demand of an increasing global population. Farming applications are facing new challenges to satisfy global needs, and recent technology advancements in robotic platforms can be exploited. As orchard management is one of the most challenging applications, because of its tree structure and the required interaction with the environment, it was targeted by the University of Bologna research group to provide a customized solution addressing a new concept for agricultural vehicles. The result of this research has blossomed into a new lightweight tracked vehicle capable of autonomous navigation both in the open-field scenario and while travelling inside orchards, in what has been called in-row navigation. The mechanical design concept, together with the customized software implementation, is detailed to highlight the strengths of the platform, along with some further improvements envisioned to raise the overall performance. Static stability testing has proved that the vehicle can withstand steep-slope scenarios. Improvements have also been investigated to refine the estimation of the slippage that occurs during turning manoeuvres and that is typical of skid-steering tracked vehicles. The software architecture has been implemented using the Robot Operating System (ROS) framework, so as to exploit community-available packages for common and basic functions, such as sensor interfaces, while allowing a dedicated custom implementation of the navigation algorithm developed. Real-world testing inside the university's experimental orchards has proven the robustness and stability of the solution, with more than 800 hours of fieldwork. The vehicle has also enabled a wide range of autonomous tasks such as spraying, mowing and on-field data collection.
The latter can be exploited to automatically estimate relevant orchard properties, such as fruit counting and sizing and canopy characterization, and to support autonomous fruit harvesting with post-harvest estimations.
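The turning slippage mentioned above can be illustrated with a toy skid-steering kinematic model (a simplified sketch under invented assumptions, not the vehicle's actual estimator): the body velocities follow from the two track speeds, and a lumped slip factor scales down the ideal yaw rate, as happens on loose orchard soil.

```python
def skid_steer_twist(v_left, v_right, track_width, slip=0.0):
    """Idealized skid-steering kinematics: convert left/right track
    speeds (m/s) into body linear (m/s) and angular (rad/s) velocity.
    `slip` in [0, 1) is a hypothetical lumped factor that degrades
    the effective turning rate relative to the no-slip ideal."""
    v = (v_right + v_left) / 2.0
    omega = (1.0 - slip) * (v_right - v_left) / track_width
    return v, omega

# Counter-rotating tracks -> turn in place; slip reduces the yaw rate
v, w = skid_steer_twist(-0.5, 0.5, track_width=0.8, slip=0.2)
print(v, w)   # → 0.0 1.0
```

In practice the slip factor is not constant: it depends on terrain and turning radius, which is why the abstract describes refining its estimation rather than assuming a fixed value.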
Abstract:
Protected crop production is a modern and innovative approach to cultivating plants in a controlled environment in order to optimize growth, yield and quality. This method uses structures such as greenhouses or tunnels to create a sheltered environment, characterized by careful regulation of variables like temperature, humidity, light and ventilation, which together create an optimal microclimate for plant growth. Heating, cooling and ventilation systems are used to maintain optimal growing conditions regardless of external weather fluctuations. Protected crop production plays a crucial role in addressing the challenges posed by climate variability, population growth and food security. Similarly, animal husbandry involves providing adequate nutrition, housing, medical care and environmental conditions to ensure animal welfare. Sustainability is a critical consideration in all forms of agriculture, including protected crop and animal production; in animal production it refers to producing animal products in a way that minimizes negative impacts on the environment, promotes animal welfare and ensures the long-term viability of the industry. The research activities performed during the PhD fall squarely within the field of Precision Agriculture and Livestock Farming. The focus is on the computational fluid dynamics (CFD) approach and on environmental assessment, applied to improve yield, resource efficiency, environmental sustainability and cost savings. This represents a significant shift from traditional farming methods to a more technology-driven, data-driven and environmentally conscious approach to crop and animal production.
On the one hand, CFD is a powerful and precise computer modelling and simulation technique for airflows and thermo-hygrometric parameters, which has been applied to optimize the growth environment of crops and the efficiency of ventilation in pig barns. On the other hand, the sustainability aspect has been investigated through Life Cycle Assessment analyses.
Abstract:
Although hysteroscopy with endometrial biopsy is the gold standard in the diagnosis of intrauterine cavity pathology, the hysteroscopist's experience is crucial for a correct diagnosis. Deep Learning (DL), as an artificial intelligence technique, could help overcome this limitation. Few studies with preliminary results are available, and research evaluating the performance of DL models in identifying intrauterine lesions, and the possible contribution of clinical factors, is lacking. Objective: to develop a DL model to identify and classify intrauterine cavity pathologies from hysteroscopic images. Methods: a single-centre observational retrospective cohort study was performed on a consecutive series of hysteroscopic cases from patients with intrauterine pathology confirmed by histological examination, carried out at the Policlinico S. Orsola. The hysteroscopic images were used to build a DL model for the classification and identification of intracavitary lesions, with and without the help of clinical factors (age, menopause, AUB, hormone therapy and tamoxifen). As study outcomes, we computed the diagnostic metrics of the DL model in classifying and identifying intrauterine cavity lesions with and without the help of clinical factors. Results: we reviewed 1,500 images from 266 cases: 186 patients had benign focal lesions, 25 benign diffuse lesions and 55 preneoplastic/neoplastic lesions. For both classification and identification, the best performance was achieved with the help of clinical factors: overall, for classification, a precision of 80.11%, recall of 80.11%, specificity of 90.06%, F1 score of 80.11% and accuracy of 86.74%. For identification, we obtained an overall detection rate of 85.82%, a precision of 93.12%, a recall of 91.63% and an F1 score of 92.37%.
Conclusions: the DL model achieved low performance in identifying and classifying intrauterine cavity lesions from hysteroscopic images. Although the best diagnostic performance was obtained with the help of specific clinical factors, this improvement was modest.
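The diagnostic metrics reported in this abstract (precision, recall, specificity, F1 score, accuracy) all derive from the confusion matrix. A minimal sketch with hypothetical counts, chosen only to land near the reported classification values:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from a binary confusion matrix:
    tp/fp/fn/tn = true/false positives and negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, specificity, f1, accuracy

# Hypothetical counts, chosen only to illustrate the formulas
p, r, s, f1, acc = classification_metrics(tp=80, fp=20, fn=20, tn=180)
print(p, r, s, round(f1, 3), round(acc, 3))   # → 0.8 0.8 0.9 0.8 0.867
```

Note that when precision equals recall, the F1 score equals both, which is why the abstract can report the same 80.11% for all three.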