Abstract:
Species distribution and ecological niche models are increasingly used in biodiversity management and conservation. However, the predictive performance of these models is rarely followed up over time, to check whether their predictions are fulfilled and remain accurate, or whether they apply only to the dataset from which they were produced. In 2003, a distribution model of the Eurasian otter (Lutra lutra) in Spain was published, based on the results of a country-wide otter survey published in 1998. This model was built with logistic regression of otter presence-absence in 10 × 10 km UTM cells on a diverse set of environmental, human and spatial variables, selected according to statistical criteria. Here we evaluate this model against the results of the most recent otter survey, carried out a decade later and after a significant expansion of the otter distribution area in this country. Despite the time elapsed and the evident changes in this species' distribution, the model maintained good predictive capacity in terms of both discrimination and calibration measures. Otter distribution did not expand randomly or simply towards neighbouring areas, but specifically towards the areas predicted as most favourable by the model based on data from 10 years before. This corroborates the utility of predictive distribution models, at least in the medium term and when they are built with robust methods and relevant predictor variables.
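As a minimal sketch of the evaluation the abstract describes, the snippet below fits a presence-absence logistic regression and then scores it on a later survey using discrimination (AUC) and a calibration curve. The predictors and data are invented placeholders, not the published otter model.

```python
# Sketch: evaluating a presence-absence logistic model on a later survey via
# discrimination (AUC) and calibration. Data and variables are placeholders,
# not the published otter model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
X_1998 = rng.normal(size=(500, 4))  # environmental/human/spatial predictors per cell
y_1998 = (X_1998[:, 0] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X_1998, y_1998)

# A decade later: the same cells, new presence-absence observations
# (here simulated with an expanded distribution).
X_2008, y_2008 = X_1998, (X_1998[:, 0] + rng.normal(size=500) > -0.5).astype(int)
p = model.predict_proba(X_2008)[:, 1]

print("discrimination (AUC):", roc_auc_score(y_2008, p))
obs, pred = calibration_curve(y_2008, p, n_bins=10)  # observed vs predicted frequency
print("calibration bins (predicted, observed):", list(zip(pred.round(2), obs.round(2))))
```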
Abstract:
This paper proposes a novel demand response model using a fuzzy subtractive clustering approach. The model supports domestic consumers' decisions on controllable load management, taking into account consumers' consumption needs and the appropriate load shaping or rescheduling required to achieve potential economic benefits. The model, based on the fuzzy subtractive clustering method, considers clusters of domestic consumption covering an adequate consumption range. An analysis of different scenarios is presented, considering available electric power and electricity prices. Simulation results are presented and conclusions of the proposed demand response model are discussed.
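For reference, here is a minimal NumPy sketch of subtractive clustering (Chiu, 1994), the method the abstract names: each point's potential is a sum of Gaussian kernels, the highest-potential point becomes a center, and its potential is subtracted before the next pick. The radii and data are illustrative, not the paper's consumption dataset.

```python
# Minimal subtractive clustering (Chiu, 1994) sketch.
# Data and radii are illustrative, not the paper's consumption data.
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=None, eps=0.15):
    rb = rb or 1.5 * ra
    alpha, beta = 4.0 / ra**2, 4.0 / rb**2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    P = np.exp(-alpha * d2).sum(axis=1)                  # potential of each point
    centers, P0 = [], P.max()
    while P.max() > eps * P0:
        c = P.argmax()
        centers.append(X[c])
        P = P - P[c] * np.exp(-beta * d2[:, c])          # subtract the winner's influence
    return np.array(centers)

# e.g., daily consumption profiles scaled to [0, 1] before clustering
X = np.random.default_rng(1).random((200, 2))
print(subtractive_clustering(X).shape)
```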
Abstract:
Nowadays robotic applications are widespread and most manipulation tasks are efficiently solved. However, Deformable Objects (DOs) still represent a huge limitation for robots. The main difficulty in DO manipulation is dealing with shape and dynamics uncertainties, which prevents the use of model-based approaches (since they are excessively computationally complex) and makes sensory data difficult to interpret. This thesis reports the research activities aimed at addressing some applications in robotic manipulation and sensing of Deformable Linear Objects (DLOs), with particular focus on electric wires. In all the works, a significant effort was made in the study of an effective strategy for analyzing sensory signals with various machine learning algorithms. The first part of the document focuses on wire terminals, i.e. detection, grasping, and insertion. First, a pipeline that integrates vision and tactile sensing is developed; then further improvements are proposed for each module. A novel procedure is proposed to gather and label massive amounts of training images for object detection with minimal human intervention. Together with this strategy, we extend a generic object detector based on Convolutional Neural Networks for orientation prediction. The insertion task is also extended by developing a closed-loop controller capable of guiding the insertion of a longer and curved segment of wire through a hole, where the contact forces are estimated by means of a Recurrent Neural Network. In the second part of the thesis, the interest shifts to the DLO shape. Robotic reshaping of a DLO is addressed by means of a sequence of pick-and-place primitives, while a decision-making process driven by visual data learns the optimal grasping locations by exploiting Deep Q-learning and finds the best releasing point. The success of the solution relies on a reliable interpretation of the DLO shape; for this reason, further developments are made on the visual segmentation.
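As a sketch of the kind of recurrent estimator the abstract mentions for contact forces, the PyTorch snippet below maps a sensory time series to a force vector with a GRU. The architecture, layer sizes, and input channels are assumptions for illustration, not the thesis model.

```python
# Sketch of a recurrent contact-force estimator of the kind the thesis
# describes: a GRU maps a sensory time series to a force estimate.
# Layer sizes and input channels are illustrative assumptions.
import torch
import torch.nn as nn

class ForceEstimator(nn.Module):
    def __init__(self, n_inputs=6, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)   # fx, fy, fz at the wire tip

    def forward(self, x):                  # x: (batch, time, n_inputs)
        h, _ = self.gru(x)
        return self.head(h[:, -1])         # force estimate at the last time step

model = ForceEstimator()
signals = torch.randn(8, 100, 6)           # e.g., joint-torque/tactile channels
print(model(signals).shape)                # torch.Size([8, 3])
```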
Abstract:
Between the end of the Neolithic and the Bronze Age, village-type clustered settlements are a widespread phenomenon, both in Italy and in southern France. Nevertheless, taking into account the variability in the forms of site stratification raises questions. To what extent does the sedimentary record of occupation surfaces allow us to address the question of village organization and its variability between the end of the Neolithic and the Bronze Age? What image does this sedimentary record give of the social and economic organization of the village? To address these questions, we chose to carry out a geoarchaeological study on sites of different forms, from varied chrono-cultural and environmental contexts. The approach, based on the use of soil micromorphology as an analytical tool, aims to characterize the spatio-temporal organization of occupation surfaces at the site scale, following a spatial approach to the formation processes of the archaeological stratification. The development of a model, based on a classification of sedimentary micro-facies according to the activity system, and its application to test sites make it possible to characterize earthen construction techniques, land use and the occupation dynamics specific to each site, in order to determine the socio-economic behaviours and the specific features of village life recorded by the soils. This approach makes it possible to assess the constants and the variables that characterize the different types of occupation. The soil, conceived as the materiality of village space, thus becomes direct evidence of the cultural variability and of the different forms of organization of the communities of the end of the Neolithic and the Bronze Age.
Abstract:
An analysis and a subsequent solution are presented here. This document concerns a groin design intended to counter the erosion caused by wave action at Lido di Dante. Benefits will also be visible at the Fiumi Uniti inlet, on the northern side of the shoreline. Future beach progression and growth will be monitored in the years after groin construction. The resulting effects of the design will have a positive impact not only on the local fauna and environment; the increased naturalistic appeal will also attract new types of tourists beyond purely recreational visitors. The design phase focuses on possible design alternatives and their features. Particular attention is given to scour phenomena around the groin after its construction. The groin's effects will not be limited to its south side: it will also cause an intense erosion process on the downdrift front. Here, many fishing huts would be in danger, so a beach revetment structure is needed to avoid any future criticality. In addition, a numerical model based on a generalized shoreline change numerical model, also known as GENESIS, has been applied to the study area in order to perform a simplified analysis of the shoreline and its future morphology. Critical zones are visible in the proximity of the Fiumi Uniti river inlet, where currents from the sea and the river itself drive the erosion process that has been affecting Lido di Dante since the mid-1980s, or even earlier. The model rests on several assumptions, so its results should not be interpreted as a real future trend of the shore; rather, the model gives the user a clearer view of the critical processes induced by the monochromatic input waves. In conclusion, the thesis presents a broad analysis of a complex erosion process that affects many shorelines nowadays. Although a groin is a hard engineering solution, it is considered the only means able to decrease the rate of erosion.
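GENESIS belongs to the family of one-line shoreline change models. As a toy illustration of the underlying idea (not GENESIS itself), the snippet below integrates the small-angle one-line theory, which reduces to a diffusion equation for the shoreline position (Pelnard-Considere). Grid, diffusivity, and the initial shoreline are invented.

```python
# Minimal one-line shoreline-change sketch (Pelnard-Considere diffusion form).
# GENESIS is a far more general one-line model; this only shows the concept.
# Grid, diffusivity and initial shoreline are invented for the example.
import numpy as np

nx, dx, dt, eps = 200, 50.0, 3600.0, 0.05  # cells, m, s, m^2/s (illustrative)
y = np.zeros(nx)                           # shoreline position vs a baseline
y[90:110] = 20.0                           # an initial beach fill / salient, m

for _ in range(24 * 365):                  # one year of hourly steps
    # explicit scheme: dy/dt = eps * d2y/dx2 (small-angle one-line theory)
    y[1:-1] += eps * dt / dx**2 * (y[2:] - 2 * y[1:-1] + y[:-2])

print("remaining max advance after one year: %.1f m" % y.max())
```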
Abstract:
The design optimization of industrial products has always been an essential activity to improve product quality while reducing time-to-market and production costs. Although cost management is very complex and spans all phases of the product life cycle, the control of geometrical and dimensional variations, known as Dimensional Management (DM), allows compliance with product and process requirements. Hence, tolerance-cost optimization becomes the main practice for an effective application of Design for Tolerancing (DfT) and Design to Cost (DtC) approaches, by enabling a connection between product tolerances and the associated manufacturing costs. However, despite the growing interest in this topic, the profitable industrial application of these techniques is hampered by their complexity: the definition of a systematic framework is the key element for improving design optimization, enhancing the concurrent use of Computer-Aided tools and Model-Based Definition (MBD) practices. The present doctoral research aims to define and develop an integrated methodology for product/process design optimization that better exploits the new capabilities of advanced simulations and tools. By implementing predictive models and multi-disciplinary optimization, a Computer-Aided Integrated framework for tolerance-cost optimization is proposed that allows the integration of DfT and DtC approaches and their direct application to the design of automotive components. Several case studies have been considered, with the final application of the integrated framework to a high-performance V12 engine assembly, achieving both functional targets and cost reduction. From a scientific point of view, the proposed methodology improves the tolerance-cost optimization of industrial components. The integration of theoretical approaches and Computer-Aided tools makes it possible to analyse the influence of tolerances on both product performance and manufacturing costs. The case studies proved the methodology suitable for application in the industrial field and identified further areas for improvement and refinement.
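To make the core idea of tolerance-cost optimization concrete, the sketch below minimizes a manufacturing cost that grows as tolerances tighten, subject to a statistical (RSS) stack-up constraint on the assembly. The reciprocal cost model and all numbers are illustrative assumptions, not the thesis framework.

```python
# Sketch of a tolerance-cost optimization of the kind the thesis automates:
# minimize manufacturing cost subject to an RSS stack-up constraint.
# The reciprocal cost model and all numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

a = np.array([1.0, 1.5, 0.8])      # fixed cost per feature
b = np.array([0.20, 0.35, 0.15])   # cost-vs-tolerance coefficients
T_assembly = 0.30                  # allowed assembly variation (mm)

cost = lambda t: np.sum(a + b / t)                   # tighter tolerance -> higher cost
rss_ok = {"type": "ineq",
          "fun": lambda t: T_assembly - np.sqrt(np.sum(t**2))}  # RSS stack-up

res = minimize(cost, x0=np.full(3, 0.1), constraints=[rss_ok],
               bounds=[(1e-3, 1.0)] * 3)
print("optimal tolerances (mm):", res.x.round(4), " cost:", round(res.fun, 2))
```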
Abstract:
The study of random probability measures is a lively research topic that has attracted interest from different fields in recent years. In this thesis, we consider random probability measures in the context of Bayesian nonparametrics, where the law of a random probability measure is used as a prior distribution, and in the context of distributional data analysis, where the goal is to perform inference given a sample from the law of a random probability measure. The contributions contained in this thesis can be subdivided according to three different topics: (i) the use of almost surely discrete repulsive random measures (i.e., whose support points are well separated) for Bayesian model-based clustering; (ii) the proposal of new laws for collections of random probability measures for Bayesian density estimation of partially exchangeable data subdivided into different groups; and (iii) the study of principal component analysis and regression models for probability distributions seen as elements of the 2-Wasserstein space. Specifically, for point (i) we propose an efficient Markov chain Monte Carlo algorithm for posterior inference, which sidesteps the need for the split-merge reversible jump moves typically associated with poor performance; we propose a model for clustering high-dimensional data by introducing a novel class of anisotropic determinantal point processes; and we study the distributional properties of the repulsive measures, shedding light on important theoretical results that enable more principled prior elicitation and more efficient posterior simulation algorithms. For point (ii), we consider several models suitable for clustering homogeneous populations, inducing spatial dependence across groups of data, and extracting the characteristic traits common to all the data groups, and we propose a novel vector autoregressive model to study the growth curves of Singaporean children. Finally, for point (iii), we propose a novel class of projected statistical methods for distributional data analysis for measures on the real line and on the unit circle.
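A fact that underlies point (iii): for distributions on the real line, the 2-Wasserstein distance reduces to an L2 distance between quantile functions, which is what makes PCA and regression in the 2-Wasserstein space tractable. A minimal empirical sketch, with invented data:

```python
# The 2-Wasserstein distance between distributions on the real line is the
# L2 distance between their quantile functions; here estimated from samples.
# Data are invented for illustration.
import numpy as np

def wasserstein2(x, y, grid=np.linspace(0.005, 0.995, 200)):
    qx = np.quantile(x, grid)   # empirical quantile functions on a shared grid
    qy = np.quantile(y, grid)
    return np.sqrt(np.mean((qx - qy) ** 2))

rng = np.random.default_rng(2)
print(wasserstein2(rng.normal(0, 1, 1000), rng.normal(1, 2, 1000)))
```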
Abstract:
The thesis investigates the potential of photoactive organic semiconductors as a new class of materials for developing bioelectronic devices that can convert light into biological signals. The materials can be either small molecules or polymers. When these materials interact with aqueous biological fluids, they give rise to various electrochemical phenomena, including photofaradaic or photocapacitive processes, depending on whether the photogenerated charges participate in redox reactions or accumulate at an interface. The thesis starts by studying the behavior of the H2Pc/PTCDI molecular p/n thin-film heterojunction in contact with an aqueous electrolyte. An equivalent circuit model is developed, explaining the measurements and predicting the behavior in wireless mode. A systematic study on p-type polymeric thin films is presented, comparing rr-P3HT with two low-bandgap conjugated polymers: PBDB-T and PTB7. The results demonstrate that PTB7 has superior photocurrent performance due to more effective electron transfer onto acceptor states in solution. Furthermore, the thesis addresses the issue of photovoltage generation for wireless photoelectrodes. An analytical model based on photoactivated charge transfer across the organic-semiconductor/water interface is developed, explaining the large photovoltages observed for polymeric p-type semiconductor electrodes in water. Then, flash-precipitated nanoparticles made of the same three photoactive polymers are investigated, assessing the influence of fabrication parameters on the stability, structure, and energetics of the nanoparticles. Photocathodic current generation and the consequent positive charge accumulation are also investigated. Additionally, newly developed porous P3HT thin films are tested, showing that porosity increases both the photocurrent and the semiconductor/water interfacial capacitance. Finally, the thesis demonstrates the biocompatibility of the materials in in-vitro experiments and shows safe levels of photoinduced intracellular ROS production with p-type polymeric thin films and nanoparticles. The findings highlight the potential of photoactive organic semiconductors for the development of optobioelectronic devices, demonstrating their ability to convert light into biological signals and to interface with biological fluids.
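As a toy calculation in the spirit of the equivalent-circuit modelling the thesis develops (not the published model): a light-gated photovoltage source charging the semiconductor/water interfacial capacitance through a series resistance reproduces the transient current spikes typical of photocapacitive electrodes. All element values are invented.

```python
# Toy equivalent-circuit sketch: a light-gated photovoltage Vph charges the
# interfacial capacitance C through a series resistance R, giving transient
# spikes at light on/off as in photocapacitive electrodes. Values invented.
import numpy as np

R, C, Vph, dt = 1e4, 1e-6, 0.2, 1e-5      # ohm, farad, volt, s
t = np.arange(0, 0.2, dt)
light = (t > 0.05) & (t < 0.15)           # a 100 ms light pulse
v, i = 0.0, []
for on in light:
    src = Vph if on else 0.0
    cur = (src - v) / R                   # current through the series resistance
    v += cur / C * dt                     # charge accumulating at the interface
    i.append(cur)
print("peak photocapacitive current: %.1f uA" % (max(i) * 1e6))
```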
Abstract:
Nowadays, product development in all its phases plays a fundamental role in the industrial chain. The need for a company to compete at a high level, to respond quickly to market demands, and therefore to engineer the product rapidly and with a high level of quality, has driven the adoption of new, more advanced methods and processes. In recent years, design and production have been moving away from the 2D-based approach towards the concept of Model Based Definition. With this approach, increasingly complex systems become easier to handle and, above all, cheaper to produce. Thanks to Model Based Definition it is possible to share data in a lean and simple way across the entire engineering and production chain of the product. The great advantage of this approach is precisely the uniqueness of the information. In this thesis work, the approach has been exploited in the context of tolerances with the aid of CAD/CAT software. Tolerance analysis, or dimensional variation analysis (DVA), is a way to understand how sources of variation in part dimensions and assembly constraints propagate between parts and assemblies, and how that variation affects the ability of a design to meet its requirements. It is critically important to note that tolerance directly affects the cost and performance of products. Worst Case Analysis (WCA) and statistical analysis (RSS) are the two principal methods in DVA. The thesis aims to show the advantages of using statistical dimensional analysis by creating and examining various case studies, using PTC CREO software for CAD modeling and CETOL 6σ for tolerance analysis. Moreover, a comparison between manual and 3D analysis is provided, focusing attention on the information lost in the 1D case. The results obtained highlight the need to use this approach from the early stages of the product design cycle.
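The difference between the two DVA methods the abstract names is easy to show on a 1D tolerance chain; the tolerances below are illustrative.

```python
# Worst-Case (WCA) vs statistical (RSS) stack-up for a 1D tolerance chain,
# the two DVA methods the thesis compares. Tolerances are illustrative.
import math

tols = [0.10, 0.05, 0.08, 0.02]           # symmetric part tolerances, mm

wca = sum(tols)                            # every part at its limit simultaneously
rss = math.sqrt(sum(t**2 for t in tols))   # statistical root-sum-square

print(f"WCA: +/-{wca:.3f} mm   RSS: +/-{rss:.3f} mm")
# RSS is tighter because simultaneous worst-case parts are statistically rare.
```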
Abstract:
In acquired immunodeficiency syndrome (AIDS) studies it is quite common to observe viral load measurements collected irregularly over time. Moreover, these measurements can be subject to upper and/or lower detection limits, depending on the quantification assays. A complication arises when these continuous repeated measures have heavy-tailed behavior. For such data structures, we propose a robust censored linear model based on the multivariate Student's t-distribution. To account for the autocorrelation among the irregularly observed measures, a damped exponential correlation structure is employed. An efficient expectation-maximization-type algorithm is developed for computing the maximum likelihood estimates, obtaining as by-products the standard errors of the fixed effects and the log-likelihood function. The proposed algorithm uses closed-form expressions at the E-step that rely on formulas for the mean and variance of a truncated multivariate Student's t-distribution. The methodology is illustrated through an application to a Human Immunodeficiency Virus-AIDS (HIV-AIDS) study and several simulation studies.
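The damped exponential correlation (DEC) structure sets corr(y_i, y_j) = phi1^(|t_i - t_j|^phi2) with 0 < phi1 < 1 and phi2 >= 0, so phi2 = 1 recovers a continuous-time AR(1) and phi2 = 0 compound symmetry. A minimal sketch, with invented visit times and parameters:

```python
# Damped exponential correlation (DEC) matrix for irregular measurement
# times: corr(y_i, y_j) = phi1 ** (|t_i - t_j| ** phi2).
# Visit times and parameter values are invented for illustration.
import numpy as np

def dec_corr(times, phi1=0.8, phi2=0.5):
    lags = np.abs(times[:, None] - times[None, :])
    return phi1 ** (lags ** phi2)

times = np.array([0.0, 0.5, 1.7, 4.0])   # irregular visit times (e.g., months)
print(dec_corr(times).round(3))
```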
Abstract:
Extracts from malagueta pepper (Capsicum frutescens L.) were obtained using supercritical fluid extraction (SFE) assisted by ultrasound, with carbon dioxide as solvent at 15 MPa and 40 °C. The SFE global yield increased by up to 77% when ultrasound waves were applied, and the best ultrasound-assisted extraction condition was an ultrasound power of 360 W applied for 60 min. Four capsaicinoids were identified in the extracts and quantified by high-performance liquid chromatography. The use of ultrasonic waves did not significantly influence the capsaicinoid profiles or the phenolic content of the extracts. However, ultrasound did enhance the SFE rate. A model based on the broken and intact cell concept was adequate to represent the extraction kinetics and estimate the mass transfer coefficients, which increased with ultrasound. Images obtained by field emission scanning electron microscopy showed that the action of ultrasonic waves did not cause cracks on the cell wall surface. On the other hand, ultrasound promoted disturbances in the vegetable matrix, leading to the release of extractable material on the solid surface. The effects of ultrasound were more significant in SFE from larger solid particles.
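In broken-and-intact-cell models, extraction proceeds through a fast stage controlled by solute freed from broken cells and a slow stage controlled by diffusion from intact cells. The snippet below sketches a schematic two-stage overall extraction curve in that spirit; it is a simplified stand-in with invented parameters, not the paper's fitted model.

```python
# Schematic overall-extraction curve in the spirit of the broken-and-intact
# cell concept: a fast washing stage (broken cells) plus a slow
# diffusion-controlled stage (intact cells). Simplified sketch with invented
# parameters, not the paper's fitted model.
import numpy as np

e_inf, frac_broken = 0.12, 0.6   # total extractable yield, broken-cell fraction
k_fast, k_slow = 0.15, 0.01      # 1/min: fast washing vs slow diffusion

t = np.linspace(0, 120, 7)       # extraction time, min
e = e_inf * (1 - frac_broken * np.exp(-k_fast * t)
             - (1 - frac_broken) * np.exp(-k_slow * t))
for ti, ei in zip(t, e):
    print(f"t = {ti:5.0f} min  yield = {ei:.4f} g/g")
```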
Abstract:
High levels of substrate-based 1,5-stereoinduction are obtained in the boron-mediated aldol reactions of beta-oxygenated methyl ketones with achiral and chiral aldehydes. Remote induction from the boron enolates gives the 1,5-anti adducts, with the enolate pi-facial selectivity critically dependent upon the nature of the beta-alkoxy protecting group. This 1,5-anti aldol methodology has been strategically employed in the total synthesis of several natural products. At present, the origin of the high level of 1,5-anti induction obtained with the boron enolates remains unclear, although a model based on hydrogen bonding between the alkoxy oxygen and the formyl hydrogen has recently been proposed.
Abstract:
In recent years, there has been great interest in nonequilibrium systems. Although Master Equations are among the most common methods used to describe these systems, the literature about them is not straightforward, owing to the mathematical framework used in their derivations. The goals of this work are to present the physical concepts behind the development of the Master Equations and to discuss their basic properties via a matrix approach. It is also shown how the Master Equations can be used to model typical nonequilibrium processes, such as multi-well chemical reactions and radiation absorption processes.
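The matrix approach is easy to illustrate: writing the Master Equation as dp/dt = W p, where the off-diagonal entries of W are transition rates and each column sums to zero (probability conservation), gives p(t) = exp(Wt) p(0). A minimal two-well sketch with invented rates:

```python
# Matrix approach to a Master Equation, dp/dt = W p, for a toy two-well
# system: off-diagonal entries of W are transition rates and each column
# sums to zero (probability conservation). Rates are invented.
import numpy as np
from scipy.linalg import expm

k12, k21 = 0.3, 0.1                 # hop rates between the wells (1/s)
W = np.array([[-k12,  k21],
              [ k12, -k21]])        # columns sum to zero

p0 = np.array([1.0, 0.0])           # start entirely in well 1
for t in (0.0, 2.0, 10.0, 50.0):
    print(f"t = {t:5.1f} s  p = {expm(W * t) @ p0}")
# the long-time limit approaches the stationary distribution
# (k21, k12) / (k12 + k21) = (0.25, 0.75)
```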
Abstract:
Gene clustering is a useful exploratory technique for grouping together genes with similar expression levels under distinct cell-cycle phases or distinct conditions. It helps the biologist to identify potentially meaningful relationships between genes. In this study, we propose a clustering method based on multivariate normal mixture models, where the number of clusters is predicted via sequential hypothesis tests: at each step, the method considers a mixture model of m components (m = 2 in the first step) and tests whether it should in fact be m - 1. If the hypothesis is rejected, m is increased and a new test is carried out. The method continues (increasing m) until the hypothesis is accepted. The theoretical core of the method is the Full Bayesian Significance Test, an intuitive Bayesian approach which requires neither model-complexity penalization nor positive prior probabilities for sharp hypotheses. Numerical experiments were based on a cDNA microarray dataset consisting of the expression levels of 205 genes belonging to four functional categories, for 10 distinct strains of Saccharomyces cerevisiae. To analyze the method's sensitivity to data dimension, we performed principal components analysis on the original dataset and predicted the number of classes using 2 to 10 principal components. Compared to Mclust (model-based clustering), our method shows more consistent results.
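The sequential scheme is sketched below: fit mixtures with m = 2, 3, ... components and stop when the "m - 1 components suffice" hypothesis is accepted. The paper's test is the Full Bayesian Significance Test; here a simple BIC comparison stands in for it, purely as an illustration of the stopping rule, on invented data.

```python
# Sketch of the sequential scheme: fit mixtures with m = 2, 3, ... and stop
# when "m - 1 components suffice" is accepted. The paper uses the Full
# Bayesian Significance Test; a BIC comparison stands in for it here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (100, 2)),
               rng.normal(4, 1, (100, 2)),
               rng.normal(8, 1, (100, 2))])  # three synthetic "gene" clusters

m = 2
while True:
    smaller = GaussianMixture(m - 1, random_state=0).fit(X)
    larger = GaussianMixture(m, random_state=0).fit(X)
    if smaller.bic(X) <= larger.bic(X):      # "m - 1 suffices" accepted
        print("selected number of clusters:", m - 1)
        break
    m += 1                                   # rejected: try a larger mixture
```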
Abstract:
A compact frequency standard based on an expanding cold 133Cs cloud is under development in our laboratory. In a first experiment, cold Cs atoms were prepared by a magneto-optical trap in a vapor cell, and a microwave antenna was used to transmit the radiation for the clock transition. The signal obtained from the fluorescence of the expanding cold-atom cloud is used to lock a microwave chain, and in this way the overall system stability is evaluated. A theoretical model based on a two-level system interacting with the two microwave pulses enables interpretation of the observed features, especially the poor Ramsey fringe contrast.
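A minimal sketch of such a two-level Ramsey model: two rotating-frame microwave pulses of length tau separated by free evolution T, with the transition probability as a function of detuning tracing the Ramsey fringes. The Rabi frequency and timings are invented for illustration, not the experimental values.

```python
# Two-level Ramsey model sketch: two rotating-frame microwave pulses of
# length tau separated by free evolution T; the transition probability vs
# detuning gives the Ramsey fringes. Parameters are invented.
import numpy as np
from scipy.linalg import expm

def ramsey_probability(delta, omega=np.pi / 2, tau=1.0, T=10.0):
    H_pulse = 0.5 * np.array([[-delta, omega], [omega, delta]])  # during pulses
    H_free = 0.5 * np.array([[-delta, 0.0], [0.0, delta]])       # between pulses
    U = expm(-1j * H_pulse * tau) @ expm(-1j * H_free * T) @ expm(-1j * H_pulse * tau)
    psi = U @ np.array([1.0, 0.0])     # start in the lower clock state
    return abs(psi[1]) ** 2            # population transferred to the upper state

for d in np.linspace(-1.0, 1.0, 9):
    print(f"detuning {d:+.2f}: P = {ramsey_probability(d):.3f}")
```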