967 results for Modeling methods
Abstract:
Measurement and modeling techniques were developed to improve over-water gaseous air-water exchange measurements for persistent bioaccumulative and toxic chemicals (PBTs). Analytical methods were applied to atmospheric measurements of hexachlorobenzene (HCB), polychlorinated biphenyls (PCBs), and polybrominated diphenyl ethers (PBDEs). Additionally, the sampling and analytical methods are well suited to the study of semivolatile organic compounds (SOCs) in air, with applications related to secondary organic aerosol formation and to urban and indoor air quality. A novel gas-phase cleanup method is described for use with thermal desorption methods for analysis of atmospheric SOCs using multicapillary denuders. The cleanup selectively removed hydrogen-bonding chemicals from samples, including much of the background matrix of oxidized organic compounds in ambient air, and thereby improved precision and method detection limits for nonpolar analytes. A model is presented that predicts gas collection efficiency and particle collection artifact for SOCs in multicapillary denuders using polydimethylsiloxane (PDMS) sorbent. An approach is presented to estimate the equilibrium PDMS-gas partition coefficient (K_PDMS) from an Abraham solvation parameter model for any SOC. A high-flow-rate (300 L min⁻¹) multicapillary denuder was designed for measurement of trace atmospheric SOCs. Overall method precision and detection limits were determined using field duplicates and compared to the conventional high-volume sampler method. The high-flow denuder is an alternative to high-volume or passive samplers when separation of gas- and particle-associated SOCs upstream of a filter and a short sample collection time are advantageous. A Lagrangian internal boundary layer transport exchange (IBLTE) Model is described. The model predicts the near-surface variation of several quantities with fetch in coastal, offshore flow: (1) modification in potential temperature and gas mixing ratio, (2) surface fluxes of sensible heat, water vapor, and trace gases using the NOAA COARE Bulk Algorithm and Gas Transfer Model, and (3) vertical gradients in potential temperature and mixing ratio. The model was applied to interpret micrometeorological measurements of the air-water exchange flux of HCB and several PCB congeners in Lake Superior. The IBLTE Model can be applied to any scalar, including water vapor, carbon dioxide, dimethyl sulfide, and other scalar quantities of interest with respect to hydrology, climate, and ecosystem science.
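As a rough illustration of the Abraham solvation parameter approach mentioned above, the sketch below evaluates a linear free-energy relationship of the form log K = c + eE + sS + aA + bB + lL. The system coefficients shown are placeholders, not the values fitted in the work.

```python
# Minimal sketch of an Abraham-type solvation parameter model for estimating the
# PDMS-gas partition coefficient (log K_PDMS) of a semivolatile organic compound.
# The system coefficients below are illustrative placeholders, NOT the fitted
# values used in the thesis; real coefficients come from regression against
# measured K_PDMS data.

def log_k_pdms(E, S, A, B, L, coeffs=None):
    """Abraham LFER: log K = c + e*E + s*S + a*A + b*B + l*L.

    E, S, A, B, L are the solute descriptors (excess molar refraction,
    dipolarity/polarizability, H-bond acidity, H-bond basicity, and the log
    of the gas-hexadecane partition coefficient, respectively).
    """
    if coeffs is None:
        # Placeholder system coefficients for a PDMS-like phase (hypothetical).
        coeffs = {"c": 0.2, "e": -0.1, "s": 0.4, "a": 0.3, "b": 0.5, "l": 0.85}
    return (coeffs["c"] + coeffs["e"] * E + coeffs["s"] * S
            + coeffs["a"] * A + coeffs["b"] * B + coeffs["l"] * L)

# Example: descriptors for a hypothetical nonpolar SOC.
print(log_k_pdms(E=1.5, S=0.9, A=0.0, B=0.1, L=7.2))
```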
Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat; it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative, and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically weighted regression analysis produced the most accurate result because it accommodates non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism. This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality of life.
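For readers unfamiliar with geographically weighted regression, the following minimal numpy sketch shows the core idea of fitting a separate, distance-weighted least-squares model at each location; the data, Gaussian kernel, and bandwidth are synthetic and are not taken from the dissertation.

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Fit a local weighted least-squares model at every observation point.

    coords    : (n, 2) array of point locations
    X         : (n, k) design matrix (include a column of ones for the intercept)
    y         : (n,) response variable
    bandwidth : Gaussian kernel bandwidth controlling how quickly influence decays
    """
    n = coords.shape[0]
    betas = np.empty((n, X.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)   # distances to site i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
        W = np.diag(w)
        # Weighted least squares: beta_i = (X'WX)^-1 X'Wy
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return betas

# Toy example with synthetic data.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(50, 2))
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=0.1, size=50)
print(gwr_coefficients(coords, X, y, bandwidth=25.0)[:3])
```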
Abstract:
The anisotropy of the Biscayne Aquifer, which serves as the source of potable water for Miami-Dade County, was investigated by applying geophysical methods. Electrical resistivity imaging, self-potential, and ground penetrating radar techniques were employed in both regional and site-specific studies. In the regional study, electrical anisotropy and resistivity variation with depth were investigated with azimuthal square-array measurements at 13 sites. The observed coefficient of electrical anisotropy ranged from 1.01 to 1.36. The general direction of measured anisotropy is uniform for most sites and trends W-E or SE-NW irrespective of depth. Measured electrical properties were used to estimate the anisotropic component of the secondary porosity and the hydraulic anisotropy, which ranged from 1 to 11% and from 1.18 to 2.83, respectively. 1-D sounding analysis was used to model the variation of formation resistivity with depth. Resistivities decreased from the NW (close to the margins of the Everglades) to the SE on the shores of Biscayne Bay. Porosity calculated from Archie's law ranged from 18 to 61%, with higher values found along the ridge. Higher anisotropy, porosity, and hydraulic conductivity were found on the Atlantic Coastal Ridge, with lower values in the low-lying areas west of the ridge. The higher anisotropy and porosity are attributed to higher dissolution rates of the oolitic facies of the Miami Formation composing the ridge. The direction of minimum resistivity from this study is similar to the predevelopment groundwater flow direction indicated in published modeling studies. Detailed investigations were carried out to evaluate the higher anisotropy at West Perrine Park, located on the ridge, and at the Snapper Creek Municipal well field, where the anisotropy trend changes with depth. The higher anisotropy is attributed to the presence of solution cavities oriented in the E-SE direction on the ridge. Similarly, the change in hydraulic anisotropy at the well field may be related to solution cavities, the surface canal, and groundwater extraction wells.
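The two quantities used repeatedly above, porosity from Archie's law and the coefficient of electrical anisotropy, can be expressed compactly. The sketch below uses assumed values for the Archie constants (a, m) and illustrative resistivities rather than the calibrated site values.

```python
import numpy as np

def archie_porosity(rho_bulk, rho_water, a=1.0, m=2.0):
    """Porosity from Archie's law: rho_bulk = a * rho_water * phi**(-m),
    so phi = (a * rho_water / rho_bulk)**(1/m).
    The tortuosity factor a and cementation exponent m are assumed values
    (typical for carbonates), not the site-calibrated constants."""
    return (a * rho_water / rho_bulk) ** (1.0 / m)

def anisotropy_coefficient(rho_max, rho_min):
    """Coefficient of electrical anisotropy from azimuthal square-array data,
    taken here as the square root of the ratio of maximum to minimum
    apparent resistivity."""
    return np.sqrt(rho_max / rho_min)

# Example: hypothetical apparent resistivities in ohm-m.
print(archie_porosity(rho_bulk=120.0, rho_water=18.0))       # fractional porosity
print(anisotropy_coefficient(rho_max=140.0, rho_min=110.0))  # ~1.1
```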
Abstract:
Every space launch increases the overall amount of space debris. Satellites have limited awareness of nearby objects that might pose a collision hazard. Astrometric, radiometric, and thermal models for the study of space debris in low-Earth orbit have been developed. This modeling approach proposes analysis methods that provide increased Local Area Awareness for satellites in low-Earth and geostationary orbit. Local Area Awareness is defined as the ability to detect, characterize, and extract useful information regarding resident space objects as they move through the space environment surrounding a spacecraft. The study of space debris is of critical importance to all space-faring nations. Characterization efforts are proposed using long-wave infrared sensors for space-based observations of debris objects in low-Earth orbit. Long-wave infrared sensors are commercially available and do not require the target to be solar illuminated, since the received signal is temperature dependent. The characterization of debris objects through passive imaging techniques allows for further studies into the origination, specifications, and future trajectory of debris objects. Conclusions are made regarding the aforementioned thermal analysis as a function of debris orbit, geometry, orientation with respect to time, and material properties. Development of a thermal model permits the characterization of debris objects based upon their received long-wave infrared signals. Information regarding the material type, size, and tumble rate of the observed debris objects is extracted. This investigation proposes the utilization of long-wave infrared radiometric models of typical debris to develop techniques for the detection and characterization of debris objects via signal analysis of unresolved imagery. Knowledge regarding the orbital type and semi-major axis of the observed debris object is extracted via astrometric analysis. This knowledge may aid in constraining the admissible region for the initial orbit determination process. The resultant orbital information is then fused with the radiometric characterization analysis, enabling further characterization of the observed debris object. This fused analysis, yielding orbital, material, and thermal properties, significantly increases a satellite's Local Area Awareness via an intimate understanding of the debris environment surrounding the spacecraft.
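A hedged sketch of the kind of long-wave infrared radiometric model referred to above: band-integrated grey-body radiance from Planck's law. The emissivity and the 8-14 μm band limits are illustrative assumptions, not the material models developed in the work.

```python
import numpy as np

# Physical constants
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, T):
    """Spectral radiance of a blackbody [W m^-2 sr^-1 m^-1] at temperature T."""
    return (2.0 * H * C**2 / wavelength_m**5) / np.expm1(H * C / (wavelength_m * KB * T))

def band_radiance(T, emissivity=0.9, band=(8e-6, 14e-6), n=500):
    """Band-integrated LWIR radiance of a grey-body debris surface.
    The emissivity and band limits are illustrative assumptions."""
    wl = np.linspace(band[0], band[1], n)
    rad = emissivity * planck_radiance(wl, T)
    # Trapezoidal integration over the band
    return np.sum(0.5 * (rad[1:] + rad[:-1]) * np.diff(wl))

# Example: radiance of a debris surface at 280 K vs. 320 K.
print(band_radiance(280.0), band_radiance(320.0))
```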
Abstract:
In knowledge technology work, as expressed by the scope of this conference, there are a number of communities, each uncovering new methods, theories, and practices. The Library and Information Science (LIS) community is one such community. This community, through tradition and innovation, theories and practice, organizes knowledge and develops knowledge technologies formed by iterative research hewn to the values of equal access and discovery for all. The Information Modeling community is another contributor to knowledge technologies. It concerns itself with the construction of symbolic models that capture the meaning of information and organize it in ways that are computer-based, but human understandable. A recent paper that examines certain assumptions in information modeling builds a bridge between these two communities, offering a forum for a discussion on common aims from a common perspective. In a June 2000 article, Parsons and Wand separate classes from instances in information modeling in order to free instances from what they call the “tyranny” of classes. They attribute a number of problems in information modeling to inherent classification – that is, the disregard for the fact that instances can be conceptualized independent of any class assignment. By separating instances from classes, Parsons and Wand strike a sonorous chord with classification theory as understood in LIS. In the practice community and in the publications of LIS, faceted classification has shifted the paradigm of knowledge organization theory in the twentieth century. Here, with the critique of inherent classification and the resulting layered information modeling, a clear line joins the LIS classification theory community and the information modeling community. Both communities have their eyes turned toward networked resource discovery, and with this conceptual conjunction a new paradigmatic conversation can take place. Parsons and Wand propose that the layered information model can facilitate schema integration, schema evolution, and interoperability. These three spheres in information modeling have their own connotations, but are not distant from the aims of classification research in LIS. In this new conceptual conjunction, established by Parsons and Wand, information modeling through the layered information model can expand the horizons of classification theory beyond LIS, promoting a cross-fertilization of ideas on the interoperability of subject access tools like classification schemes, thesauri, taxonomies, and ontologies. This paper examines the common ground between the layered information model and faceted classification, establishing a vocabulary and outlining some common principles. It then turns to the issue of schema, the horizons of conventional classification, and the differences between Information Modeling and Library and Information Science. Finally, a framework is proposed that deploys an interpretation of the layered information modeling approach in a knowledge technologies context. In order to design subject access systems that will integrate, evolve, and interoperate in a networked environment, knowledge organization specialists must consider a semantic class independence like the one Parsons and Wand propose for information modeling.
Abstract:
Purpose – The purpose of this paper is to propose a theoretical framework, based on contemporary philosophical aesthetics, from which principled assessments of the aesthetic value of information organization frameworks may be conducted.
Design/methodology/approach – This paper identifies appropriate discourses within the field of philosophical aesthetics and constructs from them a framework for assessing aesthetic properties of information organization frameworks. This framework is then applied in two case studies examining the Library of Congress Subject Headings (LCSH) and Sexual Nomenclature: A Thesaurus.
Findings – In both information organization frameworks studied, the aesthetic analysis was useful in identifying judgments of the frameworks as aesthetic judgments, in promoting discovery of further areas of aesthetic judgment, and in prompting reflection on the nature of these aesthetic judgments.
Research limitations/implications – This study provides proof of concept for the aesthetic evaluation of information organization frameworks. Areas of future research are identified as the role of cultural relativism in such aesthetic evaluation and the identification of appropriate aesthetic properties of information organization frameworks.
Practical implications – By identifying a subset of judgments of information organization frameworks as aesthetic judgments, aesthetic evaluation of such frameworks can be made explicit and principled. Aesthetic judgments can be separated from questions of economic feasibility, functional requirements, and user orientation. Design and maintenance of information organization frameworks can be based on these principles.
Originality/value – This study introduces a new evaluative axis for information organization frameworks based on philosophical aesthetics. By improving the evaluation of such frameworks, design and maintenance can be guided by these principles.
Keywords: Evaluation, Research methods, Analysis, Bibliographic systems, Indexes, Retrieval languages
Abstract:
This work studies the application of Genetic Algorithms to anaerobic digestion modeling, in particular when using dynamical models. Along the work, different types of bioreactors are presented, such as batch, semi-batch, and continuous, as well as their mathematical modeling. The work intends to estimate the parameter values of two biological reaction models. To this end, the models are fitted to simulated results in which only one output variable, the produced biogas, is known. For this reason, the problems associated with inverse optimization are studied, using plots that provide clues about the sensitivity and identifiability associated with the problem. Particular solutions obtained from the identifiability analysis using the GENSSI and DAISY software packages are also presented. Finally, the optimization is performed using genetic algorithms. During this optimization it became necessary to improve the convergence of the genetic algorithms, which led to the development of an adaptation of the genetic algorithms that we called Neighbored Genetic Algorithms (NGA1 and NGA2). In order to understand whether this new approach outperforms the Basic Genetic Algorithms (BGA) and achieves the proposed goals, a study of 100 full optimization runs for each situation was carried out. Results show that NGA1 and NGA2 are statistically better than BGA. However, because it was not possible to obtain consistent results, the Nelder-Mead method was also applied, using the GA estimates as initial guesses.
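The following is a bare-bones, real-coded genetic algorithm for the parameter-estimation setting described above (fitting a digestion model to an observed biogas series). It illustrates only the basic GA machinery, not the NGA1/NGA2 variants or the thesis code; the simulate function and the toy usage are hypothetical.

```python
import numpy as np

def fit_parameters_ga(simulate, observed_biogas, bounds, pop_size=40,
                      generations=100, mutation_sigma=0.1, seed=0):
    """Tiny real-coded GA. `simulate(theta)` is assumed to return the biogas
    series predicted by the model for parameters theta; fitness is the
    negative sum of squared errors against the observed biogas."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))

    def fitness(theta):
        return -np.sum((simulate(theta) - observed_biogas) ** 2)

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # Arithmetic crossover and Gaussian mutation, clipped to the bounds
        alpha = rng.uniform(size=(pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]
        children += rng.normal(scale=mutation_sigma, size=children.shape) * (hi - lo)
        pop = np.clip(children, lo, hi)

    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

# Toy usage with a fake one-parameter-pair "model": biogas(t) = theta0 * (1 - exp(-theta1 * t)).
t = np.linspace(0, 30, 60)
observed = 12.0 * (1.0 - np.exp(-0.2 * t))
best = fit_parameters_ga(lambda th: th[0] * (1.0 - np.exp(-th[1] * t)),
                         observed, bounds=[(1.0, 20.0), (0.01, 1.0)])
print(best)
```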
Abstract:
Vaults are architectural elements that, throughout construction history, have been built with a great variety of materials, shapes, and sizes. The shape of these structural elements was often dictated by the need to cover complex spaces, by the required loading capacity, or by architectural aesthetics. Within this complex scenario, masonry patterns also generate different effects on the loading capacity, load percolation, and stiffness of the structure. These effects have been extensively investigated, both with empirical observations and with modern numerical methods. While most studies focus on analyzing the load-bearing capacity or the texture effect on vaulted structures, the aim of this analysis is to investigate the effects of the variation of a single structural characteristic on the load percolation in the vault. Moreover, an additional purpose of the work is the coding of a parametric model for generating different masonry vaulted structures. The proposed script can generate different typologies of vaulted structures based on structural characteristics such as the span, the length to be covered, and the dimensions of the blocks.
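One possible minimal reading of such a parametric script is sketched below: it generates block centroids for a semicircular barrel vault from the span, the length to be covered, and the block dimensions. It is a toy illustration, not the script developed in the work.

```python
import numpy as np

def barrel_vault_blocks(span, length, block_width, block_length):
    """Generate centroid coordinates for the voussoir blocks of a simple
    semicircular barrel vault laid in a staggered (running-bond) pattern.
    Only block centroids are returned; bond-pattern options, thickness and
    geometry checks of the real parametric model are omitted."""
    radius = span / 2.0
    n_arc = max(int(round(np.pi * radius / block_width)), 1)   # blocks along the arch
    n_len = max(int(round(length / block_length)), 1)          # courses along the length
    d_angle = np.pi / n_arc                                    # angular width of a block
    centroids = []
    for j in range(n_len):
        y = (j + 0.5) * block_length
        shift = 0.5 * d_angle if j % 2 else 0.0                # stagger alternate courses
        angles = np.arange(n_arc) * d_angle + 0.5 * d_angle + shift
        for a in angles[angles < np.pi]:
            centroids.append((radius * np.cos(a), y, radius * np.sin(a)))
    return np.array(centroids)

# Example: 4 m span, 10 m long vault built from 0.25 m x 0.5 m blocks.
print(barrel_vault_blocks(span=4.0, length=10.0,
                          block_width=0.25, block_length=0.5).shape)
```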
Abstract:
In the framework of industrial problems, Constrained Optimization is known to have very good modeling capability and performance overall, and it stands as one of the most powerful, explored, and exploited tools to address prescriptive tasks. The number of applications is huge, ranging from logistics to transportation, packing, production, telecommunications, scheduling, and much more. The main reason behind this success is the remarkable effort put in over the last decades by the OR community to develop realistic models and devise exact or approximate methods to solve the largest variety of constrained or combinatorial optimization problems (COPs), together with the spread of computational power and easily accessible OR software and resources. On the other hand, technological advancements have led to a wealth of data never seen before and increasingly push towards methods able to extract useful knowledge from it; among data-driven methods, Machine Learning techniques appear to be among the most promising, thanks to their successes in domains like Image Recognition, Natural Language Processing, and game playing, as well as to the amount of research involved. The purpose of the present research is to study how Machine Learning and Constrained Optimization can be used together to achieve systems able to leverage the strengths of both methods: this would open the way to exploiting decades of research on resolution techniques for COPs and to constructing models able to adapt and learn from available data. In the first part of this work, we survey the existing techniques and classify them according to the type, method, or scope of the integration; subsequently, we introduce Moving Target, a novel and general algorithm devised to inject knowledge into learning models through constraints. In the last part of the thesis, two applications stemming from real-world projects carried out in collaboration with Optit are presented.
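As a generic, illustrative example of coupling learning with constrained optimization (and not the Moving Target algorithm introduced in the thesis), the sketch below follows a simple predict-then-optimize pattern: a regression model estimates the cost coefficients of a small linear program, which is then solved with scipy. All data and constraints are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.optimize import linprog

rng = np.random.default_rng(42)

# Historical data: context features and the observed costs of two activities.
features = rng.normal(size=(200, 3))
true_costs = features @ np.array([[1.0, 0.2], [0.3, 1.1], [0.5, 0.4]]) + 5.0
costs = true_costs + rng.normal(scale=0.1, size=true_costs.shape)

# Learn the feature -> cost map.
model = LinearRegression().fit(features, costs)

# Predict costs for a new context, then optimize the decision under constraints:
# minimize predicted cost s.t. x1 + x2 >= 10, with each activity capped at 8 units.
new_context = rng.normal(size=(1, 3))
c_hat = model.predict(new_context).ravel()
res = linprog(c=c_hat,
              A_ub=[[-1.0, -1.0]], b_ub=[-10.0],
              bounds=[(0, 8), (0, 8)])
print(res.x, res.fun)
```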
Abstract:
The coastal ocean is a complex environment with extremely dynamic processes that require a high-resolution, cross-scale modeling approach in which all hydrodynamic fields and scales are considered integral parts of the overall system. In the last decade, unstructured-grid models have been used to advance seamless modeling across scales. On the other hand, the data assimilation methodologies needed to improve unstructured-grid models in the coastal seas have been developed only recently and require significant advancements. Here, we link unstructured-grid ocean modeling to variational data assimilation methods. In particular, we show results from the modeling system SANIFS, based on the SHYFEM fully-baroclinic unstructured-grid model interfaced with OceanVar, a state-of-the-art variational data assimilation scheme adopted for several systems based on a structured grid. OceanVar implements a 3DVar DA scheme. The background error covariance matrix is modeled by the combination of three linear operators. The vertical part is represented using multivariate EOFs for temperature, salinity, and sea level anomaly. The horizontal part is assumed to be Gaussian and isotropic and is modeled using a first-order recursive filter algorithm designed for structured, regular grids. Here we introduce a novel recursive filter algorithm for unstructured grids. A local hydrostatic adjustment scheme models the rapidly evolving part of the background error covariance. We designed two data assimilation experiments using the SANIFS implementation interfaced with OceanVar over the period 2017-2018, one assimilating only temperature and salinity from Argo profiles and the second also including sea level anomaly. The results showed a successful implementation of the approach and the added value of the assimilation for the active tracer fields. Over the broad basin, no significant improvement is found for the sea level, which requires future investigation. Furthermore, a Machine Learning methodology based on an LSTM network has been used to predict the model SST increments.
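For context, the sketch below shows a one-dimensional first-order recursive filter of the kind used to model Gaussian horizontal correlations on regular grids; the unstructured-grid extension introduced in the work is not reproduced here.

```python
import numpy as np

def recursive_filter_1d(field, alpha, n_iter=4):
    """One-dimensional first-order recursive filter (regular-grid sketch).
    A forward and a backward pass are applied n_iter times; repeated
    application approaches a Gaussian smoothing whose length scale is
    controlled by alpha (0 < alpha < 1)."""
    y = np.array(field, dtype=float)
    for _ in range(n_iter):
        # forward pass
        for i in range(1, y.size):
            y[i] = alpha * y[i - 1] + (1.0 - alpha) * y[i]
        # backward pass
        for i in range(y.size - 2, -1, -1):
            y[i] = alpha * y[i + 1] + (1.0 - alpha) * y[i]
    return y

# Example: spread a single innovation (spike) placed mid-domain.
x = np.zeros(101)
x[50] = 1.0
print(recursive_filter_1d(x, alpha=0.6)[45:56].round(3))
```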
Abstract:
The field of bioelectronics involves the use of electrodes to exchange electrical signals with biological systems for diagnostic and therapeutic purposes in biomedical devices and healthcare applications. However, the mechanical compatibility of implantable devices with the human body has been a challenge, particularly with long-term implantation into target organs. Current rigid bioelectronics can trigger inflammatory responses and cause unstable device function due to the mechanical mismatch with the surrounding soft tissue. Recent advances in flexible and stretchable electronics have shown promise in making bioelectronic interfaces more biocompatible. To fully achieve this goal, materials science and engineering of soft electronic devices must be combined with quantitative characterization and modeling tools to understand the mechanical issues at the interface between electronic technology and biological tissue. Local mechanical characterization is crucial to understanding the activation of failure mechanisms and to optimizing the devices. Experimental techniques for testing mechanical properties at the nanoscale are emerging, and the Atomic Force Microscope (AFM) is a good candidate for in situ local mechanical characterization of soft bioelectronic interfaces. In this work, in situ experimental techniques based solely on the AFM, supported by interpretive models, are reported for the characterization of planar and three-dimensional devices suitable for in vivo and in vitro biomedical experimentation. The combination of the proposed models and experimental techniques provides access to the local mechanical properties of soft bioelectronic interfaces. The study investigates the nanomechanics of hard thin gold films on soft polymeric substrates (poly(dimethylsiloxane), PDMS) and 3D inkjet-printed micropillars under different deformation states. The proposed characterization methods provide a rapid and precise determination of mechanical properties, making it possible to parametrize the microfabrication steps and investigate their impact on the final device.
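As an example of the kind of interpretive model that can back AFM force-indentation data, the sketch below fits a standard Hertz spherical-contact model to a synthetic curve. The tip radius, Poisson ratio, and modulus are assumed values, and the thesis's own models for films on substrates and micropillars differ from this generic case.

```python
import numpy as np
from scipy.optimize import curve_fit

def hertz_force(delta, E, R=20e-9, nu=0.5):
    """Hertz contact force for a spherical tip of radius R indenting an
    elastic half-space by delta: F = (4/3) * E/(1-nu^2) * sqrt(R) * delta^1.5.
    Generic textbook model with assumed R and nu, used here only as an example."""
    return (4.0 / 3.0) * (E / (1.0 - nu**2)) * np.sqrt(R) * delta**1.5

# Synthetic force-indentation curve for a soft (PDMS-like) sample, E ~ 2 MPa.
delta = np.linspace(0, 200e-9, 100)                     # indentation depth, m
force = hertz_force(delta, 2e6) + np.random.default_rng(1).normal(0, 1e-11, delta.size)

E_fit, _ = curve_fit(hertz_force, delta, force, p0=[1e6])   # fit only the modulus
print(f"fitted modulus: {E_fit[0] / 1e6:.2f} MPa")
```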
Assessing brain connectivity through electroencephalographic signal processing and modeling analysis
Abstract:
Brain functioning relies on the interaction of several neural populations connected through complex connectivity networks, enabling the transmission and integration of information. Recent advances in neuroimaging techniques, such as electroencephalography (EEG), have deepened our understanding of the reciprocal roles played by brain regions during cognitive processes. The underlying idea of this PhD research is that EEG-related functional connectivity (FC) changes in the brain may incorporate important neuromarkers of behavior and cognition, as well as brain disorders, even at subclinical levels. However, a complete understanding of the reliability of the wide range of existing connectivity estimation techniques is still lacking. The first part of this work addresses this limitation by employing Neural Mass Models (NMMs), which simulate EEG activity and offer a unique tool to study interconnected networks of brain regions in controlled conditions. NMMs were employed to test FC estimators like Transfer Entropy and Granger Causality in linear and nonlinear conditions. Results revealed that connectivity estimates reflect information transmission between brain regions, a quantity that can be significantly different from the connectivity strength, and that Granger causality outperforms the other estimators. A second objective of this thesis was to assess brain connectivity and network changes on EEG data reconstructed at the cortical level. Functional brain connectivity has been estimated through Granger Causality, in both temporal and spectral domains, with the following goals: a) detect task-dependent functional connectivity network changes, focusing on internal-external attention competition and fear conditioning and reversal; b) identify resting-state network alterations in a subclinical population with high autistic traits. Connectivity-based neuromarkers, compared to the canonical EEG analysis, can provide deeper insights into brain mechanisms and may drive future diagnostic methods and therapeutic interventions. However, further methodological studies are required to fully understand the accuracy and information captured by FC estimates, especially concerning nonlinear phenomena.
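A minimal time-domain illustration of the Granger-causality estimator discussed above, based on comparing the residual variances of restricted and full autoregressive models; the cortical-level and spectral-domain analyses of the thesis are not reproduced here.

```python
import numpy as np

def granger_causality(x, y, order=5):
    """Minimal pairwise Granger causality from x to y, computed as the log
    ratio of residual variances between a restricted AR model of y and a full
    model that also includes lags of x."""
    n = len(y)
    Y = y[order:]
    lags_y = np.column_stack([y[order - k: n - k] for k in range(1, order + 1)])
    lags_x = np.column_stack([x[order - k: n - k] for k in range(1, order + 1)])
    restricted = np.column_stack([np.ones(n - order), lags_y])
    full = np.column_stack([restricted, lags_x])
    var_r = np.var(Y - restricted @ np.linalg.lstsq(restricted, Y, rcond=None)[0])
    var_f = np.var(Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0])
    return np.log(var_r / var_f)       # > 0 suggests lags of x help predict y

# Toy example: y is partly driven by past x, so GC(x -> y) should exceed GC(y -> x).
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.normal()
print(granger_causality(x, y), granger_causality(y, x))
```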
Abstract:
The discovery of new materials and their functions has always been a fundamental component of technological progress. Nowadays, the quest for new materials is stronger than ever: sustainability, medicine, robotics, and electronics are all key assets that depend on the ability to create specifically tailored materials. However, designing materials with desired properties is a difficult task, and the complexity of the discipline makes it difficult to identify general criteria. While scientists have developed a set of best practices (often based on experience and expertise), this is still a trial-and-error process. This becomes even more complex when dealing with advanced functional materials. Their properties depend on structural and morphological features, which in turn depend on fabrication procedures and environment, and subtle alterations lead to dramatically different results. Because of this, materials modeling and design is one of the most prolific research fields. Many techniques and instruments are continuously developed to enable new possibilities, both in the experimental and computational realms. Scientists strive to employ cutting-edge technologies in order to make progress. However, the field is strongly affected by unorganized file management and the proliferation of custom data formats and storage procedures, both in experimental and computational research. Results are difficult to find, interpret, and re-use, and a huge amount of time is spent interpreting and re-organizing data. This also strongly limits the application of data-driven and machine learning techniques. This work introduces possible solutions to the problems described above. Specifically, it covers developing features for specific classes of advanced materials and using them to train machine learning models that accelerate computational predictions for molecular compounds; developing methods for organizing non-homogeneous materials data; automating the use of device simulations to train machine learning models; and dealing with scattered experimental data and using them to discover new patterns.
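A schematic example of the feature-based workflow described above, with entirely synthetic descriptors and target values standing in for real materials data; the actual work derives physically meaningful descriptors for specific material classes.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_molecules = 300

# Hypothetical molecular descriptors and a placeholder target property.
features = rng.normal(size=(n_molecules, 8))
target = features[:, 0] * 2.0 - features[:, 3] + rng.normal(scale=0.2, size=n_molecules)

# Train and evaluate a surrogate model that replaces expensive computations.
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, features, target, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(3))
```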
Abstract:
The design process of any electric vehicle system has to be oriented towards the best energy efficiency, together with the constraint of maintaining comfort in the vehicle cabin. The main aim of this study is to find the best thermal management solution in terms of HVAC efficiency without compromising occupant comfort and internal air quality. An Arduino-controlled low-cost system of sensors was developed, compared against reference instrumentation (average R-squared of 0.92), and then used to characterise the vehicle cabin in real parking and driving trials. Data on the energy use of the HVAC were retrieved from the car's On-Board Diagnostic port. Energy savings using recirculation can reach 30%, but pollutant concentrations in the cabin build up in this operating mode. Moreover, the temperature profile appeared strongly non-uniform, with air temperature differences of up to 10 °C. Optimisation methods often require a high number of runs to find the optimal configuration of the system. Fast models proved to be beneficial for this task, while CFD-1D models are usually slower despite the higher level of detail they provide. In this work, the collected dataset was used to train a fast ML model of both the cabin and the HVAC using linear regression. The average scaled RMSE over all trials is 0.4%, while the computation time is 0.0077 ms for each second of simulated time on a laptop computer. Finally, a reinforcement learning environment was built with OpenAI Gym and Stable-Baselines3, using the built-in Proximal Policy Optimisation algorithm to update the policy and seek the best compromise between the comfort, air quality, and energy reward terms. The learning curves show an oscillating behaviour overall, with only two experiments behaving as expected, albeit too slowly. This result leaves large room for improvement, ranging from reward function engineering to the expansion of the ML model.
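One way to encode the comfort, air quality, and energy reward terms mentioned above is a simple weighted sum, as in the hypothetical sketch below; the weights, thresholds, and signals are illustrative assumptions rather than the reward actually engineered in the study.

```python
def hvac_reward(cabin_temp, setpoint, co2_ppm, hvac_power_kw,
                w_comfort=1.0, w_air=0.5, w_energy=0.2):
    """Weighted multi-objective reward balancing comfort, air quality and
    energy use. All weights and thresholds are illustrative assumptions."""
    comfort = -abs(cabin_temp - setpoint)                 # penalty for temperature error
    air_quality = -max(co2_ppm - 1000.0, 0.0) / 1000.0    # penalty above ~1000 ppm CO2
    energy = -hvac_power_kw                               # penalty for HVAC consumption
    return w_comfort * comfort + w_air * air_quality + w_energy * energy

# Example: slightly warm cabin, acceptable CO2, moderate compressor load.
print(hvac_reward(cabin_temp=24.5, setpoint=22.0, co2_ppm=900, hvac_power_kw=1.5))
```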
Abstract:
The present Dissertation shows how recent statistical analysis tools and open datasets can be exploited to improve modelling accuracy in two distinct yet interconnected domains of flood hazard (FH) assessment. In the first Part, unsupervised artificial neural networks are employed as regional models for sub-daily rainfall extremes. The models aim to learn a robust relation to estimate locally the parameters of Gumbel distributions of extreme rainfall depths for any sub-daily duration (1-24h). The predictions depend on twenty morphoclimatic descriptors. A large study area in north-central Italy is adopted, where 2238 annual maximum series are available. Validation is performed over an independent set of 100 gauges. Our results show that multivariate ANNs may remarkably improve the estimation of percentiles relative to the benchmark approach from the literature, where Gumbel parameters depend on mean annual precipitation. Finally, we show that the very nature of the proposed ANN models makes them suitable for interpolating predicted sub-daily rainfall quantiles across space and time-aggregation intervals. In the second Part, decision trees are used to combine a selected blend of input geomorphic descriptors for predicting FH. Relative to existing DEM-based approaches, this method is innovative, as it relies on the combination of three characteristics: (1) simple multivariate models, (2) a set of exclusively DEM-based descriptors as input, and (3) an existing FH map as reference information. First, the methods are applied to northern Italy, represented with the MERIT DEM (∼90 m resolution), and second, to the whole of Italy, represented with the EU-DEM (25 m resolution). The results show that multivariate approaches may (a) significantly enhance the delineation of flood-prone areas relative to a selected univariate approach, (b) provide accurate predictions of expected inundation depths, (c) produce encouraging results in extrapolation, (d) complete the information of imperfect reference maps, and (e) conveniently convert binary maps into a continuous representation of FH.
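For reference, once the Gumbel location and scale parameters for a given duration are estimated (in the work, predicted by the ANN from morphoclimatic descriptors), rainfall quantiles follow directly from the Gumbel inverse CDF, as in the sketch below with purely illustrative parameter values.

```python
import numpy as np

def gumbel_quantile(mu, beta, return_period_years):
    """Rainfall depth quantile from a Gumbel distribution with location mu and
    scale beta: x_T = mu - beta * ln(-ln(1 - 1/T)).
    mu and beta here are illustrative placeholders, not regional estimates."""
    F = 1.0 - 1.0 / return_period_years
    return mu - beta * np.log(-np.log(F))

# Example: hypothetical 1-hour rainfall depths (mm) for 20- and 100-year return periods.
for T in (20, 100):
    print(T, round(gumbel_quantile(mu=28.0, beta=9.0, return_period_years=T), 1))
```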