892 results for EXPLOITING MULTICOMMUTATION
Abstract:
A densely built environment is a complex system of infrastructure, nature, and people, closely interconnected and interacting. Vehicles, public transport, weather action, and sports activities constitute a manifold set of excitation and degradation sources for civil structures. In this context, operators should consider different factors in a holistic approach to assessing the structural health state. Vibration-based structural health monitoring (SHM) has demonstrated great potential as a decision-support tool for scheduling maintenance interventions. However, most excitation sources are considered an issue for practical SHM applications, since traditional methods are typically based on strict assumptions of input stationarity. Latest-generation low-cost sensors also present limitations related to modest sensitivity and a high noise floor compared to traditional instrumentation. If these devices are used for SHM in urban scenarios, short vibration recordings collected during high-intensity events and vehicle passages may be the only available datasets with a sufficient signal-to-noise ratio. While researchers have devoted considerable effort to mitigating the effects of short-term phenomena in vibration-based SHM, the ultimate goal of this thesis is to exploit them and obtain valuable information on the structural health state. First, this thesis proposes strategies and algorithms for smart sensors, operating individually or in a distributed computing framework, to identify damage-sensitive features based on instantaneous modal parameters and influence lines. Ordinary traffic and people's activities become essential sources of excitation, while human-powered vehicles, instrumented with smartphones, take the role of roving sensors in crowdsourced monitoring strategies. The technical and computational apparatus is optimized using in-memory computing technologies. Moreover, identifying additional local features can be particularly useful to support the damage assessment of complex structures. To this end, smart coatings are studied to enable self-sensing properties in ordinary structural elements. In this context, a machine-learning-aided tomography method is proposed to interpret the data provided by a nanocomposite paint interrogated electrically.
Abstract:
Whole Exome Sequencing (WES) is rapidly becoming the first-tier test in clinics, thanks both to its declining costs and to the development of new platforms that help clinicians analyse and interpret SNVs and InDels. However, we still know very little about how CNV detection could increase the WES diagnostic yield. A plethora of exome CNV callers have been published over the years, all showing good performance on specific CNV classes and sizes, suggesting that a combination of multiple tools is needed to obtain good overall detection performance. Here we present TrainX, an ML-based method for calling heterozygous CNVs in WES data using EXCAVATOR2 Normalized Read Counts. We select male and female non-pseudoautosomal chromosome X alignments to construct our dataset and train our model, make predictions on autosomal target regions, and use an HMM to call CNVs. We compared TrainX against a set of CNV tools differing in detection method (GATK4 gCNV, ExomeDepth, DECoN, CNVkit and EXCAVATOR2) and found that our algorithm outperformed them in terms of stability, as we identified both deletions and duplications with good scores (0.87 and 0.82 F1-scores, respectively) down to the minimum resolution of 2 target regions. We also evaluated the method's robustness on a set of WES and SNP-array data (n = 251), part of the Italian cohort of the Epi25 collaborative, and were able to retrieve all clinical CNVs previously identified by the SNP array. TrainX showed good accuracy in detecting heterozygous CNVs of different sizes, making it a promising tool for use in a diagnostic setting.
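The deletion and duplication scores quoted above are F1-scores, the harmonic mean of precision and recall over the CNV calls. A minimal sketch of the computation, using invented call counts rather than the paper's confusion matrix:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall for a set of CNV calls."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented counts for illustration: 87 correct deletion calls,
# 13 spurious calls, 13 missed events.
print(round(f1_score(tp=87, fp=13, fn=13), 2))  # 0.87
```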
Abstract:
The accurate representation of the Earth Radiation Budget by General Circulation Models (GCMs) is a fundamental requirement for providing reliable historical and future climate simulations. In this study, we found reasonable agreement on a global scale between the integrated energy fluxes at the top of the atmosphere simulated by 34 state-of-the-art climate models and the observations provided by the Cloud and Earth Radiant Energy System (CERES) mission, but large regional biases were detected throughout the globe. Furthermore, we highlighted that good agreement between simulated and observed integrated Outgoing Longwave Radiation (OLR) fluxes may result from the cancellation of opposite-in-sign systematic errors localized in different spectral ranges. To avoid this, and to understand the causes of these biases, we compared the observed Earth emission spectra, measured by the Infrared Atmospheric Sounding Interferometer (IASI) in the period 2008-2016, with synthetic radiances computed on the basis of the atmospheric fields provided by the EC-Earth GCM. To this purpose, the fast σ-IASI radiative transfer model was used, after its validation and implementation in EC-Earth. From the comparison between observed and simulated spectral radiances, a positive temperature bias in the stratosphere and a negative temperature bias in the middle troposphere, as well as a dry bias in the water vapor concentration in the upper troposphere, were identified in the EC-Earth climate model. The analysis was performed in clear-sky conditions, but the feasibility of extending it to the presence of clouds, whose impact on the radiation represents the greatest source of uncertainty in climate models, has also been demonstrated. Finally, the analysis of simulated and observed OLR trends indicated good agreement and provided detailed information on the spectral fingerprints of the evolution of the main climate variables.
Abstract:
The cation chloride cotransporters (CCCs) represent a vital family of ion transporters, with several members implicated in significant neurological disorders. Specifically, conditions such as cerebrospinal fluid accumulation, epilepsy, Down’s syndrome, Asperger’s syndrome, and certain cancers have been attributed to various CCCs. This thesis delves into these pharmacological targets using advanced computational methodologies. I primarily employed GPU-accelerated all-atom molecular dynamics simulations, deep learning-based collective variables, enhanced sampling methods, and custom Python scripts for comprehensive simulation analyses. Our research predominantly centered on KCC1 and NKCC1 transporters. For KCC1, I examined its equilibrium dynamics in the presence/absence of an inhibitor and assessed the functional implications of different ion loading states. In contrast, our work on NKCC1 revealed its unique alternating access mechanism, termed the rocking-bundle mechanism. I identified a previously unobserved occluded state and demonstrated the transporter's potential for water permeability under specific conditions. Furthermore, I confirmed the actual water flow through its permeable states. In essence, this thesis leverages cutting-edge computational techniques to deepen our understanding of the CCCs, a family of ion transporters with profound clinical significance.
Abstract:
Investigations of the large-scale structure of our Universe provide extremely powerful tools to shed light on some of the open issues of the currently accepted Standard Cosmological Model. Until recently, constraining cosmological parameters from cosmic voids was almost infeasible, because the amount of data in void catalogues was not enough to ensure statistically relevant samples. The increasingly wide and deep fields in present and upcoming surveys have made cosmic voids promising probes, despite the fact that we do not yet have a unique and generally accepted definition for them. In this Thesis we address the two-point statistics of cosmic voids, in the very first attempt to model its features for cosmological purposes. To this end, we implement an improved version of the void power spectrum presented by Chan et al. (2014). We have been able to build an exceptionally robust method to tackle void clustering statistics, by proposing a functional form that is entirely based on first principles. We extract our data from a suite of high-resolution N-body simulations in both the LCDM and alternative modified gravity scenarios. To accurately compare the data to the theory, we calibrate the model by accounting for a free parameter in the void radius that enters the theory of void exclusion. We then constrain the cosmological parameters by means of a Bayesian analysis. As long as modified gravity effects are limited, our model is a reliable method for constraining the main LCDM parameters. By contrast, it cannot be used to model void clustering in the presence of stronger modifications of gravity. In future work, we will further develop our analysis of void clustering statistics, by testing our model on large, high-resolution simulations and on real data, also addressing void clustering in the halo distribution. Finally, we also plan to combine these constraints with those of other cosmological probes.
Abstract:
This dissertation analyzes the exploitation of the orbital angular momentum (OAM) of electromagnetic waves with large intelligent surfaces in the near-field region and line-of-sight conditions, in light of the holographic MIMO communication concept. Firstly, a characterization of the OAM-based communication problem is presented, and the relationship between OAM-carrying waves and communication modes is discussed. Then, practicable strategies for OAM detection using large intelligent surfaces, and optimization methods based on beam focusing, are proposed. Numerical results characterize the effectiveness of OAM with respect to other strategies, including the proposed detection and optimization methods. It is shown that OAM waves constitute a particular choice of communication modes, i.e., an alternative basis set, which is sub-optimal with respect to the optimal basis functions that can be derived by solving eigenfunction problems. Moreover, even the joint use of OAM waves with focusing strategies leads to the conclusion that no channel-capacity gains can be obtained with these transmission techniques.
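The claim that OAM beams are just one, generally sub-optimal, basis can be illustrated with a toy numerical experiment: discretize two facing surfaces, build the free-space line-of-sight channel matrix, and take its SVD. The right singular vectors are the optimal communication modes, and any fixed basis such as OAM can at best match their gains. The geometry, wavelength, and grid size below are invented for illustration; this is the generic eigenfunction/SVD argument, not the thesis's actual model:

```python
import numpy as np

# Toy near-field LOS channel between two parallel discretized surfaces.
wavelength = 0.01             # assumed 30 GHz carrier, metres
k = 2 * np.pi / wavelength
n = 16                        # elements per surface (4x4 grid)
side, dist = 0.2, 0.5         # aperture side and surface separation (m)

grid = np.linspace(-side / 2, side / 2, 4)
tx = np.array([(x, y, 0.0) for x in grid for y in grid])
rx = np.array([(x, y, dist) for x in grid for y in grid])

# Scalar free-space Green's function between every tx/rx element pair.
H = np.empty((n, n), dtype=complex)
for i, r in enumerate(rx):
    for j, t in enumerate(tx):
        d = np.linalg.norm(r - t)
        H[i, j] = np.exp(-1j * k * d) / (4 * np.pi * d)

# The singular vectors of H are the optimal communication modes; the
# singular values are each mode's coupling strength. A fixed basis
# (e.g. OAM beams) can only redistribute these gains, never exceed them.
gains = np.linalg.svd(H, compute_uv=False)
print("strongest/weakest mode gain ratio:", gains[0] / gains[-1])
```

The spread of the singular values indicates how many modes the near-field channel effectively supports; projecting onto a mismatched fixed basis mixes modes and therefore cannot increase capacity.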
Abstract:
The transport system is one of the most important components to be chosen in the design of an automatic machine. A wide variety of choices can be made in picking this element, each with its own strengths and drawbacks. When elaborate behaviour is required from the transport system, flexible and advanced solutions are worth considering. Among these transport systems, the newest is the Beckhoff XPlanar. This system exploits magnetic levitation to move passive magnetic movers on a fully customizable plane in an entirely contact-free way. This provides fast, clean, and noiseless motion, which is extremely desirable in a modern automatic machine. The purpose of this Thesis is to analyse the potential and the problems of this new device, starting from the basics. After a detailed presentation of the topic, the hardware components needed to build the system are analysed. Then, the concepts needed to build a controller for this system are studied. After that, the various types of motion are studied and executed and, later on, experiments on the real kit are carried out. These studies start from diagnostics and continue with analyses that test the limits of the transport system. In performing these analyses, it was noticed that the kit has some problems in reaching its dynamic limits. Finally, two different types of station cycle are implemented, which give a rough idea of the potential of this new advanced transport system.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
A new flow procedure based on multicommutation with chemiluminometric detection was developed to quantify gentamicin sulphate in pharmaceutical formulations. The approach is based on gentamicin's ability to inhibit the chemiluminometric reaction between luminol and hypochlorite in alkaline medium, causing a decrease in the analytical signal. The inhibition of the analytical signal is proportional to the concentration of gentamicin sulphate within a linear range of 1 to 4 µg mL⁻¹, with a coefficient of variation <3%. A sample throughput of 55 samples h⁻¹ was obtained. The developed method is sensitive, simple, reproducible, and inexpensive, with low reagent consumption; when applied to the analysis of pharmaceutical formulations (eye drops and injections), it gave results with RSD between 1.10 and 4.40%.
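The coefficient of variation quoted above is the relative standard deviation of replicate measurements of a standard. A minimal sketch with invented chemiluminescence readings (not the paper's data):

```python
import statistics

# Hypothetical replicate chemiluminescence readings for one gentamicin
# standard (arbitrary units; invented for illustration).
replicates = [102.1, 100.4, 101.7, 99.8, 100.9, 101.2]

mean = statistics.mean(replicates)
cv = 100 * statistics.stdev(replicates) / mean  # coefficient of variation, %
print(f"CV = {cv:.1f}%")  # well below the 3% acceptance threshold
```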
Abstract:
Using differential x-ray absorption spectroscopy (DiffXAS), we have measured and quantified the intrinsic, atomic-scale magnetostriction of Fe₈₁Ga₁₉. By exploiting the chemical selectivity of DiffXAS, the Fe and Ga local environments were assessed individually. The enhanced magnetostriction induced by the addition of Ga to Fe was found to originate from the Ga environment, where λ^(γ,2) (≈ (3/2)λ₁₀₀) is 390 ± 40 ppm. In this environment, ⟨001⟩ Ga-Ga pair defects were found to exist, which mediate the magnetostriction by inducing large strains in the surrounding Ga-Fe bonds. For the first time, intrinsic, chemically selective magnetostrictive strain has been measured and quantified at the atomic level, allowing true comparison with theory.
Abstract:
Due to the widespread, multipurpose use of document images and the current availability of a large number of document image repositories, robust information retrieval mechanisms and systems are increasingly in demand. This paper presents an approach to support the automatic generation of relationships among document images by exploiting Latent Semantic Indexing (LSI) and Optical Character Recognition (OCR). We developed the LinkDI (Linking of Document Images) service, which extracts and indexes document image content, computes its latent semantics, and defines relationships among images as hyperlinks. LinkDI was evaluated on document image repositories, and its performance was assessed by comparing the quality of the relationships created among textual documents with those created among their respective document images. On the same document images, we ran further experiments comparing the performance of LinkDI with and without the LSI technique. Experimental results showed that LSI can mitigate the effects of common OCR misrecognition, which reinforces the feasibility of LinkDI even when relating highly degraded OCR output.
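The LSI step can be sketched as a truncated SVD of a term-document count matrix; projecting documents into the resulting latent space is what smooths over OCR misrecognitions. The vocabulary and counts below are invented for illustration, following the generic LSI recipe rather than LinkDI's actual implementation:

```python
import numpy as np

# Tiny term-document count matrix (rows: terms, cols: documents).
# Vocabulary and counts are invented.
A = np.array([
    [2, 1, 0, 0],   # "image"
    [1, 2, 0, 0],   # "document"
    [0, 0, 2, 1],   # "retrieval"
    [0, 1, 1, 2],   # "index"
], dtype=float)

# LSI: a rank-k truncated SVD projects documents into a latent space
# where noisy term variation (e.g. from OCR errors) is smoothed out.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
docs_latent = (np.diag(s[:k]) @ Vt[:k]).T   # one row per document

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Documents 0 and 1 share vocabulary, so their latent vectors align
# far more closely than those of documents 0 and 2.
print(cosine(docs_latent[0], docs_latent[1]))
```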
Abstract:
Many natural populations exploiting a wide range of resources are actually composed of relatively specialized individuals. This interindividual variation is thought to be a consequence of the invasion of 'empty' niches in depauperate communities, generally in temperate regions. If individual niches are constrained by functional trade-offs, the expansion of the population niche is only achieved by an increase in interindividual variation, consistent with the 'niche variation hypothesis'. According to this hypothesis, we should not expect interindividual variation in species belonging to highly diverse, packed communities. In the present study, we measured the degree of interindividual diet variation in four species of frogs of the highly diverse Brazilian Cerrado, using both gut contents and δ¹³C stable isotopes. We found evidence of significant diet variation in all four species, indicating that this phenomenon is not restricted to depauperate communities in temperate regions. The lack of correlation between the frogs' morphology and diet indicates that the trade-offs do not depend on the morphological characters measured here and are probably not biomechanical. The nature of the trade-offs remains unknown, but they are likely to be cognitive or physiological. Finally, we found a positive correlation between the population niche width and the degree of diet variation, but a null model showed that this correlation can be generated by individuals sampling randomly from a common set of resources. Therefore, although consistent with the niche variation hypothesis, our results cannot be taken as evidence in its favour.
Abstract:
A flow system designed with solenoid micro-pumps is proposed for fast and greener spectrophotometric determination of free glycerol in biodiesel. Glycerol was extracted from samples without using organic solvents. The determination involves glycerol oxidation by periodate, yielding formaldehyde, followed by formation of the colored product (3,5-diacetyl-1,4-dihydrolutidine) upon reaction with acetylacetone. The coefficient of variation, sampling rate, and detection limit were estimated as 1.5% (20.0 mg L⁻¹ glycerol, n = 10), 34 h⁻¹, and 1.0 mg L⁻¹ (99.7% confidence level), respectively. A linear response was observed from 5 to 50 mg L⁻¹, with reagent consumption estimated as 345 µg of KIO₄ and 15 mg of acetylacetone per determination. The procedure was successfully applied to the analysis of biodiesel samples, and the results agreed with the batch reference method at the 95% confidence level. (C) 2010 Elsevier B.V. All rights reserved.
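A detection limit quoted at the 99.7% confidence level conventionally means three standard deviations of the blank signal divided by the calibration slope. A sketch with invented blank readings and slope, not the paper's data:

```python
import statistics

# Hypothetical blank absorbance replicates and calibration slope
# (values invented for illustration).
blank = [0.0021, 0.0019, 0.0023, 0.0020, 0.0018, 0.0022]
slope = 0.0006   # absorbance per mg L^-1 of glycerol (assumed)

s_blank = statistics.stdev(blank)
lod = 3 * s_blank / slope   # 3-sigma (99.7%) detection limit, mg L^-1
print(f"LOD = {lod:.2f} mg L^-1")
```

With these made-up numbers the estimate comes out on the same order as the 1.0 mg L⁻¹ reported above, which is the point of the 3σ convention.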
Abstract:
A green and highly sensitive analytical procedure was developed for the determination of free chlorine in natural waters, based on the reaction with N,N-diethyl-p-phenylenediamine (DPD). The flow system was designed with solenoid micro-pumps in order to improve mixing conditions through pulsed flows and to minimize reagent consumption as well as waste generation. A 100-cm optical path flow cell based on a liquid-core waveguide was employed to increase sensitivity. A linear response was observed within the range 10.0 to 100.0 µg L⁻¹, with the detection limit, coefficient of variation, and sampling rate estimated as 6.8 µg L⁻¹ (99.7% confidence level), 0.9% (n = 20), and 60 determinations per hour, respectively. The consumption of the most toxic reagent (DPD) was reduced 20,000-fold and 30-fold in comparison to the batch method and to flow injection with continuous reagent addition, respectively. The results for natural and tap water samples agreed with those obtained by the reference batch spectrophotometric procedure at the 95% confidence level. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
The determination of uric acid in urine is of clinical importance, since it can be related to dysfunctions of the human organism, such as gout. An analytical procedure employing a multicommuted flow system was developed for the determination of uric acid in urine samples. Cu(II) ions are reduced by uric acid to Cu(I), which can be quantified by spectrophotometry in the presence of 2,2′-biquinoline-4,4′-dicarboxylic acid (BCA). The analytical response was linear between 10 and 100 µmol L⁻¹ uric acid, with a detection limit of 3.0 µmol L⁻¹ (99.7% confidence level). A coefficient of variation of 1.2% and a sampling rate of 150 determinations per hour were achieved. Per determination, 32 µg of CuSO₄ and 200 µg of BCA were consumed, generating 2.0 mL of waste. Recoveries from 91 to 112% were estimated, and the results for 7 urine samples agreed with those obtained by a commercially available enzymatic kit for the determination of uric acid. The procedure required a 100-fold dilution of urine samples, minimizing sample consumption and interfering effects. To avoid the manual dilution step, on-line sample dilution was achieved by a simple system reconfiguration, attaining a sampling rate of 95 h⁻¹. (C) 2009 Elsevier B.V. All rights reserved.