915 results for "Automated estimator"
Abstract:
Bahadur representation and its applications have attracted a large number of publications and presentations on a wide variety of problems. Mixing dependence is weak enough to describe the dependence structure of random variables, including observations in time series and longitudinal studies. This note proves the Bahadur representation of sample quantiles for strongly mixing random variables (including ρ-mixing and φ-mixing) under very weak conditions on the mixing coefficients. As an application, asymptotic normality is derived. These results greatly improve those recently reported in the literature.
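The result can be sketched in standard notation (F the common distribution function, f its density, \xi_p = F^{-1}(p) the p-th quantile, F_n the empirical distribution function, \hat\xi_{p,n} the sample quantile); the exact remainder rate and the mixing-coefficient conditions are those of the note and are not reproduced here:

```latex
% Bahadur representation of the sample quantile
\hat{\xi}_{p,n} \;=\; \xi_p \;+\; \frac{p - F_n(\xi_p)}{f(\xi_p)} \;+\; R_n,
\qquad R_n \to 0 \ \text{a.s.}

% Asymptotic normality then follows from the CLT for the mixing
% indicator sequence \mathbf{1}\{X_i \le \xi_p\}:
\sqrt{n}\,\bigl(\hat{\xi}_{p,n} - \xi_p\bigr)
  \xrightarrow{\;d\;} N\!\left(0,\ \frac{\sigma^2}{f(\xi_p)^2}\right),
\quad
\sigma^2 = p(1-p) + 2\sum_{k=1}^{\infty}
  \operatorname{Cov}\!\bigl(\mathbf{1}\{X_1 \le \xi_p\},\,
                            \mathbf{1}\{X_{1+k} \le \xi_p\}\bigr).
```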
Abstract:
The Exhibitium Project, awarded by the BBVA Foundation, is a data-driven project developed by an international consortium of research groups. One of its main objectives is to build a prototype that will serve as a base to produce a platform for the recording and exploitation of data about art exhibitions available on the Internet. Therefore, our proposal aims to expose the methods, procedures and decision-making processes that have governed the technological implementation of this prototype, especially with regard to the reuse of WordPress (WP) as a development framework.
Abstract:
This dissertation describes the development of a label-free, electrochemical immunosensing platform integrated into a low-cost microfluidic system for the sensitive, selective and accurate detection of cortisol, a steroid hormone correlated with many physiological disorders. Abnormal levels of cortisol are indicative of conditions such as Cushing’s syndrome, Addison’s disease, adrenal insufficiencies and, more recently, post-traumatic stress disorder (PTSD). Electrochemical detection of immuno-complex formation is utilized for the sensitive detection of cortisol using anti-cortisol antibodies immobilized on sensing electrodes. Electrochemical techniques such as cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) have been utilized for the characterization and label-free sensing of cortisol. The use of nanomaterials as the immobilizing matrix for anti-cortisol antibodies, which leads to improved sensor response, has been explored. A hybrid nanocomposite of polyaniline-Ag/AgO film has been fabricated onto an Au substrate using electrophoretic deposition for electrochemical immunosensing of cortisol. Using a conventional 3-electrode electrochemical cell, a linear sensing range of 1 pM to 1 µM, a sensitivity of 66 µA/M and a detection limit of 0.64 pg/mL have been demonstrated for the detection of cortisol. Alternatively, a self-assembled monolayer (SAM) of dithiobis(succinimidyl propionate) (DTSP) has been fabricated to modify the sensing electrode and immobilize anti-cortisol antibodies. To increase the sensitivity at a lower detection limit and to develop a point-of-care sensing platform, the DTSP-SAM has been fabricated on micromachined interdigitated microelectrodes (µIDE). Detection of cortisol is demonstrated at a sensitivity of 20.7 µA/M and a detection limit of 10 pg/mL over a linear sensing range of 10 pM to 200 nM using the µIDEs.
A simple, low-cost microfluidic system is designed using low-temperature co-fired ceramics (LTCC) technology for the integration of the electrochemical cortisol immunosensor and the automation of the immunoassay. For the first time, the non-specific adsorption of analyte on LTCC has been characterized for microfluidic applications. The design, fabrication technique and fluidic characterization of the immunoassay are presented. The DTSP-SAM based electrochemical immunosensor on µIDE is integrated into the LTCC microfluidic system, and cortisol detection is achieved in the microfluidic system in a fully automated assay. The fully automated microfluidic immunosensor holds great promise for accurate, sensitive detection of cortisol in point-of-care applications.
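A sensor with a linear response can be read back with simple arithmetic: concentration is the measured current change divided by the calibration sensitivity. A minimal sketch, using the µIDE sensitivity reported above; the function name and the example current are illustrative, not from the dissertation:

```python
# Back-calculate analyte concentration from a linear amperometric
# calibration I = I0 + S * C, i.e. C = dI / S.

def concentration_from_current(delta_i_amps, sensitivity_amps_per_molar):
    """Invert the linear calibration: concentration in mol/L."""
    return delta_i_amps / sensitivity_amps_per_molar

S_UIDE = 20.7e-6      # A/M, sensitivity reported for the uIDE sensor
delta_i = 2.07e-12    # A, a hypothetical measured current change
c = concentration_from_current(delta_i, S_UIDE)
print(c)  # 1e-07 M (100 nM), inside the 10 pM - 200 nM linear range
```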
Abstract:
One of the main unresolved questions in science is how non-living matter became alive, a process known as abiogenesis, which aims to explain how, from a primordial-soup scenario containing simple molecules and following a ``bottom-up'' approach, complex biomolecules emerged and formed the first living system, known as a protocell. A protocell is defined by the interplay of three sub-systems considered requirements for life: information molecules, metabolism, and compartmentalization. This thesis investigates the role of compartmentalization during the emergence of life, how simple membrane aggregates could evolve into entities able to develop ``life-like'' behaviours, and in particular how such evolution could happen without the presence of information molecules. Our ultimate objective is to create an autonomous evolvable system. To do so, we try to engineer life following a ``top-down'' approach: an initial platform capable of evolving chemistry is constructed, with the chemistry dependent on its robotic adjunct, and the platform is then de-constructed in iterative operations until it is fully disconnected from the evolvable system, which then becomes inherently autonomous. The first project of this thesis describes how the initial platform was designed and built. The platform was based on the model of a standard liquid-handling robot, the main difference with respect to similar robots being that we used a 3D printer to prototype the robot and build its main equipment, such as the liquid dispensing system, the tool movement mechanism, and the washing procedures. The robot was able to mix different components and create populations of droplets in a Petri dish filled with an aqueous phase. The Petri dish was then observed by a camera, which analysed the behaviours exhibited by the droplets and fed this information back to the robot.
Using this loop, the robot was able to implement an evolutionary algorithm in which populations of droplets were evolved towards defined life-like behaviours. The second project of this thesis aimed to remove as many mechanical parts as possible from the robot while keeping the evolvable chemistry intact. To do so, we encapsulated the functionalities of the previous liquid-handling robot into a single monolithic 3D-printed device. This device was able to mix different components and generate populations of droplets in an aqueous phase, and it was also equipped with a camera to analyse the experiments. Moreover, because the devices were fabricated entirely in a 3D printer, we were also able to alter the experimental arena by adding different obstacles among which to evolve the droplets, enabling us to study how environmental changes can shape evolution. By doing so, we were able to embody evolutionary characteristics in our device, removing constraints from the physical platform and taking one step forward towards a possible autonomous evolvable system.
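The camera-to-robot loop described above has the shape of a classic generational evolutionary algorithm. A minimal sketch under stated assumptions: the recipes, the mutation operator and the fitness function below are stand-ins (in the real platform, fitness comes from video analysis of droplet behaviour, not from a formula):

```python
import random

def fitness(recipe):
    # Placeholder objective: in the robot this score comes from the
    # camera's analysis of droplet behaviour. Here, higher is better
    # when every component ratio is near 0.5.
    return -sum((x - 0.5) ** 2 for x in recipe)

def mutate(recipe, rate=0.1):
    # Perturb each component ratio, clamped to the valid range [0, 1].
    return [min(1.0, max(0.0, x + random.uniform(-rate, rate))) for x in recipe]

def evolve(pop_size=20, n_components=4, generations=30, seed=0):
    random.seed(seed)
    pop = [[random.random() for _ in range(n_components)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # rank by observed behaviour
        survivors = pop[: pop_size // 2]          # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(best)  # the best-scoring droplet recipe found
```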
Abstract:
Objectives: CO2-EVAR has been proposed for the treatment of AAA, especially in patients with CKD. Issues regarding standardization, such as visualization of the lowest renal artery (LoRA) and the image quality of angiographies performed from a pigtail catheter or introducer sheath, remain unsolved. The aim of the study was to analyze the different steps of CO2-EVAR in order to create an operative protocol standardizing the procedure. Methods: Patients undergoing CO2-EVAR were prospectively enrolled in 5 European centers (2018-2021). CO2-EVAR was performed using an automated injector. LoRA visualization and image quality (scored 1-4) were analyzed and compared at different procedure steps: preoperative CO2 angiography from pigtail/introducer sheath (1st step), angiographies from the pigtail at 0%, 50% and 100% main body (MB) deployment (2nd step), contralateral hypogastric artery (CHA) visualization with CO2 injection from the femoral introducer sheath (3rd step), and completion angiogram from pigtail/introducer sheath (4th step). Intra-/postoperative adverse events were evaluated. Results: Sixty-five patients undergoing CO2-EVAR were enrolled, 55/65 (84.5%) male, median age 75 (11.5) years. Median ICM was 20 (54) cc; 19/65 (29.2%) procedures were performed with zero iodine. 1st step: median image quality was significantly higher with CO2 injected from the femoral introducer [pigtail 2(3) vs. 3(3) introducer, p=.008]. 2nd step: LoRA was detected more frequently at 50% (93% vs. 73.2%, p=.002) and 100% (94.1% vs. 78.4%, p=.01) of MB deployment compared with the first angiography from the pigtail; image quality was significantly higher at 50% [3(3) vs. 2(3), p<.001] and 100% [4(3) vs. 2(3), p=.001] of MB deployment. CHA was detected in 93% of cases (3rd step). Mean image quality was significantly higher when the final angiogram (4th step) was performed from the introducer (pigtail 2.6±1.1 vs. 3.1±0.9 introducer, p<.001). Rates of intra- and postoperative adverse events (pain, vomiting, diarrhea) were 7.7% and 12.5%.
Conclusions: Preimplant CO2 angiography should be performed from the introducer sheath. The steric bulk of the MB during its deployment should be used to improve image quality and LoRA visualization with CO2. The CHA can be satisfactorily visualized with CO2. The completion CO2 angiogram should be performed from the femoral introducer sheath. This operative protocol makes it possible to perform CO2-EVAR with minimal ICM and a low rate of mild complications.
Abstract:
Modern networks are undergoing a fast and drastic evolution, with software taking a more predominant role. Virtualization and cloud-like approaches are replacing physical network appliances, reducing the management burden on operators. Furthermore, networks now expose programmable interfaces for fast and dynamic control over traffic forwarding. This evolution is backed by standards organizations such as ETSI, 3GPP, and IETF. This thesis will describe the main trends in this evolution. Then, it will present solutions developed during the three years of the Ph.D. to exploit the capabilities these new technologies offer and to study their possible limitations, in order to push the state of the art further. Namely, it will deal with programmable network infrastructure, introducing the concept of Service Function Chaining (SFC) and presenting two possible solutions, one with OpenStack and OpenFlow and the other using Segment Routing and IPv6. Then, it will continue with network service provisioning, presenting concepts from Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC). These concepts will be applied to network slicing for mission-critical communications and the Industrial IoT (IIoT). Finally, it will deal with network abstraction, with a focus on Intent-Based Networking (IBN). To summarize, the thesis will include solutions for data plane programming with evaluation on well-known platforms, performance metrics on virtual resource allocations, a novel practical application of network slicing to mission-critical communications, an architectural proposal and its implementation for edge technologies in Industrial IoT scenarios, and a formal definition of intent using a category theory approach.
Abstract:
In recent years, IoT technology has radically transformed many crucial industrial and service sectors such as healthcare. The multi-faceted heterogeneity of the devices and of the collected information provides important opportunities to develop innovative systems and services. However, the ubiquitous presence of data silos and the poor semantic interoperability in the IoT landscape constitute a significant obstacle in the pursuit of this goal. Moreover, achieving actionable knowledge from the collected data requires IoT information sources to be analysed using appropriate artificial intelligence techniques such as automated reasoning. In this thesis work, Semantic Web technologies have been investigated as an approach to address both the data integration and the reasoning aspects of modern IoT systems. In particular, the contributions presented in this thesis are the following: (1) the IoT Fitness Ontology, an OWL ontology developed to overcome the issue of data silos and enable semantic interoperability in the IoT fitness domain; (2) a Linked Open Data web portal for collecting and sharing IoT health datasets with the research community; (3) a novel methodology for embedding knowledge in rule-defined IoT smart-home scenarios; and (4) a knowledge-based IoT home automation system that supports seamless integration of heterogeneous devices and data sources.
Abstract:
Nowadays, technological advancements have pushed industry and research towards the automation of various processes. Automation brings a reduction in costs and an improvement in product quality. For this reason, companies are pushing research to investigate new technologies. The agriculture industry has always looked towards automating various processes, from product processing to storage. In recent years, the automation of the harvest and cultivation phases has also become attractive, driven by advances in autonomous driving. Nevertheless, ADAS systems alone are not enough: merging different technologies will be the way to achieve total automation of agricultural processes. For example, sensors that estimate products' physical and chemical properties can be used to evaluate the maturation level of fruit. Therefore, the fusion of these technologies plays a key role in industrial process automation. In this dissertation, both ADAS systems and sensors for precision agriculture are treated. Several measurement procedures for characterizing commercial 3D LiDARs are proposed and tested to cope with the growing need for comparison tools. Axial and transversal errors have been investigated. Moreover, a measurement method and setup for evaluating the effect of fog on 3D LiDARs are proposed. Each presented measurement procedure has been tested, and the obtained results highlight the versatility and effectiveness of the proposed approaches. Regarding precision-agriculture sensors, a measurement approach for estimating the Moisture Content and density of crops directly in the field is presented. The approach employs a Near-Infrared (NIR) spectrometer together with Partial Least Squares (PLS) statistical analysis. The approach and the model are described together with a first laboratory prototype used to evaluate the NIRS approach. Finally, a prototype for in-field analysis is realized and tested.
The test results are promising, showing that the proposed approach is suitable for Moisture Content and density estimation.
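The NIR-plus-PLS pipeline described above can be sketched with a minimal PLS1 (NIPALS) regression. This is a hedged illustration, not the dissertation's model: the "spectra" below are synthetic random vectors, and the real calibration is built from field measurements.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS PLS1: returns regression coefficients for centred data."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, q = [], [], []
    Xr, yr = X.copy(), y.copy()
    for _ in range(n_components):
        w = Xr.T @ yr                 # weight: covariance direction with y
        w /= np.linalg.norm(w)
        t = Xr @ w                    # score
        p = Xr.T @ t / (t @ t)        # X loading
        qk = yr @ t / (t @ t)         # y loading
        Xr = Xr - np.outer(t, p)      # deflate
        yr = yr - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.inv(P.T @ W) @ q   # coefficients in centred space

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 50))               # 40 synthetic "spectra", 50 wavelengths
true_b = np.zeros(50); true_b[[5, 20]] = [2.0, -1.0]
y = X @ true_b + rng.normal(scale=0.01, size=40)   # stand-in "moisture content"
B = pls1_fit(X, y, n_components=5)
pred = (X - X.mean(axis=0)) @ B + y.mean()
print(np.corrcoef(pred, y)[0, 1])           # close to 1 on this easy example
```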
Abstract:
Day by day, machine learning is changing our lives in ways we could not have imagined just 5 years ago. ML expertise is increasingly requested and needed, yet only a limited number of ML engineers are available on the job market, and their knowledge is always limited by an inherent characteristic: they are human. This thesis explores the possibilities offered by meta-learning, a new field of ML that takes learning one level higher: models are trained on other models' training metadata, from features of the dataset they were trained on to inference times and obtained performances, to try to understand the relationship between a good model and the way it was obtained. The so-called metamodel was trained on data collected from OpenML, the largest publicly available ML metadata platform today. Datasets were analyzed to obtain meta-features that describe them, which were then tied to model performances in a regression task. The obtained metamodel predicts the expected performance of a given model type (e.g., a random forest) on a given ML task (e.g., classification on the UCI census dataset). This research was then integrated into a custom-made AutoML framework to show that meta-learning is not an end in itself, but can be used to further progress ML research. Encoding ML engineering expertise in a model allows better, faster, and more impactful ML applications across the whole world, while reducing the cost that is inevitably tied to human engineers.
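The core idea, a regression from dataset meta-features to expected model performance, can be sketched in a few lines. A hedged toy version: the meta-features and "observed scores" below are synthetic stand-ins for OpenML metadata, and a linear metamodel is used for brevity where a real system might use gradient-boosted trees:

```python
import numpy as np

rng = np.random.default_rng(42)
# Meta-features per dataset: [log n_rows, n_features, class balance]
meta_X = rng.uniform(low=[2, 5, 0.5], high=[6, 100, 1.0], size=(200, 3))
# Synthetic "observed AUC" of one model family on each dataset
meta_y = (0.6 + 0.05 * meta_X[:, 0] / 6 + 0.2 * meta_X[:, 2]
          + rng.normal(scale=0.02, size=200))

# Fit the linear metamodel by least squares (intercept via a ones column)
A = np.hstack([meta_X, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(A, meta_y, rcond=None)

# Predict performance on an unseen task from its meta-features alone
new_task = np.array([4.0, 30.0, 0.8, 1.0])
print(new_task @ coef)  # expected score, before running any training
```

The payoff mirrors the thesis: the metamodel ranks candidate model families for a new task without paying the cost of training each one.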
Abstract:
Diabetic Retinopathy (DR) is a complication of diabetes that can lead to blindness if not readily discovered. Automated screening algorithms have the potential to improve identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions, such as microaneurysms, cotton-wool spots and hard exudates. BoVW makes it possible to bypass the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques for the identification of each type of lesion. An extensive evaluation of the BoVW model was performed using three large retinographic datasets (DR1, DR2 and Messidor) with different resolutions, collected by different healthcare personnel. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to use a different algorithm for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor and on mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions) under a cross-dataset validation protocol. For detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2 ± 2.0%, outperforming current methods.
These results indicate that, for retinal image classification tasks in clinical practice, BoVW equals and, in some instances, surpasses results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors.
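The mid-level step of the scheme, coding local descriptors against a visual codebook and max-pooling them into one image vector, can be sketched as below. A hedged illustration: the descriptors and codebook are random stand-ins (the paper uses SURF descriptors and semi-soft coding; plain Gaussian soft assignment is used here for brevity):

```python
import numpy as np

def bovw_encode(descriptors, codebook, sigma=1.0):
    """Soft-assign each descriptor to visual words, then max-pool per word."""
    # Squared distance of every descriptor to every codebook word
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = np.exp(-d2 / (2 * sigma ** 2))            # soft assignment
    codes /= codes.sum(axis=1, keepdims=True)          # normalize per descriptor
    return codes.max(axis=0)                           # max pooling over the image

rng = np.random.default_rng(1)
codebook = rng.normal(size=(200, 32))     # 200 visual words, 32-D descriptors
descriptors = rng.normal(size=(100, 32))  # sparse local features of one image
v = bovw_encode(descriptors, codebook)
print(v.shape)  # one fixed-length vector per image, ready for the classifier
```

The fixed-length vector is what the maximum-margin classifier consumes, regardless of how many local features each image produced.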
Abstract:
To evaluate associations between polymorphisms of the N-acetyltransferase 2 (NAT2), human 8-oxoguanine glycosylase 1 (hOGG1) and X-ray repair cross-complementing protein 1 (XRCC1) genes and the risk of upper aerodigestive tract (UADT) cancer, a case-control study involving 117 cases and 224 controls was undertaken. The NAT2 gene polymorphisms were genotyped by automated sequencing, and the XRCC1 Arg399Gln and hOGG1 Ser326Cys polymorphisms were determined by Polymerase Chain Reaction followed by Restriction Fragment Length Polymorphism (PCR-RFLP) methods. The slow-metabolization phenotype was significantly associated with risk of developing UADT cancer (p=0.038). Furthermore, the slow-metabolization haplotype was also associated with UADT cancer (p=0.014). The hOGG1 Ser326Cys polymorphism (CG or GG vs. CC genotypes) was shown to be a protective factor against UADT cancer in moderate smokers (p=0.031). The XRCC1 Arg399Gln polymorphism (GA or AA vs. GG genotypes), in turn, was a protective factor against UADT cancer only among never-drinkers (p=0.048). Interactions involving the NAT2, XRCC1 Arg399Gln and hOGG1 Ser326Cys polymorphisms may modulate the risk of UADT cancer in this population.
Abstract:
The aim was to describe the outcomes of neonatal hearing screening (NHS) and audiological diagnosis in neonates in the NICU. The sample was divided into Group I, neonates who underwent NHS in one step, and Group II, neonates who underwent test and retest NHS. The NHS procedure was the automated auditory brainstem response. NHS was performed in 82.1% of surviving neonates. For GI, the referral rate was 18.6% and the false-positive rate was 62.2% (normal hearing at the diagnostic stage). In GII, with retest, the referral rate dropped to 4.1% and the false-positive rate to 12.5%. Sensorineural hearing loss was found in 13.2% of infants and conductive hearing loss in 26.4% of cases. There was one case of auditory neuropathy spectrum (1.9%). The dropout rate over the whole process was 21.7% for GI and 24.03% for GII. We concluded that it was not possible to perform universal NHS in the studied sample or, in many cases, to apply it within the first month of life. The retest reduced the failure and false-positive rates and did not increase evasion, indicating that it is a recommendable step in NHS programs in the NICU. The incidence of hearing loss was 2.9%, considering sensorineural hearing loss (0.91%), conductive (1.83%) and auditory neuropathy spectrum (0.19%).
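The two headline rates have simple definitions that are worth making explicit. A small sketch; the counts below are hypothetical numbers chosen only to reproduce Group II-style percentages, not figures taken from the study:

```python
def referral_rate(n_referred, n_screened):
    """Share of screened infants referred for diagnostic evaluation."""
    return n_referred / n_screened

def false_positive_rate(n_normal_at_diagnosis, n_referred):
    """Share of referred infants found to have normal hearing at diagnosis."""
    return n_normal_at_diagnosis / n_referred

# A test-retest protocol refers only infants who fail twice, so far
# fewer reach the diagnostic stage unnecessarily.
print(referral_rate(8, 195))       # ~0.041, i.e. ~4.1% referral
print(false_positive_rate(1, 8))   # 0.125, i.e. 12.5% false positives
```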
Abstract:
The aim of this study was to develop a methodology using Raman hyperspectral imaging and chemometric methods for the identification of pre- and post-blast explosive residues on banknote surfaces. The explosives studied were of military, commercial and propellant uses. After acquisition of the hyperspectral image, independent component analysis (ICA) was applied to extract the pure spectra and the distributions of the corresponding image constituents. The performance of the methodology was evaluated by the explained variance and the lack of fit of the models, by comparing the ICA-recovered spectra with reference spectra using correlation coefficients, and by the presence of rotational ambiguity in the ICA solutions. The methodology was applied to forensic samples to solve an automated teller machine explosion case. Independent component analysis proved to be a suitable curve-resolution method, achieving performance equivalent to that of multivariate curve resolution with alternating least squares (MCR-ALS). At low concentrations, however, MCR-ALS showed some limitations, as it did not provide the correct solution. The detection limit of the methodology presented in this study was 50 μg cm⁻².
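The unmixing step can be illustrated with a minimal FastICA (whitening followed by a symmetric fixed-point iteration with a tanh nonlinearity). A hedged sketch, not the paper's implementation: the "spectra" are synthetic Gaussian bands standing in for Raman signatures, and the recovered sources here are the per-pixel constituent abundances:

```python
import numpy as np

def fastica(X, n_components, n_iter=200, seed=0):
    """Minimal symmetric FastICA on rows-as-samples data X."""
    X = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    K = (Vt[:n_components] / s[:n_components, None]) * np.sqrt(X.shape[0])
    Z = X @ K.T                                   # whitened data, unit covariance
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_components, n_components))
    for _ in range(n_iter):
        G = np.tanh(Z @ W.T)
        W_new = G.T @ Z / len(Z) - np.diag((1 - G ** 2).mean(axis=0)) @ W
        Uw, _, Vtw = np.linalg.svd(W_new)         # symmetric decorrelation
        W = Uw @ Vtw
    return Z @ W.T                                # estimated independent sources

wavenumbers = np.linspace(0, 1, 200)
s1 = np.exp(-((wavenumbers - 0.3) ** 2) / 0.002)  # "explosive" band
s2 = np.exp(-((wavenumbers - 0.7) ** 2) / 0.002)  # "banknote" band
rng = np.random.default_rng(1)
A = rng.uniform(0.2, 1.0, size=(500, 2))          # per-pixel abundances
X = A @ np.vstack([s1, s2])                       # mixed pixel spectra
S_hat = fastica(X, n_components=2)
print(S_hat.shape)  # two recovered abundance maps across 500 pixels
```

Up to the usual sign and permutation ambiguity, the recovered sources correlate with the true abundances; the rotational-ambiguity check mentioned above is exactly about diagnosing when this recovery is not unique.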