885 results for Artificial Information Models
Abstract:
This paper addressed the problem of water-demand forecasting for real-time operation of water supply systems. The study was conducted to identify the best-fit model using hourly consumption data from the water supply system of Araraquara, São Paulo, Brazil. Artificial neural networks (ANNs) were used in view of their capability to match or even improve on regression model forecasts. The ANNs used were the multilayer perceptron with the back-propagation algorithm (MLP-BP), the dynamic neural network (DAN2), and two hybrid ANNs. The hybrid models used the error produced by the Fourier series forecast as input to the MLP-BP and DAN2, called ANN-H and DAN2-H, respectively. The candidate inputs for the neural networks were selected from the literature and from correlation analysis. The results from the hybrid models were promising, with DAN2 performing better than the tested MLP-BP models. DAN2-H, identified as the best model, produced a mean absolute error (MAE) of 3.3 L/s and 2.8 L/s for the training and test sets, respectively, for the prediction of the next hour, which represented about 12% of the average consumption. The best forecasting model for the next 24 hours was again DAN2-H, which outperformed the other compared models and produced an MAE of 3.1 L/s and 3.0 L/s for the training and test sets, respectively, again about 12% of the average consumption. DOI: 10.1061/(ASCE)WR.1943-5452.0000177. (C) 2012 American Society of Civil Engineers.
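The error measure and the hybrid-model input scheme described above can be sketched as follows. The demand values and Fourier coefficients below are hypothetical, and the forecasting networks themselves are omitted; this is a minimal sketch of the idea, not the paper's implementation:

```python
import math

def mean_absolute_error(observed, predicted):
    """MAE in the same units as the data (here L/s)."""
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

def fourier_forecast(t, period=24, a0=25.0, a1=5.0, b1=3.0):
    """One-harmonic Fourier series fit of the daily demand cycle
    (coefficients are illustrative, not from the paper)."""
    w = 2 * math.pi / period
    return a0 + a1 * math.cos(w * t) + b1 * math.sin(w * t)

# Hypothetical hourly demand (L/s). The hybrid models feed the residual
# of the Fourier forecast to the ANN alongside lagged consumption values.
observed = [24.0, 27.5, 31.0, 28.2]
residuals = [obs - fourier_forecast(t) for t, obs in enumerate(observed)]
ann_inputs = list(zip(observed, residuals))  # (lagged demand, Fourier error)

print(round(mean_absolute_error([3.0, 4.0], [3.3, 3.6]), 2))  # 0.35
```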
Abstract:
Walking on irregular surfaces and in the presence of unexpected events is a challenging problem for bipedal machines. To date, their ability to cope with gait disturbances is far less successful than that of humans: neither trajectory-controlled robots nor dynamic walking machines (Limit Cycle Walkers) are able to handle them satisfactorily. Humans, in contrast, reject gait perturbations naturally and efficiently, relying on their sensory organs to elicit a recovery action when needed. A similar approach may be envisioned for bipedal robots and exoskeletons: an algorithm continuously observes the state of the walker and, if an unexpected event happens, triggers an adequate reaction. This paper presents a monitoring algorithm that provides immediate detection of any type of perturbation based solely on a phase representation of the normal walking of the robot. The proposed method was evaluated on a Limit Cycle Walker prototype subjected to push and trip perturbations at different moments of the gait cycle, providing 100% successful detections for the current experimental apparatus with adequately tuned parameters, and no false positives while the robot walked unperturbed.
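The monitoring idea (compare each sensor reading against the nominal value stored for the current gait phase, and raise an alarm when it leaves a tolerance band) can be sketched minimally. The nominal samples, tolerance value and nearest-sample decision rule here are illustrative assumptions, not the paper's actual algorithm:

```python
# Minimal sketch of a phase-based gait monitor: the nominal sensor
# reading is stored as a function of gait phase, and a reading that
# deviates beyond a tolerance band triggers a perturbation alarm.

NOMINAL = {0.0: 0.10, 0.25: 0.45, 0.5: 0.80, 0.75: 0.45}  # phase -> reading
TOLERANCE = 0.15  # band half-width, tuned per robot (assumed value)

def is_perturbed(phase, reading):
    """Return True if the reading leaves the nominal band at this phase."""
    # Use the closest stored phase sample (a real monitor would interpolate).
    nearest = min(NOMINAL, key=lambda p: abs(p - phase))
    return abs(reading - NOMINAL[nearest]) > TOLERANCE

print(is_perturbed(0.5, 0.82))   # within band -> False
print(is_perturbed(0.5, 1.30))   # push-like deviation -> True
```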
Abstract:
Over the last decade, Brazil has pioneered an innovative model of branchless banking, known as correspondent banking, involving distribution partnerships between banks, several kinds of retailers and a variety of other participants. This model has allowed unprecedented growth in bank outreach and has become a reference worldwide. However, despite the extensive number of studies recently devoted to Brazilian branchless banking, a clear research gap remains in the literature: it is still necessary to identify the different business configurations, involving network integration, through which the branchless banking channel can be structured, as well as the way they relate to the range of bank services delivered. Given this gap, our objective is to investigate the relationship between network integration models and services delivered through the branchless banking channel. Based on twenty interviews with managers involved in the correspondent banking business and data collected on almost 300 correspondent locations, our research proceeds in two steps. First, we created a qualitative taxonomy through which we identified three classes of network integration models. Second, we performed a cluster analysis to explain the groups of financial services that fit each model. By contextualizing correspondents' network integration processes through the lens of transaction cost economics, our results suggest that the more suited the channel is to delivering social-oriented, "pro-poor" services, the more it is controlled by banks. This research offers contributions to managers and policy makers interested in better understanding how different correspondent banking configurations are related to specific portfolios of services. Researchers interested in branchless banking can also benefit from the taxonomy presented and from the transaction cost analysis of this kind of banking channel, which has now been adopted in a number of developing countries worldwide. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
In this work, we study the performance evaluation of resource-aware business process models. We define a new framework that allows the generation of analytical models for performance evaluation from business process models annotated with resource management information. This framework is composed of a new notation that allows the specification of resource management constraints and a method to convert a business process specification and its resource constraints into Stochastic Automata Networks (SANs). We show that the analysis of the generated SAN model provides several performance indices, such as average throughput of the system, average waiting time, average queue size, and utilization rate of resources. Using the BP2SAN tool - our implementation of the proposed framework - and a SAN solver (such as the PEPS tool), we show through a simple use case how a business specialist with no skills in stochastic modeling can easily obtain performance indices that, in turn, can help to identify bottlenecks in the model, perform workload characterization, define the provisioning of resources, and study other performance-related aspects of the business process.
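The performance indices named above (throughput, waiting time, queue size, utilization) can be illustrated on the simplest analytical queueing model. This M/M/1 sketch is a toy stand-in for the numerical solution of a SAN, not what BP2SAN or PEPS actually compute:

```python
def mm1_indices(arrival_rate, service_rate):
    """Closed-form indices for an M/M/1 queue: a toy analogue of the
    indices a SAN solver would compute numerically."""
    rho = arrival_rate / service_rate  # utilization of the resource
    assert rho < 1, "queue must be stable"
    return {
        "utilization": rho,
        "avg_throughput": arrival_rate,  # all arrivals eventually served
        "avg_waiting_time": rho / (service_rate - arrival_rate),  # in queue
        "avg_queue_size": rho * rho / (1 - rho),  # jobs waiting (Little's law)
    }

indices = mm1_indices(arrival_rate=4.0, service_rate=5.0)
print(indices["utilization"])  # 0.8
```

Little's law ties the last two indices together: average queue size equals arrival rate times average waiting time (4.0 x 0.8 = 3.2 here).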
Abstract:
Background: Accurate malaria diagnosis is mandatory for the treatment and management of severe cases. Moreover, individuals with asymptomatic malaria are not usually screened by health care facilities, which further complicates disease control efforts. The present study compared the performance of a malaria rapid diagnostic test (RDT), the thick blood smear method and nested PCR for the diagnosis of symptomatic malaria in the Brazilian Amazon. In addition, an innovative computational approach was tested for the diagnosis of asymptomatic malaria. Methods: The study was divided into two parts. In the first part, passive case detection was performed in 311 individuals with malaria-related symptoms from a recently urbanized community in the Brazilian Amazon. A cross-sectional investigation compared the diagnostic performance of the RDT Optimal-IT, nested PCR and light microscopy. The second part of the study involved active case detection of asymptomatic malaria in 380 individuals from riverine communities in Rondônia, Brazil. The performances of microscopy, nested PCR and an expert computational system based on artificial neural networks (MalDANN) using epidemiological data were compared. Results: Nested PCR was taken as the gold standard for the diagnosis of both symptomatic and asymptomatic malaria because it detected the greatest number of cases and presented the maximum specificity. Surprisingly, the RDT was superior to microscopy in the diagnosis of cases with low parasitaemia. Nevertheless, the RDT could not discriminate the Plasmodium species in 12 cases of mixed infection (Plasmodium vivax + Plasmodium falciparum). Moreover, microscopy performed poorly in the detection of asymptomatic cases (61.25% correct diagnoses), and the MalDANN system using epidemiological data alone performed worse than light microscopy (56% correct diagnoses). However, when information on plasma levels of interleukin-10 and interferon-gamma was included as input, MalDANN performance increased considerably (80% correct diagnoses). Conclusions: An RDT for malaria diagnosis may find promising use in the Brazilian Amazon as part of a rational diagnostic approach. Despite the low performance of MalDANN using epidemiological data alone, an approach based on neural networks may be feasible where simpler methods are available for discriminating individuals below and above threshold cytokine levels.
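The diagnostic comparisons above rest on sensitivity and specificity computed against the nested PCR gold standard. A minimal sketch with hypothetical counts (not the study's data):

```python
def diagnostic_performance(results):
    """Sensitivity and specificity of a test against a gold standard
    (here nested PCR, as in the study); inputs are (test, gold) pairs."""
    tp = sum(1 for test, gold in results if test and gold)
    tn = sum(1 for test, gold in results if not test and not gold)
    fp = sum(1 for test, gold in results if test and not gold)
    fn = sum(1 for test, gold in results if not test and gold)
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true cases detected
        "specificity": tn / (tn + fp),  # fraction of non-cases cleared
        "accuracy": (tp + tn) / len(results),
    }

# Hypothetical (test_positive, pcr_positive) pairs:
pairs = [(True, True)] * 45 + [(False, True)] * 5 + \
        [(False, False)] * 40 + [(True, False)] * 10
perf = diagnostic_performance(pairs)
print(perf["sensitivity"])  # 0.9
```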
Abstract:
Background: To understand the molecular mechanisms underlying important biological processes, a detailed description of the gene product networks involved is required. In order to define and understand such molecular networks, several statistical methods have been proposed in the literature to estimate gene regulatory networks from time-series microarray data. However, several problems still need to be overcome. First, information flow needs to be inferred, in addition to the correlation between genes. Second, we usually try to identify large networks with a large number of genes (parameters) from a smaller number of microarray experiments (samples). In this situation, which is rather frequent in bioinformatics, it is difficult to perform statistical tests using methods that model large gene-gene networks. In addition, most models rely on dimension reduction using clustering techniques; the resulting network is therefore not a gene-gene network but a module-module network. Here, we present the Sparse Vector Autoregressive (SVAR) model as a solution to these problems. Results: We applied the Sparse Vector Autoregressive model to estimate gene regulatory networks from gene expression profiles obtained in time-series microarray experiments. Through extensive simulations applying the SVAR method to artificial regulatory networks, we show that SVAR can infer true positive edges even when the number of samples is smaller than the number of genes. Moreover, it is possible to control for false positives, a significant advantage over other methods described in the literature, which are based on ranks or score functions. By applying SVAR to actual HeLa cell cycle gene expression data, we were able to identify well-known transcription factor targets.
Conclusion The proposed SVAR method is able to model gene regulatory networks in frequent situations in which the number of samples is lower than the number of genes, making it possible to naturally infer partial Granger causalities without any a priori information. In addition, we present a statistical test to control the false discovery rate, which was not previously possible using other gene regulatory network models.
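A minimal illustration of the VAR idea behind SVAR: fit a first-order vector autoregression and sparsify it, so that the surviving coefficients define directed (Granger-causal) edges. This is an illustrative stand-in using plain least squares plus hard thresholding, and a longer series than microarray data would provide; the paper's penalized estimator is precisely what handles the samples-smaller-than-genes regime:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth sparse VAR(1): gene j regulates gene i iff A_true[i, j] != 0.
A_true = np.array([[0.8, 0.0, 0.0],
                   [0.5, 0.6, 0.0],
                   [0.0, 0.0, 0.7]])

# Simulate a time series. (A longer series than a real microarray study
# is used so that plain least squares recovers the edges.)
T, p = 200, 3
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(p)

# Least-squares VAR(1) fit followed by hard thresholding, a simple
# stand-in for the sparse (penalized) estimation in the paper.
Y, Z = X[1:], X[:-1]
A_hat = np.linalg.lstsq(Z, Y, rcond=None)[0].T
A_sparse = np.where(np.abs(A_hat) > 0.2, A_hat, 0.0)

# Directed edge j -> i wherever a coefficient survives.
edges = {(i, j) for i in range(p) for j in range(p) if A_sparse[i, j] != 0.0}
print(sorted(edges))
```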
Abstract:
This study compared dentine demineralization induced by in vitro and in situ models, and correlated dentine surface hardness (SH), cross-sectional hardness (CSH) and mineral content measured by transverse microradiography (TMR). Bovine dentine specimens (n = 15/group) were demineralized in vitro with the following: MC gel (6% carboxymethylcellulose gel and 0.1 M lactic acid, pH 5.0, 14 days); buffer I (0.05 M acetic acid solution with calcium, phosphate and fluoride, pH 4.5, 7 days); buffer II (0.05 M acetic acid solution with calcium and phosphate, pH 5.0, 7 days); and TEMDP (0.05 M lactic acid with calcium, phosphate and tetraethyl methyl diphosphonate, pH 5.0, 7 days). In the in situ study, 11 volunteers wore palatal appliances containing 2 bovine dentine specimens, protected with a plastic mesh to allow biofilm development. The volunteers dripped a 20% sucrose solution on each specimen 4 times a day for 14 days. In vitro and in situ lesions were analyzed using TMR and statistically compared by ANOVA. TMR and CSH/SH data were submitted to regression and correlation analysis (p < 0.05). The in situ model produced a deep lesion with a high R value but a thin surface layer. Among the in vitro models, MC gel produced only a shallow lesion, while buffers I and II as well as TEMDP induced a pronounced subsurface lesion with deep demineralization. The relationship between CSH and TMR was weak and not linear. The artificial dentine carious lesions induced by the different models differed significantly, which in turn might influence further de- and remineralization processes. Hardness analysis should not be interpreted in terms of dentine mineral loss.
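The correlation step reported above amounts to computing a correlation coefficient between paired hardness and mineral-loss measurements. A minimal sketch with hypothetical values (a weak, non-linear relation such as the one found would yield a low |r|):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two paired series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired measurements: cross-sectional hardness (CSH)
# versus TMR mineral loss (values invented for illustration).
csh = [35.0, 42.0, 38.0, 55.0, 47.0]
tmr = [2800, 2400, 3100, 2500, 2600]
print(round(pearson_r(csh, tmr), 2))
```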
Abstract:
This paper presents theoretical models (analytical formulations) to predict the mechanical behavior of thick composite tubes and shows how certain parameters influence this behavior. First, analytical formulations were developed for a pressurized tube made of composite material with a single thick ply and one lamination angle. For this case, the stress distribution and the displacement fields are investigated as functions of lamination angle and reinforcement volume fraction. The results obtained with the theoretical model are physically consistent and coherent with the literature. The formulations are then extended to predict the mechanical behavior of a thick laminated tube. Both analytical formulations are implemented as a computational tool in Matlab. The results obtained with the computational tool are compared with finite element analyses, and the stress distributions are found to be coherent. Moreover, the engineering computational tool is used to perform failure analysis with different failure criteria, identifying the damaged ply and the failure mode.
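As a point of reference for the stress distributions discussed above, the classical Lamé solution for an isotropic thick-walled cylinder under internal pressure can be sketched. It is the isotropic baseline that anisotropic laminated formulations generalize, not the paper's composite model; the geometry and pressure below are assumed values:

```python
def lame_stresses(r, a, b, p_internal):
    """Radial and hoop stress (Pa) at radius r in a thick-walled
    isotropic cylinder of inner radius a and outer radius b under
    internal pressure p_internal (classical Lame solution)."""
    k = p_internal * a**2 / (b**2 - a**2)
    sigma_r = k * (1 - b**2 / r**2)      # radial stress (= -p at inner wall)
    sigma_theta = k * (1 + b**2 / r**2)  # hoop stress (maximum at inner wall)
    return sigma_r, sigma_theta

# Tube with inner radius 50 mm, outer radius 70 mm, 10 MPa internal pressure.
sr_in, st_in = lame_stresses(r=0.050, a=0.050, b=0.070, p_internal=10e6)
print(round(sr_in / 1e6, 1), round(st_in / 1e6, 1))  # -10.0 30.8
```

At the inner wall the radial stress equals minus the applied pressure, a standard sanity check on the formulas.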
Abstract:
We consider the Shannon mutual information of subsystems of critical quantum chains in their ground states. Our results indicate a universal leading behavior for large subsystem sizes. Moreover, as happens with the entanglement entropy, its finite-size behavior yields the conformal anomaly c of the underlying conformal field theory governing the long-distance physics of the quantum chain. We study analytically a chain of coupled harmonic oscillators and numerically the Q-state Potts models (Q = 2, 3, and 4), the XXZ quantum chain, and the spin-1 Fateev-Zamolodchikov model. The Shannon mutual information is an easily computed quantity, and our results indicate that, even for relatively small lattice sizes, its finite-size behavior already detects the universality class of the quantum critical behavior.
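The Shannon mutual information used above is indeed easy to compute from a joint distribution over subsystem configurations, via I(A:B) = H(A) + H(B) - H(A,B). A minimal sketch on a toy two-spin distribution (not one of the paper's models):

```python
import math
from collections import defaultdict

def shannon_entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I(A:B) = H(A) + H(B) - H(A,B) for a joint distribution given as
    {(a, b): probability} over subsystem configurations."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return (shannon_entropy(pa.values()) + shannon_entropy(pb.values())
            - shannon_entropy(joint.values()))

# Toy example: perfectly correlated subsystems share exactly one bit.
joint = {("up", "up"): 0.5, ("down", "down"): 0.5}
print(mutual_information(joint))  # 1.0
```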
Abstract:
Background. The surgical treatment of dysfunctional hips is a severe condition for the patient and a costly therapy for public health. Hip resurfacing techniques seem to hold the promise of various advantages over traditional THR, particularly for young and active patients. Although the lesson provided in the past by many branches of engineering is that success in designing competitive products can be achieved only by predicting the possible scenarios of failure, to date implant quality is poorly addressed pre-clinically, and revision remains the only delayed but reliable end point for assessment. The aim of the present work was to model the musculoskeletal system so as to develop a protocol for predicting failure of hip resurfacing prostheses. Methods. Preliminary studies validated the technique for the generation of subject-specific finite element (FE) models of long bones from Computed Tomography data. The proposed protocol consisted of the numerical analysis of prosthesis biomechanics through deterministic and statistical studies, so as to assess the risk of biomechanical failure under the different operative conditions the implant might face in a population of interest during various activities of daily living. Physiological conditions were defined including the variability of the anatomy, bone densitometry, surgical uncertainties and published boundary conditions at the hip. The protocol was tested by analysing a successful design on the market and a new prototype of a resurfacing prosthesis. Results. The intrinsic accuracy of the models' bone stress predictions (RMSE < 10%) was aligned with the current state of the art in this field. The accuracy of prediction of the bone-prosthesis contact mechanics was also excellent (< 0.001 mm). The sensitivity of the models' predictions to uncertainties in modelling parameters was below 8.4%. The analysis of the successful design showed very good agreement with published retrospective studies. The geometry optimisation of the new prototype led to a final design with a low risk of failure. The statistical analysis confirmed the minimal risk of the optimised design over the entire population of interest. The performance of the optimised design showed a significant improvement with respect to the first prototype (+35%). Limitations. In the authors' opinion, the major limitation of this study lies in the boundary conditions. The muscular forces and the hip joint reaction were derived from the few data available in the literature, which can be considered significant but hardly representative of the entire variability of boundary conditions the implant might face over the patient population. This moved the focus of the research onto modelling the musculoskeletal system; the ongoing activity is to develop subject-specific musculoskeletal models of the lower limb from medical images. Conclusions. The developed protocol was able to accurately predict known clinical outcomes when applied to a well-established device, and to support the design optimisation phase by providing important information on critical characteristics of the patients when applied to a new prosthesis. The presented approach has a generality that would allow the extension of the protocol to a large set of orthopaedic scenarios with minor changes. Hence, failure mode analysis can be considered a suitable tool in developing new orthopaedic devices.
Abstract:
This paper describes a wildfire forecasting application based on a 3D virtual environment and a fire simulation engine. A novel open-source framework is presented for the development of 3D graphics applications over large geographic areas, offering high-performance 3D visualization and powerful interaction tools for the Geographic Information Systems (GIS) community. The application includes a remote module that allows simultaneous connection of several users for monitoring a real wildfire event.
Abstract:
The frequent use of predictive models for the analysis of complex systems, natural or artificial, is changing the traditional approach to environmental and risk problems. Continuous improvement in computer processing power facilitates the use and solution of numerical methods based on a space-time discretization, allowing the predictive modelling of complex real systems, reproducing the evolution of their spatial patterns and quantifying the accuracy of the simulation. In this thesis, we present an application of different predictive methods (Geomatics, Neural Networks, Land Cover Modeler and Dinamica EGO) in a test area of Petén, Guatemala. During recent decades this region, part of the Maya Biosphere Reserve, has experienced rapid population growth and uncontrolled pressure on its natural resources. The test area can be divided into sub-regions characterized by different land-use dynamics. Understanding and quantifying these differences allows a better approximation of the real system; it is also necessary to integrate all physical and socio-economic parameters for a more complete representation of the complexity of human impact. Given the absence of detailed information about the study area, almost all data were derived from the processing of 11 ETM+, TM and SPOT images; we then carried out a multitemporal analysis of past land-use changes and built the input to feed the predictive models. Data from 1998 and 2000 were used for the calibration phase to simulate the land-cover changes of 2003, chosen as the reference date for validating the results. The validation highlights the strengths and limits of each model in the different sub-regions.
Abstract:
Final-year undergraduate project (Trabajo Fin de Grado) for the double degree in Computer Engineering (Grado en Ingeniería Informática) and Business Administration and Management (Grado en Administración y Dirección de Empresas).
Abstract:
The motivation for the work presented in this thesis is to retrieve profile information for the atmospheric trace constituents nitrogen dioxide (NO2) and ozone (O3) in the lower troposphere from remote sensing measurements. The remote sensing technique used, referred to as Multiple AXis Differential Optical Absorption Spectroscopy (MAX-DOAS), is a recent technique that represents a significant advance on the well-established DOAS, especially as concerns the study of tropospheric trace constituents. NO2 is an important trace gas in the lower troposphere because it is involved in the production of tropospheric ozone; ozone and nitrogen dioxide are key factors in determining air quality, with consequences, for example, for human health and the growth of vegetation. To understand NO2 and ozone chemistry in more detail, not only the concentrations at the ground but also the vertical distribution must be acquired. In fact, the budget of nitrogen oxides and ozone in the atmosphere is determined both by local emissions and by non-local chemical and dynamical processes (i.e. diffusion and transport at various scales) that greatly affect their vertical and temporal distribution: a tool to resolve the vertical profile is therefore very important. Useful measurement techniques for atmospheric trace species should fulfil at least two main requirements. First, they must be sufficiently sensitive to detect the species under consideration at ambient concentration levels. Second, they must be specific, which means that the result of the measurement of a particular species must be neither positively nor negatively influenced by any other trace species simultaneously present in the probed volume of air. Air monitoring by spectroscopic techniques has proven to be a very useful tool to fulfil these requirements, along with a number of other important properties.
During the last decades, many such instruments have been developed based on the absorption properties of atmospheric constituents in various regions of the electromagnetic spectrum, ranging from the far infrared to the ultraviolet. Among them, Differential Optical Absorption Spectroscopy (DOAS) has played an important role. DOAS is an established remote sensing technique for probing atmospheric trace gases, which identifies and quantifies them by taking advantage of their molecular absorption structures at near-UV and visible wavelengths of the electromagnetic spectrum (from 0.25 μm to 0.75 μm). Passive DOAS, in particular, can detect the presence of a trace gas in terms of its concentration integrated over the atmospheric path from the sun to the receiver (the so-called slant column density). The receiver can be located at the ground, as well as on board an aircraft or a satellite platform. Passive DOAS therefore has a flexible measurement configuration that allows multiple applications. The ability to properly interpret passive DOAS measurements of atmospheric constituents depends crucially on how well the optical path of the light collected by the system is understood. This is because the final product of DOAS is the concentration of a particular species integrated along the path that radiation covers in the atmosphere. This path is not known a priori and can only be evaluated by Radiative Transfer Models (RTMs). These models are used to calculate the so-called vertical column density of a given trace gas, obtained by dividing the measured slant column density by the so-called air mass factor, which quantifies the enhancement of the light path length within the absorber layers. In the standard DOAS set-up, in which radiation is collected along the vertical direction (zenith-sky DOAS), calculations of the air mass factor have been made using "simple" single-scattering radiative transfer models.
This configuration has its highest sensitivity in the stratosphere, in particular during twilight. This is the result of the large enhancement of the stratospheric light path at dawn and dusk combined with a relatively short tropospheric path. In order to increase the sensitivity of the instrument to tropospheric signals, measurements with the telescope pointing towards the horizon (off-axis DOAS) have to be performed. In these circumstances, the light path in the lower layers can become very long and necessitates the use of radiative transfer models including multiple scattering and the full treatment of atmospheric sphericity and refraction. In this thesis, a recent development of the well-established DOAS technique is described, referred to as Multiple AXis Differential Optical Absorption Spectroscopy (MAX-DOAS). MAX-DOAS consists of the simultaneous use of several off-axis directions near the horizon: with this configuration, not only is the sensitivity to tropospheric trace gases greatly improved, but vertical profile information can also be retrieved by combining the simultaneous off-axis measurements with sophisticated RTM calculations and inversion techniques. In particular, there is a need for an RTM capable of dealing with all the processes intervening along the light path, supporting all the DOAS geometries used, and treating multiple scattering events with the varying phase functions involved. To achieve these multiple goals, a statistical approach based on the Monte Carlo technique should be used. A Monte Carlo RTM generates an ensemble of random photon paths between the light source and the detector, and uses these paths to reconstruct a remote sensing measurement. Within the present study, the Monte Carlo radiative transfer model PROMSAR (PROcessing of Multi-Scattered Atmospheric Radiation) has been developed and used to correctly interpret the slant column densities obtained from MAX-DOAS measurements.
In order to derive the vertical concentration profile of a trace gas from its slant column measurement, the AMF is only one part in the quantitative retrieval process. One indispensable requirement is a robust approach to invert the measurements and obtain the unknown concentrations, the air mass factors being known. For this purpose, in the present thesis, we have used the Chahine relaxation method. Ground-based Multiple AXis DOAS, combined with appropriate radiative transfer models and inversion techniques, is a promising tool for atmospheric studies in the lower troposphere and boundary layer, including the retrieval of profile information with a good degree of vertical resolution. This thesis has presented an application of this powerful comprehensive tool for the study of a preserved natural Mediterranean area (the Castel Porziano Estate, located 20 km South-West of Rome) where pollution is transported from remote sources. Application of this tool in densely populated or industrial areas is beginning to look particularly fruitful and represents an important subject for future studies.
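The two retrieval steps described above, dividing the slant column by the air mass factor and iterating a Chahine-style relaxation, can be sketched on a toy diagonal forward model. The air mass factors and measured columns below are hypothetical, and a real retrieval would use RTM-computed (e.g. PROMSAR) air mass factors and realistic layer coupling:

```python
def vertical_column(slant_column, air_mass_factor):
    """DOAS retrieval step: vertical column density = measured slant
    column density / air mass factor."""
    return slant_column / air_mass_factor

def chahine_relax(measured, forward_model, profile, iterations=50):
    """Chahine-style relaxation sketch: rescale each element of the
    profile guess by the ratio of measured to forward-modelled slant
    columns until they agree."""
    for _ in range(iterations):
        modelled = forward_model(profile)
        profile = [x * m / f for x, m, f in zip(profile, measured, modelled)]
    return profile

# Toy forward model: slant column i is amf[i] * concentration[i]
# (a real model couples layers; this diagonal version is for illustration).
amf = [5.0, 3.0, 1.5]            # assumed per-geometry air mass factors
forward = lambda prof: [a * c for a, c in zip(amf, prof)]
measured = [10.0, 4.5, 1.8]      # hypothetical slant columns
retrieved = chahine_relax(measured, forward, profile=[1.0, 1.0, 1.0])
print([round(c, 3) for c in retrieved])  # [2.0, 1.5, 1.2]
```

On this diagonal toy model the relaxation converges in one step to measured/amf per layer; the iteration matters when the forward model couples layers.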
Abstract:
This thesis investigates combinatorial and robust optimisation models for solving railway problems. Railway applications represent a challenging area for operations research. In fact, most problems in this context can be modelled as combinatorial optimisation problems, in which the number of feasible solutions is finite. Yet, despite the astonishing success in the field of combinatorial optimisation, the current state of algorithmic research faces severe difficulties with highly complex and data-intensive applications such as those dealing with optimisation issues in large-scale transportation networks. One of the main issues concerns imperfect information. The idea of Robust Optimisation, as a way to represent and handle mathematically systems with imprecisely known data, dates back to the 1970s. Unfortunately, none of those techniques proved to be successfully applicable in one of the most complex and largest-scale (transportation) settings: that of railway systems. Railway optimisation deals with planning and scheduling problems over several time horizons. Disturbances are inevitable and severely affect the planning process. Here we focus on two compelling aspects of planning: robust planning and online (real-time) planning.