843 results for “Hybrid” implementation model
Abstract:
Localization and Mapping are two of the most important capabilities for autonomous mobile robots and have been receiving considerable attention from the scientific computing community over the last 10 years. One of the most efficient methods to address these problems is based on the use of the Extended Kalman Filter (EKF). The EKF simultaneously estimates a model of the environment (map) and the position of the robot based on odometric and exteroceptive sensor information. As this algorithm demands a considerable amount of computation, it is usually executed on high-end PCs coupled to the robot. In this work we present an FPGA-based architecture for the EKF algorithm that is capable of processing two-dimensional maps containing up to 1.8k features in real time (14 Hz), a three-fold improvement over a Pentium M 1.6 GHz and a 13-fold improvement over an ARM920T 200 MHz. The proposed architecture also consumes only 1.3% of the Pentium's and 12.3% of the ARM's energy per feature.
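For readers less familiar with the filter, the sketch below shows the generic EKF correction step that such an architecture must accelerate: the state vector stacks the robot pose and the 2-D feature coordinates, and each observed feature triggers one update. This is a plain NumPy illustration of the textbook algorithm, not the paper's FPGA design; the function names and the state layout are assumptions.

```python
import numpy as np

def ekf_update(x, P, z, R, h, H):
    """Generic EKF correction step (textbook form, not the FPGA design).

    x : state vector stacking the robot pose and the 2-D feature coordinates
    P : state covariance matrix
    z : measurement of one observed feature
    R : measurement noise covariance
    h : callable returning the predicted measurement h(x)
    H : callable returning the measurement Jacobian dh/dx evaluated at x
    """
    Hx = H(x)                              # linearize the observation model
    S = Hx @ P @ Hx.T + R                  # innovation covariance
    K = P @ Hx.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - h(x))             # corrected pose and map
    P_new = (np.eye(len(x)) - K @ Hx) @ P  # corrected covariance
    return x_new, P_new
```

The covariance correction touches every entry of P, so the cost per update grows quadratically with the number of mapped features, which is what makes a software EKF struggle at map sizes around 1.8k features and motivates dedicated hardware.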
Abstract:
Polysilsesquioxanes containing methacrylate pendant groups were prepared by the sol-gel process through hydrolysis and condensation of (3-methacryloxypropyl)trimethoxysilane (MPTS) dissolved in a methanol/methyl methacrylate (MMA) mixture. The effects of different water, MMA, and methanol contents, as well as of pH, on the nanoscopic and local structures of the system, at advanced stages of the condensation reaction, were studied by small-angle X-ray scattering (SAXS) and ²⁹Si nuclear magnetic resonance (NMR) spectroscopy, respectively. SAXS results indicate that the nanoscopic features of the hybrid sol could be described by a hierarchical model composed of two levels, namely (i) silsesquioxane (SSQO) nanoparticles surrounded by the methacrylate pendant groups and the methanol/MMA mixture, and (ii) aggregation zones or islands containing correlated SSQO nanoparticles, embedded in the liquid medium. The ²⁹Si NMR results show that the inner structures of SSQO nanoparticles produced at pH 1 and 3 were built up of polyhedral structures, mainly cage-like octamers and small linear oligomers, respectively. Irrespective of MMA and methanol contents, for a [H₂O]/[MPTS] ratio greater than or equal to 1, the SSQO nanoparticles produced at pH 1 exhibit an average condensation degree (CD ≈ 69-87%) and average radius of gyration (R_g ≈ 2.5 Å) larger than those produced at pH 3 (CD ≈ 48-67% and R_g ≈ 1.5 Å). Methanol appears to act as a redispersion agent, by decreasing the number of particles inside the aggregation zones, while the addition of MMA induces a swelling of the aggregation zones.
Abstract:
The Bullough-Dodd model is an important two-dimensional integrable field theory which finds applications in physics and geometry. We consider a conformally invariant extension of it, and study its integrability properties using a zero curvature condition based on the twisted Kac-Moody algebra A_2^(2). The one- and two-soliton solutions as well as the breathers are constructed explicitly. We also consider integrable extensions of the Bullough-Dodd model by the introduction of spinor (matter) fields. The resulting theories are conformally invariant and present local internal symmetries. All the one-soliton solutions, for two examples of those models, are constructed using a hybrid of the dressing and Hirota methods. One model is of particular interest because it presents a confinement mechanism for a given conserved charge inside the solitons. (C) 2008 Elsevier B.V. All rights reserved.
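For context, the equation of motion of the (non-conformal) Bullough-Dodd model can be written in light-cone coordinates as below; signs and coupling normalizations vary between references, and the conformal and matter-coupled extensions studied in the abstract modify this equation. This is a standard background formula, not a quotation from the paper.

```latex
% Bullough-Dodd (Tzitzeica) equation in light-cone coordinates, in one common
% normalization; signs and coupling constants differ between references.
\[
  \partial_{+}\partial_{-}\varphi \;=\; e^{\varphi} - e^{-2\varphi}.
\]
```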
Abstract:
In this article, we discuss inferential aspects of measurement error regression models with null intercepts when the unknown quantity x (a latent variable) follows a skew-normal distribution. We first examine the maximum-likelihood approach to estimation via the EM algorithm, exploring statistical properties of the model considered. Then, the marginal likelihood, the score function, and the observed information matrix of the observed quantities are presented, allowing direct implementation of inference. In order to discuss some diagnostic techniques for this type of model, we derive the appropriate matrices for assessing the local influence on the parameter estimates under different perturbation schemes. The results and methods developed in this paper are illustrated using part of a real data set from Hadgu and Koch [1999, Application of generalized estimating equations to a dental randomized clinical trial. Journal of Biopharmaceutical Statistics, 9, 161-178].
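As a reference point, one standard null-intercept measurement error formulation with a skew-normal latent covariate is sketched below; the article's exact specification (for example, replicated or multivariate responses) may differ, so this should be read as an illustrative template only.

```latex
% Illustrative null-intercept measurement error model with a skew-normal
% latent covariate; the article's exact specification may differ.
\begin{align*}
  y_i &= \beta x_i + e_i,   & e_i &\sim \mathrm{N}(0,\sigma_e^2),\\
  X_i &= x_i + u_i,         & u_i &\sim \mathrm{N}(0,\sigma_u^2),\\
  x_i &\sim \mathrm{SN}(\mu_x,\sigma_x^2,\lambda),
\end{align*}
% where x_i is the unobserved quantity and X_i its error-prone surrogate.
```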
Abstract:
The Birnbaum-Saunders (BS) model is a positively skewed statistical distribution that has received great attention in recent decades. A generalized version of this model, named the generalized BS (GBS) distribution, was derived based on symmetrical distributions on the real line. The R package named gbs was developed to analyze data from GBS models. This package contains probabilistic and reliability indicators and random number generators for GBS distributions. Parameter estimates for censored and uncensored data can also be obtained by means of likelihood methods from the gbs package. Goodness-of-fit and diagnostic methods were also implemented in this package in order to check the suitability of the GBS models. In this article, the capabilities and features of the gbs package are illustrated using simulated and real data sets. Shape and reliability analyses for GBS models are presented. A simulation study for evaluating the quality and sensitivity of the estimation method developed in the package is provided and discussed. (C) 2008 Elsevier B.V. All rights reserved.
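For reference, the classical Birnbaum-Saunders distribution with shape α and scale β has the cumulative distribution function below; the GBS family described in the abstract is obtained by replacing the standard normal CDF Φ with the CDF of another symmetric distribution. This is a standard textbook summary, not an excerpt from the gbs package documentation.

```latex
% Classical Birnbaum-Saunders CDF with shape alpha and scale beta; the GBS
% family replaces \Phi by the CDF of another symmetric distribution.
\[
  F_T(t) \;=\; \Phi\!\left(\frac{1}{\alpha}
      \left[\sqrt{\tfrac{t}{\beta}} - \sqrt{\tfrac{\beta}{t}}\,\right]\right),
  \qquad t > 0,\; \alpha > 0,\; \beta > 0.
\]
```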
Abstract:
Intermolecular associations between a cationic lipid and two model polymers were evaluated through the preparation and characterization of hybrid thin films cast on silicon wafers. The novel materials were prepared by spin-coating a chloroformic solution of lipid and polymer onto silicon wafers. The polymers tested for miscibility with the cationic lipid dioctadecyldimethylammonium bromide (DODAB) were polystyrene (PS) and poly(methyl methacrylate) (PMMA). The films thus obtained were characterized by ellipsometry, wettability, optical and atomic force microscopy, Fourier transform infrared spectroscopy (FTIR), differential scanning calorimetry (DSC), and activity against Escherichia coli. Whereas intermolecular ion-dipole interactions were available for the PMMA-DODAB pair, producing smooth PMMA-DODAB films, the absence of such interactions in PS-DODAB films caused lipid segregation, poor film stability (detachment from the silicon wafer), and high roughness. In addition, the well-established yet remarkable antimicrobial properties of DODAB were transferred to the novel hybrid PMMA/DODAB coating, which proved highly effective against E. coli.
Abstract:
This presentation will outline an effective model for a Hybrid Statistics course. The course continues to be very successful, incorporating online instruction, testing, blogs and, above all, a trajectory driven by a data analysis project that motivates students to engage more aggressively in the class and rise to the challenge of writing an original research paper. Obstacles, benefits, and successes of this endeavor will be addressed.
Abstract:
An intelligent algorithm is designed for a hybrid photovoltaic/fuel cell system using a battery source. Its main function is to automate the hybrid system so that it makes decisions according to the environmental conditions, utilizing photovoltaic/solar energy and, in its absence, fuel cell energy. To enhance the performance of the fuel cell and the photovoltaic cell, a battery bank is used, which acts as a buffer and supplies continuous current to the load. The main system was developed with a fuzzy-logic-based controller, chosen because it is feasible both for controlling the decision process and for predicting the availability of energy on the basis of the current photovoltaic and battery conditions. The intelligent algorithm is designed to optimize the performance of the system and to select the best available energy source(s) with regard to the input parameters; a further function of the intelligent controller is to predict the use of the available energy resources and switch on the appropriate source for efficient energy utilization. The fuzzy-logic-based controller is designed in the Matlab-Simulink environment: first the fuzzy rules were built, then a MATLAB-based simulation system was designed and implemented, and finally the whole proposed model was simulated and tested for the accuracy of its design and the performance of the system.
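The sketch below gives a deliberately simplified, crisp rule-based version of the source-selection policy described above, written in Python rather than Matlab-Simulink; the thresholds and the three-way rule set are hypothetical stand-ins for the paper's fuzzy rule base.

```python
def select_source(pv_power_w, battery_soc, load_w):
    """Crisp approximation of the kind of rule base described in the abstract.

    All thresholds are hypothetical; the paper implements a fuzzy (Mamdani-style)
    controller in Matlab-Simulink rather than this simplified Python logic.
    """
    if pv_power_w >= load_w:
        # Enough sun: run the load directly from PV (surplus can charge the battery)
        return "photovoltaic"
    if battery_soc > 0.4:
        # PV insufficient but battery healthy: the battery buffers the deficit
        return "photovoltaic+battery"
    # Little sun and a depleted battery: fall back on the fuel cell
    return "fuel_cell"

# Example: cloudy afternoon, half-charged battery, 500 W load
print(select_source(pv_power_w=120.0, battery_soc=0.5, load_w=500.0))
```

In the actual controller, the hard thresholds would be replaced by overlapping fuzzy membership functions and the decision by rule aggregation and defuzzification.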
Abstract:
The specification of Quality of Service (QoS) constraints over a software design requires measures that ensure such requirements are met by the delivered product. Achieving this goal is non-trivial, as it involves, at least, identifying how QoS constraint specifications should be checked at runtime. In this paper we present an implementation of a Model Driven Architecture (MDA) based framework for the runtime monitoring of QoS properties. We incorporate the UML2 superstructure and the UML profile for Quality of Service to provide abstract descriptions of component-and-connector systems. We then define transformations that refine the UML2 models to conform with the Distributed Management Task Force (DMTF) Common Information Model (CIM) (Distributed Management Task Force Inc. 2006), a schema standard for management and instrumentation of hardware and software. Finally, we provide a mapping from the CIM metamodel to a .NET-based metamodel for implementation of the monitoring infrastructure, utilising various .NET features including the Windows Management Instrumentation (WMI) interface.
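A bare-bones illustration of what a generated runtime monitor boils down to is sketched below: a QoS constraint (here a hypothetical response-time bound) checked against samples read from an instrumentation source. The sampling function is a stub standing in for a WMI/CIM query, and none of the names come from the paper's framework.

```python
import time
from typing import Callable, List

def monitor_response_time(sample_ms: Callable[[], float],
                          max_ms: float, samples: int = 10,
                          period_s: float = 1.0) -> List[float]:
    """Poll an instrumentation source and collect QoS violations.

    `sample_ms` stands in for a WMI/CIM query returning the current response
    time of a monitored component; the constraint and names are hypothetical.
    """
    violations = []
    for _ in range(samples):
        value = sample_ms()
        if value > max_ms:          # QoS constraint violated at this sample
            violations.append(value)
        time.sleep(period_s)
    return violations

# Example with a stubbed instrumentation source returning a constant 120 ms
print(monitor_response_time(lambda: 120.0, max_ms=200.0, samples=3, period_s=0.0))
```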
Abstract:
This project constructs a structural model of the United States economy. This task is tackled in two separate ways: first using econometric methods and then using a neural network, both with a structure that mimics the structure of the U.S. economy. The structural model tracks the performance of U.S. GDP rather well in a dynamic simulation, with an average error of just over 1 percent. The neural network performed well, but suffered from some theoretical as well as implementation issues.
Abstract:
The reliable evaluation of flood forecasts is a crucial problem for assessing flood risk and the consequent damages. Different hydrological models (distributed, semi-distributed, or lumped) have been proposed in order to deal with this issue. The choice of the proper model structure has been investigated by many authors and is one of the main sources of uncertainty in the correct evaluation of the outflow hydrograph. In addition, the recent increase in data availability makes it possible to update hydrological models in response to real-time observations. For these reasons, the aim of this work is to evaluate the effect of different structures of a semi-distributed hydrological model on the assimilation of distributed, uncertain discharge observations. The study was applied to the Bacchiglione catchment, located in Italy. The first methodological step was to divide the basin into different sub-basins according to topographic characteristics. Secondly, two different structures of the semi-distributed hydrological model were implemented in order to estimate the outflow hydrograph. Then, synthetic uncertain discharge observations were generated as a function of the observed and simulated flow at the basin outlet and assimilated into the semi-distributed models using a Kalman filter. Finally, different spatial patterns of sensor locations were assumed to update the model state in response to the uncertain discharge observations. The results of this work pointed out that, overall, the assimilation of uncertain observations can improve the hydrological model performance. In particular, the model structure was found to be an important factor, difficult to characterize, since it can induce different forecasts of the outflow discharge. This study is partly supported by the FP7 EU Project WeSenseIt.
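A minimal sketch of the synthetic-observation step is given below; the abstract only states that the uncertain discharge values are generated as a function of the observed and simulated flow at the basin outlet, so the equal-weight blend and the 15% relative error used here are illustrative assumptions, not the values of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_uncertain_discharge(q_observed, q_simulated, rel_error=0.15):
    """Generate one synthetic, uncertain discharge observation.

    The equal-weight blend of observed and simulated flow and the 15% relative
    error are illustrative assumptions; the study's actual formulation may differ.
    """
    q_ref = 0.5 * (q_observed + q_simulated)    # combine observed and simulated flow
    noise = rng.normal(0.0, rel_error * q_ref)  # observation error grows with flow
    return max(q_ref + noise, 0.0)              # discharge cannot be negative

# Example: 120 m3/s observed at the outlet, 100 m3/s simulated by the model
print(synthetic_uncertain_discharge(120.0, 100.0))
```

Each such observation is then assimilated into the sub-basin model states with a standard Kalman filter correction, analogous to the update step sketched for the EKF earlier in this listing.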
Abstract:
The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modelled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs, and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design where the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats when that is the preferred access arrangement for the researcher. By decoupling the data model from data persistence, it is much easier to use, for instance, relational databases interchangeably to provide stricter provenance and audit-trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate; a schema derived from the CF conventions has been designed to handle time series for SWIFT efficiently.
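The sketch below illustrates the kind of decoupling described: a small configuration record that knows nothing about disk formats, with interchangeable JSON and tab-separated back ends behind one interface. The class and field names are hypothetical, and the example is written in Python purely for illustration, independent of SWIFT's actual implementation language.

```python
import json
from dataclasses import dataclass, asdict
from typing import Protocol

@dataclass
class SubareaConfig:
    """Hypothetical stand-in for one model-configuration record."""
    subarea_id: str
    area_km2: float
    rainfall_runoff_model: str

class ConfigStore(Protocol):
    """Persistence interface: the data model never touches disk formats directly."""
    def save(self, cfg: SubareaConfig, path: str) -> None: ...

class JsonConfigStore:
    def save(self, cfg: SubareaConfig, path: str) -> None:
        with open(path, "w") as f:
            json.dump(asdict(cfg), f, indent=2)  # research-friendly JSON persistence

class TsvConfigStore:
    def save(self, cfg: SubareaConfig, path: str) -> None:
        row = asdict(cfg)
        with open(path, "w") as f:
            f.write("\t".join(row) + "\n")                             # header line
            f.write("\t".join(str(v) for v in row.values()) + "\n")    # legacy tab-separated row

# Either back end (or a relational-database one, for operational provenance
# requirements) can be swapped in without changing the data model itself.
store: ConfigStore = JsonConfigStore()
store.save(SubareaConfig("subarea-1", 42.0, "illustrative-model"), "subarea-1.json")
```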
Abstract:
When an accurate hydraulic network model is available, direct modeling techniques are very straightforward and reliable for on-line leakage detection and localization applied to a large class of water distribution networks. In general, this type of technique, based on analytical models, can be seen as an application of the well-known fault detection and isolation theory for complex industrial systems. Nonetheless, the assumption of single-leak scenarios with a certain leak-size pattern is usually made, which may not hold in real applications. Upgrading a leak detection and localization method based on a direct modeling approach to handle multiple-leak scenarios can be, on the one hand, quite straightforward but, on the other hand, highly computationally demanding for a large class of water distribution networks, given the huge number of potential water-loss hotspots. This paper presents a leakage detection and localization method suitable for multiple-leak scenarios and a large class of water distribution networks. The method can be seen as an upgrade of the above-mentioned direct-modeling approach, into which a global search method based on genetic algorithms has been integrated in order to estimate the network water-loss hotspots and the sizes of the leaks. This is a combined inverse/direct modeling method that seeks to benefit from both approaches: on the one hand, the exploration capability of genetic algorithms for estimating network water-loss hotspots and leak sizes; on the other hand, the straightforwardness and reliability offered by an accurate hydraulic model for assessing the network areas close to the estimated hotspots. The application of the resulting method to a DMA of the Barcelona water distribution network is provided and discussed. The obtained results show that leakage detection and localization under multiple-leak scenarios can be performed efficiently following a simple procedure.
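A toy version of the genetic search over multiple simultaneous leak hypotheses is sketched below; `simulate` stands in for the hydraulic network model and `measured` for the pressure readings, and every GA setting (population size, mutation rate, two-leak chromosomes) is an illustrative assumption rather than the paper's configuration.

```python
import random

def ga_leak_search(candidate_nodes, measured, simulate,
                   pop_size=40, generations=60, max_leak_lps=10.0):
    """Toy genetic search over scenarios of two simultaneous leaks (node, size in l/s).

    `simulate(scenario)` must return predicted sensor pressures for a leak
    scenario using a hydraulic network model; `measured` holds the observed
    pressures. Both are placeholders, and all GA settings are illustrative.
    """
    def random_leak():
        return (random.choice(candidate_nodes), random.uniform(0.0, max_leak_lps))

    def fitness(scenario):
        predicted = simulate(scenario)
        return -sum((p - m) ** 2 for p, m in zip(predicted, measured))  # higher is better

    population = [[random_leak(), random_leak()] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]       # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            mother, father = random.sample(parents, 2)
            child = [mother[0], father[1]]          # crossover: one leak from each parent
            if random.random() < 0.2:               # occasional mutation of a leak size
                i = random.randrange(2)
                child[i] = (child[i][0], random.uniform(0.0, max_leak_lps))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)             # best multi-leak hypothesis
```

The hotspots returned by such a search would then be refined with the direct model in the surrounding network areas, as described above.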
Abstract:
The US term structure of interest rates plays a central role in fixed-income analysis. For example, accurately estimating the US term structure is a crucial step for those interested in analyzing Brazilian Brady bonds such as IDUs, DCBs, FLIRBs, EIs, etc. In this work we present a statistical model to estimate the US term structure of interest rates. We address in this report all major issues that arose in the process of implementing the model, concentrating on important practical matters such as computational efficiency, robustness of the final implementation, and the statistical properties of the final model. Numerical examples are provided in order to illustrate the use of the model on a daily basis.
Abstract:
We study the joint determination of the lag length, the dimension of the cointegrating space, and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria that have data-dependent penalties for a lack of parsimony, as well as the traditional ones. We suggest a new procedure which is a hybrid of the traditional criteria and the criteria with data-dependent penalties. In order to compute the fit of each model, we propose an iterative procedure to compute the maximum likelihood estimates of the parameters of a VAR model with short-run and long-run restrictions. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank, relative to the commonly used procedure of selecting the lag length only and then testing for cointegration.
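As background, the traditional criteria (AIC, BIC/Schwarz, Hannan-Quinn) share the generic form below when applied to a VAR with lag length p and cointegrating rank r, with the data-dependent-penalty criteria modifying the term c_T. This is a standard summary, not quoted from the article; π(p, r) denotes the number of freely estimated parameters under the rank restrictions.

```latex
% Generic information-criterion form for joint lag-length/rank selection
% (standard summary, not quoted from the article): \hat{\Sigma}(p,r) is the
% residual covariance of the restricted VAR, \pi(p,r) the number of freely
% estimated parameters, and T the sample size.
\[
  \mathrm{IC}(p,r) \;=\; \ln\bigl|\hat{\Sigma}(p,r)\bigr|
        \;+\; \frac{c_T}{T}\,\pi(p,r),
  \qquad
  c_T =
  \begin{cases}
    2          & \text{(AIC)}\\
    \ln T      & \text{(BIC/Schwarz)}\\
    2\ln\ln T  & \text{(Hannan--Quinn).}
  \end{cases}
\]
```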