836 results for "Data fusion applications"
Abstract:
Concept drift, which refers to learning problems that are non-stationary over time, is of increasing importance in machine learning and data mining. Many concept drift applications require fast response, meaning an algorithm must always be (re)trained with the latest available data. However, data labeling is usually expensive and/or time-consuming compared with the acquisition of unlabeled data, so typically only a small fraction of the incoming data can be labeled. Semi-supervised learning methods may help in this scenario, as they use both labeled and unlabeled data in the training process. However, most of them assume that the data are static. Semi-supervised learning with concept drift therefore remains an open and challenging task in machine learning. Recently, a particle competition and cooperation approach was developed to perform graph-based semi-supervised learning on static data. We have extended that approach to handle data streams and concept drift. The result is a passive, single-classifier algorithm that adapts naturally to concept changes without any explicit drift-detection mechanism. Its built-in mechanisms provide a natural way of learning from new data, gradually "forgetting" older knowledge as older data items become less useful for the classification of newer ones. The proposed algorithm is applied to the KDD Cup 1999 network intrusion data, showing its effectiveness.
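No implementation details are given in the abstract; purely as an illustration of the passive-forgetting idea (and not of the authors' particle competition and cooperation algorithm), the hypothetical Python sketch below keeps labeled and unlabeled stream items in a fixed-size window, so adaptation to concept changes happens naturally as old items fall out of it.

```python
# Hypothetical sketch of stream classification with gradual forgetting.
# This is NOT the particle competition and cooperation algorithm from
# the abstract; it only illustrates the passive-adaptation idea: a
# fixed-size window makes old knowledge fade without drift detection.
from collections import deque
import numpy as np

class ForgettingSemiSupervised:
    def __init__(self, window=500, k=5):
        self.window = deque(maxlen=window)  # oldest items drop out automatically
        self.k = k

    def partial_fit(self, x, y=None):
        # y is None for unlabeled items; both kinds enter the window
        self.window.append((np.asarray(x, dtype=float), y))

    def predict(self, x):
        labeled = [(xi, yi) for xi, yi in self.window if yi is not None]
        if not labeled:
            return None  # no labeled evidence left in the window
        X = np.stack([xi for xi, _ in labeled])
        d = np.linalg.norm(X - np.asarray(x, dtype=float), axis=1)
        votes = {}
        for i in np.argsort(d)[: self.k]:
            votes[labeled[i][1]] = votes.get(labeled[i][1], 0) + 1
        return max(votes, key=votes.get)
```

In this sketch a new item is first classified and then fed back through partial_fit; as the window slides, items from an outdated concept stop influencing predictions, mirroring the gradual "forgetting" described above.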
Abstract:
Faced with an imminent restructuring of the electric power system, many countries have invested over the past few years in a new paradigm known as the Smart Grid. This paradigm targets the optimization and automation of the electric power network using advanced information and communication technologies. Among the main communication protocols for Smart Grids is the DNP3 protocol, which provides secure data transmission at moderate rates. IEEE 802.15.4 is another communication protocol widely used in Smart Grids, especially in the so-called Home Area Network (HAN). Thus, many Smart Grid applications depend on the interaction of these two protocols. This paper proposes modeling, in the traditional network simulator NS-2, the integration of the DNP3 protocol with the IEEE 802.15.4 wireless standard, enabling low-cost simulation of Smart Grid applications.
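The paper's NS-2 models are not reproduced here; as a rough illustration of why the two protocols must be integrated at all, the hypothetical Python sketch below fragments a maximum-size DNP3 link-layer frame (292 bytes) into IEEE 802.15.4-sized pieces, since an 802.15.4 PHY frame carries at most 127 bytes. The 102-byte usable payload is an assumed figure, as the real value depends on MAC header and security overhead.

```python
# Hypothetical sketch of the integration problem the paper models: a DNP3
# link-layer frame (up to 292 bytes) does not fit in one IEEE 802.15.4
# frame (127-byte PHY payload), so it must be fragmented and reassembled.
IEEE802154_PAYLOAD = 102  # assumed usable bytes after MAC overhead
DNP3_MAX_FRAME = 292      # maximum DNP3 link-layer frame size

def fragment(dnp3_frame: bytes, mtu: int = IEEE802154_PAYLOAD):
    """Split a DNP3 frame into 802.15.4-sized fragments."""
    return [dnp3_frame[i:i + mtu] for i in range(0, len(dnp3_frame), mtu)]

def reassemble(fragments):
    return b"".join(fragments)

frame = bytes(DNP3_MAX_FRAME)  # dummy maximum-size DNP3 frame
parts = fragment(frame)
assert reassemble(parts) == frame
print(f"{len(frame)}-byte DNP3 frame -> {len(parts)} 802.15.4 fragments")
```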
Abstract:
This work reports the experimental evaluation of physical and gas permeation parameters of four spinel-based investments developed with or without inclusion of sacrificial fillers. Data were compared with those of three commercial formulations. Airflow tests were conducted from 27 to 546°C, and permeability coefficients were fitted from Forchheimer's equation. Skeletal densities found for spinel- (ρs = 3635 ± 165 kg/m³) and phosphate-bonded (ρs = 2686 ± 11 kg/m³) samples were in agreement with the literature. The developed investments were more porous and less permeable than commercial brands, and the differences were ascribed to the different pore morphologies and hydraulic pore sizes of ceramic matrices. The inclusion of both fibers and microbeads resulted in increases of total porosity (42.6–56.6%) and of Darcian permeability coefficient k1 (0.76 × 10−14–7.03 × 10−14 m²). Air permeation was hindered by increasing flow temperatures, and the effect was related to the influence of gas viscosity on ΔP, in accordance with Darcy's law. Casting quality with molten titanium (CP Ti) was directly proportional to the permeability level of the spinel-based investments. However, the high reactivity of the silica-based investment RP and the formation of α-case during casting hindered the benefits of the highest permeability level of this commercial brand.
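As a minimal sketch of how coefficients such as k1 can be extracted from airflow data with Forchheimer's equation, ΔP/L = (μ/k1)v + (ρ/k2)v², the Python snippet below linearizes the equation and fits a straight line. The velocities and pressure drops are synthetic placeholders, not the paper's measurements, and μ and ρ are approximate values for air near room temperature.

```python
# Minimal sketch: fitting Darcian (k1) and non-Darcian (k2) permeability
# coefficients from pressure-drop vs. velocity data with Forchheimer's
# equation: dP/L = (mu/k1)*v + (rho/k2)*v**2.
import numpy as np

mu, rho, L = 1.85e-5, 1.18, 0.01  # air viscosity (Pa.s), density (kg/m^3), thickness (m)

v = np.linspace(0.05, 0.5, 10)                      # superficial velocity, m/s
dP = L * ((mu / 1e-14) * v + (rho / 1e-9) * v**2)   # synthetic "measurements"

# Linearize: (dP/L)/v = mu/k1 + (rho/k2)*v, then fit a straight line in v.
y = dP / L / v
slope, intercept = np.polyfit(v, y, 1)
k1 = mu / intercept   # Darcian permeability, m^2
k2 = rho / slope      # non-Darcian permeability, m
print(f"k1 = {k1:.2e} m^2, k2 = {k2:.2e} m")
```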
Abstract:
A CMOS/SOI circuit to decode PWM signals is presented as part of a body-implanted neurostimulator for visual prostheses. Since the encoded data stream is the sole input to the circuit, the decoding technique is based on a double-integration concept and does not require dc filtering. Nonoverlapping control phases are internally derived from the incoming pulses, and a fast-settling comparator ensures good discrimination accuracy in the megahertz range. The circuit was integrated in a 2 µm single-metal SOI fabrication process and has an effective area of 2 mm². Typically, the measured resolution of the encoding parameter α was better than 10% at 6 MHz and VDD = 3.3 V. Stand-by consumption is around 340 µW. Pulses with frequencies up to 15 MHz and α = 10% can be discriminated for VDD spanning from 2.3 V to 3.3 V. Such immunity to VDD deviations meets a design specification regarding the inherent coupling losses of transmitting data and power through a transcutaneous link.
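As a software analogy only (the paper describes an analog double-integration circuit, not code), the hypothetical sketch below recovers the duty-cycle parameter α from a sampled PWM waveform by integrating over one period and dividing by the period length, which is the essence of ratio-based PWM decoding.

```python
# Hypothetical numeric illustration of decoding a PWM duty cycle by
# integration; a software analogy, not the paper's CMOS/SOI circuit.
import numpy as np

fs, f_pwm, alpha = 1e9, 6e6, 0.30      # sample rate, PWM frequency, true duty cycle
t = np.arange(0, 1 / f_pwm, 1 / fs)    # time samples over one PWM period
pwm = (t < alpha / f_pwm).astype(float)  # high for the first alpha-fraction

# Time-average over one period = (integral of waveform) / period length,
# which recovers the duty cycle.
alpha_est = pwm.mean()
print(f"true alpha = {alpha:.2f}, decoded alpha = {alpha_est:.2f}")
```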
Abstract:
Bromelain is a cysteine protease found in pineapple tissue. Due to its anti-inflammatory and anti-cancer activities, as well as its ability to induce apoptotic cell death, bromelain has proved useful in several therapeutic areas. The market for this protease is growing, and several studies exploring various properties of this molecule have been reported. This review aims to compile and summarize the main findings on bromelain in the literature to date. The physicochemical properties and stability of bromelain under different conditions are discussed. Several studies on the purification of bromelain from crude extracts have been reported, using a wide range of techniques such as liquid-liquid extraction by aqueous two-phase systems, ultrafiltration, precipitation, and chromatography. Finally, the various applications of bromelain are presented. This review therefore covers the main properties of bromelain, aiming to provide an up-to-date compilation of the data reported on this enzyme.
Abstract:
Most authors struggle to pick a title that adequately conveys all of the material covered in a book. When I first saw Applied Spatial Data Analysis with R, I expected a review of spatial statistical models and their applications in packages (libraries) from the CRAN site of R. The authors’ title is not misleading, but I was very pleasantly surprised by how deep the word “applied” is here. The first half of the book essentially covers how R handles spatial data. To some statisticians this may be boring. Do you want, or need, to know the difference between S3 and S4 classes, how spatial objects in R are organized, and how various methods work on the spatial objects? A few years ago I would have said “no,” especially to the “want” part. Just let me slap my Excel spreadsheet into R and run some spatial functions on it. Unfortunately, the world is not so simple, and ultimately we want to minimize the effort needed to accomplish all of our spatial analyses. The first half of this book certainly convinced me that some extra effort in organizing my data into certain spatial class structures makes the analysis easier and less subject to mistakes. I also admit that I found it very interesting and I learned a lot.
Abstract:
End-user programmers are increasingly relying on web authoring environments to create web applications. Although often consisting primarily of web pages, such applications increasingly go further, harnessing the content available on the web through “programs” that query other web applications for information to drive other tasks. Unfortunately, errors can be pervasive in web applications, impacting their dependability. This paper reports the results of an exploratory study of end-user web application developers, performed with the aim of exposing prevalent classes of errors. The results suggest that end users struggle the most with the identification and manipulation of variables when structuring requests to obtain data from other web sites. To address this problem, we present a family of techniques that help end-user programmers perform this task, reducing possible sources of error. The techniques focus on simplification and characterization of the data that end users must analyze while developing their web applications. We report the results of an empirical study in which these techniques are applied to several popular web sites. Our results reveal several potential benefits for end users who wish to “engineer” dependable web applications.
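As a hypothetical illustration of the error class identified in the study (mismanaging variables when structuring requests to other web sites), the Python sketch below contrasts a hand-concatenated query string with explicitly encoded parameters; the URL and parameter names are invented for the example and are not from the paper.

```python
# Hypothetical illustration: building a web request by hand-concatenating
# variables vs. characterizing them as named parameters and encoding once.
from urllib.parse import urlencode

base = "https://example.com/search"   # hypothetical endpoint
params = {"q": "data fusion", "page": 2}

# Error-prone: manual concatenation loses the space encoding.
fragile = base + "?q=" + params["q"] + "&page=" + str(params["page"])

# Safer: encode the whole parameter set in one step.
robust = f"{base}?{urlencode(params)}"

print(fragile)  # ...?q=data fusion&page=2  (unencoded space, broken URL)
print(robust)   # ...?q=data+fusion&page=2  (properly encoded)
```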
Abstract:
Technical evaluation of analytical data is of extreme relevance, considering it can be used for comparisons with environmental quality standards and for decision-making related to the management of dredged sediment disposal and the evaluation of salt and brackish water quality in accordance with the CONAMA 357/05 Resolution. It is, therefore, essential that the project manager discuss the environmental agency's technical requirements with the contracted laboratory, both for the follow-up of the analysis underway and with a view to possible re-analysis when anomalous data are identified. The main technical requirements are: (1) method quantitation limits (QLs) should fall below environmental standards; (2) analyses should be carried out in laboratories whose analytical scope is accredited by the National Institute of Metrology (INMETRO) or qualified or accepted by a licensing agency; (3) a chain of custody should be provided in order to ensure sample traceability; (4) control charts should be provided to prove method performance; (5) certified reference material analysis or, if that is not available, matrix spike analysis should be undertaken; and (6) chromatograms should be included in the analytical report. Within this context, and with a view to helping environmental managers evaluate analytical reports, this work has as its objectives the discussion of the limitations of applying SW-846 US EPA methods to marine samples, of the consequences of having data based on method detection limits (MDLs) rather than sample quantitation limits (SQLs), and of possible modifications to the principal methods applied by laboratories in order to comply with environmental quality standards.
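As a minimal sketch of why the MDL/SQL distinction matters, the Python snippet below computes the classic US EPA method detection limit (40 CFR Part 136, Appendix B) from seven low-level spike replicates; the replicate values are made-up numbers for illustration only.

```python
# Minimal sketch of the classic US EPA MDL computation (40 CFR Part 136,
# Appendix B): MDL = t(n-1, 99%) * s over n low-level spike replicates.
from statistics import stdev

replicates = [0.52, 0.49, 0.55, 0.47, 0.51, 0.53, 0.50]  # ug/L, synthetic
t_99 = 3.143  # one-tailed Student's t at 99% confidence, n-1 = 6 df
mdl = t_99 * stdev(replicates)
print(f"MDL = {mdl:.3f} ug/L")
# A sample quantitation limit (SQL) would additionally account for
# sample-specific factors such as dilution and matrix, so SQL >= MDL,
# which is why reporting against the MDL alone can be misleading.
```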
Abstract:
In [1], the authors proposed a framework for automated clustering and visualization of biological data sets named AUTO-HDS. This letter is intended to complement that framework by showing that it is possible to eliminate one of its user-defined parameters, in such a way that the clustering stage can be implemented more accurately and with reduced computational complexity.