969 results for on-disk data layout
Abstract:
Plane model extraction from three-dimensional point clouds is a necessary step in many different applications such as planar object reconstruction, indoor mapping and indoor localization. Different RANdom SAmple Consensus (RANSAC)-based methods have been proposed for this purpose in recent years. In this study, we propose a novel RANSAC-based method called Multiplane Model Estimation, which can estimate multiple plane models simultaneously from a noisy point cloud using the knowledge extracted from a scene (or an object) in order to reconstruct it accurately. This method comprises two steps: first, it clusters the data into planar faces that preserve some constraints defined by knowledge related to the object (e.g., the angles between faces); and second, the models of the planes are estimated based on these data using a novel multi-constraint RANSAC. We performed experiments on the clustering and RANSAC stages, which showed that the proposed method performed better than state-of-the-art methods.
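The single-plane core of such an approach can be sketched as follows. This is a minimal illustrative RANSAC plane fit only; the paper's multi-constraint, multi-plane variant additionally enforces knowledge-based constraints (such as inter-face angles), which are omitted here.

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.05, seed=None):
    """Fit one plane n·p + d = 0 to noisy 3-D points and return its inlier mask.

    Minimal single-plane RANSAC: repeatedly fit a plane through 3 random
    points and keep the hypothesis with the most points within `tol`.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

A multi-plane extension would typically run this repeatedly, removing each plane's inliers, while rejecting hypotheses that violate the scene constraints.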
Abstract:
Sensing techniques are important for solving problems of uncertainty inherent in intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when neither force nor pressure data are available. This new approach is also used to measure changes in the shape of an object’s surfaces, allowing us to detect deformations caused by inappropriate pressure applied by the hand’s fingers. Tests were carried out on grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of the objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from the recognition of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
Abstract:
The aim of this paper is to analyse the proposed Directive on criminal sanctions for insider dealing and market manipulation (COM(2011)654 final), which represents the first exercise of the European Union competence provided for by Article 83(2) of the Treaty on the Functioning of the European Union. The proposal aims at harmonising the sanctioning regimes provided by the Member States for market abuse, imposing the introduction of criminal sanctions and providing an opportunity to critically reflect on the position taken by the Commission towards the use of criminal law. The paper will discuss briefly the evolution of the EU’s criminal law competence, focusing on the Lisbon Treaty. It will analyse the ‘essentiality standard’ for the harmonisation of criminal law included in Article 83(2) TFEU, concluding that this standard encompasses both the subsidiarity and the ultima ratio principles and implies important practical consequences for the Union’s legislator. The research will then focus on the proposed Directive, seeking to assess whether the Union’s legislator, notwithstanding the ‘symbolic’ function of this proposal in the financial crisis, provides consistent arguments on the respect of the ‘essentiality standard’. The paper will note that the proposal raises some concerns, because of the lack of a clear reliance on empirical data regarding the essential need for the introduction of criminal law provisions. It will be stressed that only the assessment of the essential need of an EU action, according to the standard set in Article 83(2) TFEU, can guarantee a coherent choice of the areas affected by the harmonisation process, preventing the legislator from choosing on the basis of other grounds.
Abstract:
The FANOVA (or “Sobol’-Hoeffding”) decomposition of multivariate functions has been used for high-dimensional model representation and global sensitivity analysis. When the objective function f has no simple analytic form and is costly to evaluate, computing FANOVA terms may be unaffordable due to numerical integration costs. Several approximate approaches relying on Gaussian random field (GRF) models have been proposed to alleviate these costs, where f is substituted by a (kriging) predictor or by conditional simulations. Here we focus on FANOVA decompositions of GRF sample paths, and we notably introduce an associated kernel decomposition into 4^d terms called KANOVA. An interpretation in terms of tensor product projections is obtained, and it is shown that projected kernels control both the sparsity of GRF sample paths and the dependence structure between FANOVA effects. Applications on simulated data show the relevance of the approach for designing new classes of covariance kernels dedicated to high-dimensional kriging.
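The decomposition itself can be illustrated numerically. The sketch below computes grid-based FANOVA terms of a bivariate function; it illustrates only the Sobol’-Hoeffding decomposition of a single function, not the paper’s KANOVA construction, which operates on covariance kernels.

```python
import numpy as np

def fanova_2d(f, n=200):
    """Grid-based FANOVA (Sobol'-Hoeffding) decomposition of f on [0, 1]^2.

    Returns the constant term f0, the main effects f1(x1) and f2(x2), and
    the interaction f12(x1, x2), with integrals replaced by averages over
    cell midpoints.
    """
    x = (np.arange(n) + 0.5) / n                  # cell midpoints in [0, 1]
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    F = f(X1, X2)
    f0 = F.mean()                                 # overall mean
    f1 = F.mean(axis=1) - f0                      # main effect of x1
    f2 = F.mean(axis=0) - f0                      # main effect of x2
    f12 = F - f0 - f1[:, None] - f2[None, :]      # interaction (centred residual)
    return f0, f1, f2, f12
```

By construction the terms are centred and mutually orthogonal on the grid, so the variance of f splits into the variances of the effects, which is the basis of Sobol’ sensitivity indices.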
Abstract:
Considers (87) S. 174, (87) H.R. 293, (87) H.R. 299, (87) H.R. 496, (87) H.R. 776, (87) H.R. 1762, (87) H.R. 1925, (87) H.R. 2008, (87) H.R. 8237.
Abstract:
Considers (73) S. 2915.
Abstract:
Considers (73) H.R. 5267, (73) H.R. 3083, (73) S. 1868, (73) H.R. 5950.
Abstract:
This paper proposes a novel application of fuzzy logic to web data mining for two basic problems of a website: popularity and satisfaction. Popularity means that people will visit the website, while satisfaction refers to the usefulness of the site. We will illustrate that the popularity of a website is a fuzzy logic problem; it is an important characteristic that a website needs in order to survive in Internet commerce. The satisfaction of a website is also a fuzzy logic problem, representing the degree of success in the application of information technology to the business. We propose a fuzzy logic framework for the representation of these two problems, based on web data mining techniques to fuzzify the attributes of a website.
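As an illustration of fuzzifying a website attribute, the sketch below maps a crisp daily-visit count to membership degrees in three fuzzy sets; the set boundaries are hypothetical, not taken from the paper.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def fuzzify_popularity(daily_visits):
    """Map a crisp daily-visit count to membership degrees in three fuzzy sets.

    The breakpoints are illustrative placeholders, not values from the paper.
    """
    return {
        "low":    trapezoid(daily_visits, -1, 0, 100, 500),
        "medium": trapezoid(daily_visits, 100, 500, 2000, 5000),
        "high":   trapezoid(daily_visits, 2000, 5000, 10**9, 2 * 10**9),
    }
```

A fuzzy rule base (e.g. combining popularity with a similarly fuzzified satisfaction attribute) would then operate on these membership degrees.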
Abstract:
We consider the problem of assessing the number of clusters in a limited number of tissue samples containing gene expressions for possibly several thousands of genes. It is proposed to use a normal mixture model-based approach to the clustering of the tissue samples. One advantage of this approach is that the question of the number of clusters in the data can be formulated in terms of a test on the smallest number of components in the mixture model compatible with the data. This test can be carried out on the basis of the likelihood ratio test statistic, using resampling to assess its null distribution. The effectiveness of this approach is demonstrated on simulated data and on some microarray datasets, as considered previously in the bioinformatics literature. (C) 2004 Elsevier Inc. All rights reserved.
Abstract:
Although smoking is widely recognized as a major cause of cancer, there is little information on how it contributes to the global and regional burden of cancers in combination with other risk factors that affect background cancer mortality patterns. We used data from the American Cancer Society's Cancer Prevention Study II (CPS-II) and the WHO and IARC cancer mortality databases to estimate deaths from 8 clusters of site-specific cancers caused by smoking, for 14 epidemiologic subregions of the world, by age and sex. We used lung cancer mortality as an indirect marker for accumulated smoking hazard. CPS-II hazards were adjusted for important covariates. In the year 2000, an estimated 1.42 (95% CI 1.27-1.57) million cancer deaths in the world, 21% of total global cancer deaths, were caused by smoking. Of these, 1.18 million deaths were among men and 0.24 million among women; 625,000 (95% CI 485,000-749,000) smoking-caused cancer deaths occurred in the developing world and 794,000 (95% CI 749,000-840,000) in industrialized regions. Lung cancer accounted for 60% of smoking-attributable cancer mortality, followed by cancers of the upper aerodigestive tract (20%). Based on available data, more than one in every 5 cancer deaths in the world in the year 2000 was caused by smoking, making it possibly the single largest preventable cause of cancer mortality. There was significant variability across regions in the role of smoking as a cause of the different site-specific cancers. This variability illustrates the importance of coupling research and surveillance of smoking with those for other risk factors for more effective cancer prevention. (C) 2005 Wiley-Liss, Inc.
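The basic identity underlying such attribution estimates is the population attributable fraction. The sketch below applies the standard formula PAF = p(RR - 1) / (p(RR - 1) + 1); note that the study itself uses an indirect method based on lung cancer mortality as a marker of accumulated hazard rather than direct smoking prevalence.

```python
def attributable_deaths(total_deaths, exposed_fraction, relative_risk):
    """Deaths attributable to an exposure, via the standard population
    attributable fraction: PAF = p(RR - 1) / (p(RR - 1) + 1).

    `exposed_fraction` is the prevalence p of exposure in the population,
    `relative_risk` the risk ratio RR for exposed vs unexposed.
    """
    excess = exposed_fraction * (relative_risk - 1.0)
    paf = excess / (excess + 1.0)
    return total_deaths * paf
```

For example, with 50% exposure prevalence and a relative risk of 3, half of all deaths from the cause are attributable to the exposure.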
Abstract:
Electricity market price forecasting is a challenging yet very important task for electricity market managers and participants. Due to the complexity and uncertainties in the power grid, electricity prices are highly volatile and normally carry spikes, which may be tens or even hundreds of times higher than the normal price. Such electricity spikes are very difficult to predict. So far, most of the research on electricity price forecasting has been based on normal-range electricity prices. This paper proposes a data mining based electricity price forecast framework, which can predict the normal price as well as the price spikes. The normal price can be predicted by a previously proposed wavelet and neural network based forecast model, while the spikes are forecasted based on a data mining approach. This paper focuses on the spike prediction and explores the reasons for price spikes based on the measurement of a proposed composite supply-demand balance index (SDI) and relative demand index (RDI). These indices are able to reflect the relationship among electricity demand, electricity supply and electricity reserve capacity. The proposed model is based on a mining database including market clearing price, trading hour, electricity demand, electricity supply and reserve. Bayesian classification and similarity searching techniques are used to mine the database to find the internal relationships between electricity price spikes and these proposed indices. The mining results are used to form the price spike forecast model. The proposed model is able to generate the forecasted price spike, the level of the spike and an associated forecast confidence level. The model is tested with Queensland electricity market data with promising results. Crown Copyright (C) 2004 Published by Elsevier B.V. All rights reserved.
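The two indices can be illustrated as follows. The abstract does not give their exact formulas, so the definitions below (demand over available capacity for SDI, demand over its trailing mean for RDI) are assumed, illustrative forms only.

```python
import numpy as np

def trailing_mean(x, window):
    """Mean of the last `window` values ending at each position (shorter at the start)."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    n = np.arange(1, len(x) + 1)
    lo = np.maximum(n - window, 0)
    return (c[n] - c[lo]) / (n - lo)

def balance_indices(demand, supply, reserve, window=24):
    """Illustrative SDI and RDI for hourly market data (assumed forms).

    SDI compares demand against available capacity (supply plus reserve);
    RDI compares demand against its recent trailing mean. Values near or
    above 1 flag hours at elevated risk of a price spike.
    """
    demand = np.asarray(demand, float)
    capacity = np.asarray(supply, float) + np.asarray(reserve, float)
    sdi = demand / capacity
    rdi = demand / trailing_mean(demand, window)
    return sdi, rdi
```

Hourly index values like these could then serve as features for the spike classifier alongside price, trading hour and reserve data.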
Abstract:
Introduction: In the World Health Organization (WHO) MONICA (multinational MONItoring of trends and determinants in CArdiovascular disease) Project, considerable effort was made to obtain basic data on non-respondents to community based surveys of cardiovascular risk factors. The first purpose of this paper is to examine differences in socio-economic and health profiles among respondents and non-respondents. The second purpose is to investigate the effect of non-response on estimates of trends. Methods: Socio-economic and health profiles of respondents and non-respondents in the WHO MONICA Project final survey were compared. The potential effect of non-response on the trend estimates between the initial survey and final survey approximately ten years later was investigated using both MONICA data and hypothetical data. Results: In most of the populations, non-respondents were more likely to be single, less well educated, and had poorer lifestyles and health profiles than respondents. As an example of the consequences, temporal trends in prevalence of daily smokers are shown to be overestimated in most populations if they were based only on data from respondents. Conclusions: The socio-economic and health profiles of respondents and non-respondents differed fairly consistently across 27 populations. Hence, the estimators of population trends based on respondent data are likely to be biased. Declining response rates therefore pose a threat to the accuracy of estimates of risk factor trends in many countries.
Abstract:
Background: The structure of proteins may change as a result of the inherent flexibility of some protein regions. We develop and explore probabilistic machine learning methods for predicting a continuum secondary structure, i.e. assigning probabilities to the conformational states of a residue. We train our methods using data derived from high-quality NMR models. Results: Several probabilistic models not only successfully estimate the continuum secondary structure, but also provide a categorical output on par with models directly trained on categorical data. Importantly, models trained on the continuum secondary structure are also better than their categorical counterparts at identifying the conformational state for structurally ambivalent residues. Conclusion: Cascaded probabilistic neural networks trained on the continuum secondary structure exhibit better accuracy in structurally ambivalent regions of proteins, while sustaining an overall classification accuracy on par with standard, categorical prediction methods.
Abstract:
Background & Aims: Steatosis is a frequent histologic finding in chronic hepatitis C (CHC), but it is unclear whether steatosis is an independent predictor for liver fibrosis. We evaluated the association between steatosis and fibrosis and their common correlates in persons with CHC and in subgroup analyses according to hepatitis C virus (HCV) genotype and body mass index. Methods: We conducted a meta-analysis on individual data from 3068 patients with histologically confirmed CHC recruited from 10 clinical centers in Italy, Switzerland, France, Australia, and the United States. Results: Steatosis was present in 1561 patients (50.9%) and fibrosis in 2688 (87.6%). HCV genotype was 1 in 1694 cases (55.2%), 2 in 563 (18.4%), 3 in 669 (21.8%), and 4 in 142 (4.6%). By stepwise logistic regression, steatosis was associated independently with genotype 3, the presence of fibrosis, diabetes, hepatic inflammation, ongoing alcohol abuse, higher body mass index, and older age. Fibrosis was associated independently with inflammatory activity, steatosis, male sex, and older age, whereas HCV genotype 2 was associated with reduced fibrosis. In the subgroup analyses, the association between steatosis and fibrosis invariably was dependent on a simultaneous association between steatosis and hepatic inflammation. Conclusions: In this large and geographically diverse group of CHC patients, steatosis is confirmed as significantly and independently associated with fibrosis in CHC. Hepatic inflammation may mediate fibrogenesis in patients with liver steatosis. Control of metabolic factors (such as overweight, via lifestyle adjustments) appears important in the management of CHC.
Abstract:
This special issue is a collection of selected papers published in the proceedings of the First International Conference on Advanced Data Mining and Applications (ADMA), held in Wuhan, China in 2005. The articles focus on innovative applications of data mining approaches to problems that involve large data sets, incomplete and noisy data, or demand optimal solutions.