154 results for Generalized Cubes


Relevance: 10.00%

Abstract:

Opinion Mining is becoming increasingly important, especially for analyzing and forecasting customer behavior for business purposes. The right decision about producing new products or services, based on data about customers' characteristics, means profit for the organization or company. This paper proposes a new architecture for Opinion Mining that uses a multidimensional model to integrate customers' characteristics and their comments about products (or services). The key step toward this objective is to transfer comments (opinions) into a fact table that includes several dimensions, such as customers, products, time and locations. This research presents a comprehensive way to calculate customers' orientation toward all possible product attributes. A case study is also presented to show the advantages of using OLAP and data cubes to analyze customers' opinions.
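The proposed architecture centers on a fact table whose rows link an opinion to customer, product, time and location dimensions. A minimal sketch of such a table and an OLAP-style roll-up; the dimension values and sentiment scores are hypothetical, not from the paper:

```python
from collections import defaultdict

# Hypothetical fact rows: each opinion links customer, product, time and
# location dimensions to a sentiment score in [-1, 1].
fact_opinions = [
    {"customer": "c1", "product": "phone",  "month": "2024-01", "city": "Austin", "sentiment": 0.8},
    {"customer": "c2", "product": "phone",  "month": "2024-01", "city": "Austin", "sentiment": -0.2},
    {"customer": "c3", "product": "tablet", "month": "2024-02", "city": "Boston", "sentiment": 0.5},
]

def rollup(facts, *dims):
    """Mean sentiment aggregated over the chosen dimensions (an OLAP roll-up)."""
    groups = defaultdict(list)
    for row in facts:
        groups[tuple(row[d] for d in dims)].append(row["sentiment"])
    return {key: sum(scores) / len(scores) for key, scores in groups.items()}

by_product = rollup(fact_opinions, "product")           # customer orientation per product
by_month_city = rollup(fact_opinions, "month", "city")  # sliced by time and location
```

Rolling up the same fact table along different dimension subsets is what lets OLAP answer questions such as how customers in one city feel about a product over time, without re-mining the raw comments.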

Relevance: 10.00%

Abstract:

Since the availability of 3D full-body scanners and the associated software systems for operating on large point clouds, 3D anthropometry has been marketed as a breakthrough and milestone in ergonomic design. The assumptions made by representatives of the 3D paradigm need to be critically reviewed, though. 3D anthropometry has advantages as well as shortfalls, which need to be carefully considered. While it is apparent that measuring a full-body point cloud allows for easier storage of raw data and improves quality control, the difficulty of calculating standardized measurements from the point cloud is widely underestimated. Early studies that used 3D point clouds to derive anthropometric dimensions showed unacceptable deviations from the standardized results measured manually. While 3D human point clouds provide a valuable tool for replicating specific individuals for further virtual studies, or for personalizing garments, their use in ergonomic design must be critically assessed. Ergonomic, volumetric problems are defined by their two-dimensional boundaries or one-dimensional sections; a 1D/2D approach is therefore sufficient to solve an ergonomic design problem. As a consequence, all modern 3D human manikins are defined by the underlying anthropometric girths (2D) and lengths/widths (1D), which can be measured efficiently using manual techniques. Traditionally, ergonomists have taken a statistical approach, designing for generalized percentiles of the population rather than for a single user. The underlying method is based on the distribution functions of meaningful one- and two-dimensional anthropometric variables. Compared to these variables, the distribution of human volume has no ergonomic relevance. On the other hand, if volume is seen as a two-dimensional integral or distribution function of length and girth, the calculation of combined percentiles (a common ergonomic requirement) is undefined.
Consequently, we suggest critically reviewing the cost and use of 3D anthropometry, and we recommend making proper use of the widely available one- and two-dimensional anthropometric data in ergonomic design.
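The claim that combined percentiles are undefined can be made concrete with a small simulation; all population parameters below are invented for illustration. Selecting everyone at or below the 95th percentile on each of two correlated body measures captures noticeably fewer than 95% of people:

```python
import random

random.seed(0)

# Hypothetical population: stature (cm) and chest girth (cm), positively
# correlated through a shared latent factor z.
population = []
for _ in range(100_000):
    z = random.gauss(0, 1)
    stature = 175 + 7 * z + random.gauss(0, 4)
    girth = 95 + 8 * z + random.gauss(0, 6)
    population.append((stature, girth))

def percentile(values, p):
    ordered = sorted(values)
    return ordered[int(p / 100 * (len(ordered) - 1))]

p95_stature = percentile([s for s, _ in population], 95)
p95_girth = percentile([g for _, g in population], 95)

# Fraction of people within the 95th percentile on BOTH measures: this comes
# out below 0.95, so univariate percentiles do not combine into a population
# percentile -- the requirement the text calls undefined.
covered = sum(s <= p95_stature and g <= p95_girth
              for s, g in population) / len(population)
```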

Relevance: 10.00%

Abstract:

Volume measurements are useful in many branches of science and medicine. They are usually accomplished by acquiring a sequence of cross-sectional images through the object using an appropriate scanning modality, for example x-ray computed tomography (CT), magnetic resonance (MR) or ultrasound (US). In the cases of CT and MR, a dividing cubes algorithm can be used to describe the surface as a triangle mesh. However, such algorithms are not suitable for US data, especially when the image sequence is multiplanar (as it usually is). This problem may be overcome by manually tracing regions of interest (ROIs) on the registered multiplanar images and connecting the points into a triangular mesh. In this paper we describe and evaluate a new discrete form of Gauss' theorem which enables calculation of the volume enclosed by any surface described by a triangular mesh. The volume is calculated by summing the vector product of the centroid, area and normal of each surface triangle. The algorithm was tested on computer-generated objects, US-scanned balloons, livers and kidneys, and CT-scanned clay rocks. The results, expressed as the mean percentage difference ± one standard deviation, were 1.2 ± 2.3, 5.5 ± 4.7, 3.0 ± 3.2 and −1.2 ± 3.2% for balloons, livers, kidneys and rocks respectively. The results compare favourably with other volume estimation methods such as planimetry and tetrahedral decomposition.
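The summation the paper describes (one third of the sum, over faces, of each centroid dotted with area times outward normal) is easy to state in code. A minimal standard-library sketch, checked on a tetrahedron rather than on scan data:

```python
def mesh_volume(triangles):
    """Volume enclosed by a closed triangular mesh via the discrete form of
    Gauss' (divergence) theorem: V = (1/3) * sum over faces of
    centroid . (area * outward normal). Each triangle is a tuple of three
    vertices wound counter-clockwise as seen from outside the surface."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    vol = 0.0
    for a, b, c in triangles:
        centroid = tuple((a[i] + b[i] + c[i]) / 3 for i in range(3))
        # 0.5 * (b-a) x (c-a) has magnitude = triangle area, direction = normal
        area_normal = tuple(0.5 * x for x in cross(sub(b, a), sub(c, a)))
        vol += dot(centroid, area_normal)
    return vol / 3.0

# Unit right tetrahedron (volume 1/6), all four faces wound outward:
O, A, B, C = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
tetra = [(A, B, C), (O, A, C), (O, C, B), (O, B, A)]
```

The signed result also flags inconsistent winding: reversing every face negates the volume, which is a cheap sanity check when building meshes from traced ROIs.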

Relevance: 10.00%

Abstract:

In the field of process mining, the use of event logs for root cause analysis is increasingly studied. In such an analysis, the availability of attributes/features that may explain the root cause of some phenomenon is crucial. Currently, obtaining these attributes from raw event logs is done more or less on a case-by-case basis: there is still a lack of a generalized, systematic approach that captures this process. This paper proposes a systematic approach to enrich and transform event logs in order to obtain the attributes required for root cause analysis using a classical data mining technique, classification. The approach is formalized, and its applicability has been validated using both self-generated and publicly available logs.
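A minimal sketch of the enrichment step: raw events are aggregated into case-level attributes that a classifier can consume. The event log and the two attributes below (throughput time and a rework flag) are invented for illustration, not the paper's attribute set:

```python
from datetime import datetime

# Hypothetical raw event log: (case id, activity, timestamp).
events = [
    ("c1", "register", "2024-01-01 09:00"), ("c1", "check", "2024-01-01 10:00"),
    ("c1", "check", "2024-01-01 12:00"), ("c1", "approve", "2024-01-02 09:00"),
    ("c2", "register", "2024-01-01 09:30"), ("c2", "check", "2024-01-01 10:30"),
    ("c2", "approve", "2024-01-01 11:30"),
]

def enrich(log):
    """Aggregate events per case into attributes usable by a classifier."""
    cases = {}
    for case, activity, ts in log:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        rec = cases.setdefault(case, {"start": t, "end": t, "counts": {}})
        rec["start"] = min(rec["start"], t)
        rec["end"] = max(rec["end"], t)
        rec["counts"][activity] = rec["counts"].get(activity, 0) + 1
    return {
        case: {
            "throughput_h": (r["end"] - r["start"]).total_seconds() / 3600,
            "rework": max(r["counts"].values()) > 1,  # any repeated activity
        }
        for case, r in cases.items()
    }

attrs = enrich(events)
```

Attributes like these, one row per case, are exactly the tabular input classification algorithms expect, which is what makes the log-to-attribute transformation the bridge to root cause analysis.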

Relevance: 10.00%

Abstract:

This paper investigates the relationship between traffic conditions and crash occurrence likelihood (COL) using the I-880 data. To remedy the data limitations and methodological shortcomings of previous studies, a multiresolution data processing method is proposed and implemented, upon which binary logistic models were developed. The major findings are: 1) traffic conditions have significant impacts on COL at the study site; specifically, COL in a congested (transitioning) traffic flow is about 6 (1.6) times that in free-flow conditions; 2) speed variance alone is not sufficient to capture the impact of traffic dynamics on COL; a traffic chaos indicator that integrates speed, speed variance, and flow is proposed and shows promising performance; 3) models based on aggregated data should be interpreted with caution. In general, conclusions obtained from such models should not be generalized to individual vehicles (drivers) without further evidence from high-resolution data, and it is dubious to either claim or disclaim that "speed kills" based on aggregated data.
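For intuition on how a binary logistic model yields effect sizes such as "about 6 times", here is a sketch; the coefficients below are constructed to reproduce the reported ratios and are not the fitted I-880 model:

```python
import math

# Hypothetical logit for crash occurrence likelihood (COL):
# log-odds = b0 + b_cong * congested + b_trans * transitioning
b0 = -6.0               # baseline: crashes are rare in free flow
b_cong = math.log(6.0)  # odds ratio 6 for congested flow
b_trans = math.log(1.6) # odds ratio 1.6 for transitioning flow

def col(congested=0, transitioning=0):
    z = b0 + b_cong * congested + b_trans * transitioning
    return 1 / (1 + math.exp(-z))

# Because the outcome is rare, odds ratios approximate risk ratios, so the
# probability ratios land close to the stated 6 and 1.6.
ratio_congested = col(congested=1) / col()
ratio_transition = col(transitioning=1) / col()
```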

Relevance: 10.00%

Abstract:

Traditional crash prediction models, such as generalized linear regression models, cannot take into account the multilevel data structure that pervades crash data. Disregarding the possible within-group correlations can produce models with unreliable and biased estimates of the unknowns. This study proposes a multilevel hierarchy, viz. (Geographic region level – Traffic site level – Traffic crash level – Driver-vehicle unit level – Vehicle-occupant level) × Time level, to establish a general form of multilevel data structure in traffic safety analysis. To properly model the potential cross-group heterogeneity due to the multilevel data structure, a framework of Bayesian hierarchical models that explicitly specifies the multilevel structure and correctly yields parameter estimates is introduced and recommended. The proposed method is illustrated in an individual-severity analysis of intersection crashes using Singapore crash records. The study demonstrates the importance of accounting for within-group correlations, and the flexibility and effectiveness of the Bayesian hierarchical method in modeling the multilevel structure of traffic crash data.

Relevance: 10.00%

Abstract:

Objective: Preclinical and clinical data suggest that lipid biology is integral to brain development and neurodegeneration, both of which are proposed as important in the pathogenesis of schizophrenia. The purpose of this paper is to examine the implications of lipid biology, in particular the role of essential fatty acids (EFA), for schizophrenia. Methods: Medline databases were searched from 1966 to 2001, followed by cross-checking of references. Results: Most studies investigating lipids in schizophrenia described reduced EFA, altered glycerophospholipids and increased activity of a calcium-independent phospholipase A2 in blood cells and in post-mortem brain tissue. Additionally, in vivo brain phosphorus-31 magnetic resonance spectroscopy (31P-MRS) demonstrated lower phosphomonoesters (implying reduced membrane precursors) in first- and multi-episode patients. In contrast, phosphodiesters were elevated mainly in first-episode patients (implying increased membrane breakdown products), whereas results in chronic patients were inconclusive. EFA supplementation trials in chronic patient populations with residual symptoms have produced conflicting results. More consistent results were observed in the early and symptomatic stages of illness, especially when EFA with a high proportion of eicosapentaenoic acid were used. Conclusion: Peripheral blood cell, brain necropsy and 31P-MRS analyses reveal a disturbed lipid biology, suggesting generalized membrane alterations in schizophrenia. 31P-MRS data suggest increased membrane turnover at illness onset and persisting membrane abnormalities in established schizophrenia. Cellular processes regulating membrane lipid metabolism are potential new targets for antipsychotic drugs and might explain the mechanism of action of treatments such as eicosapentaenoic acid.

Relevance: 10.00%

Abstract:

Generalized fractional partial differential equations have found wide application in describing important physical phenomena, such as subdiffusive and superdiffusive processes. However, studies of generalized multi-term time and space fractional partial differential equations are still under development. In this paper, the multi-term time-space Caputo-Riesz fractional advection-diffusion equations (MT-TSCR-FADE) with nonhomogeneous Dirichlet boundary conditions are considered. The multi-term time-fractional derivatives are defined in the Caputo sense, with orders belonging to the intervals [0, 1], [1, 2] and [0, 2]; these cases are called the multi-term time-fractional diffusion terms, wave terms, and mixed diffusion-wave terms, respectively. The space-fractional derivatives are defined as Riesz fractional derivatives. Analytical solutions of the three types of the MT-TSCR-FADE are derived with Dirichlet boundary conditions. Using Luchko's theorem (Acta Math. Vietnam., 1999), we propose new techniques, such as a spectral representation of the fractional Laplacian operator and the equivalence between the fractional Laplacian operator and the Riesz fractional derivative, which enable the derivation of analytical solutions for the multi-term time-space Caputo-Riesz fractional advection-diffusion equations.
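A schematic form of the equations studied, with notation assumed for illustration (the paper's exact formulation, coefficients, and boundary data may differ):

```latex
% Multi-term time-space Caputo-Riesz fractional advection-diffusion equation
% (schematic): a sum of Caputo time derivatives of orders alpha_i on the left,
% Riesz space-fractional diffusion and advection terms on the right.
\sum_{i=1}^{m} a_i \, {}^{C}\!D_t^{\alpha_i} u(x,t)
  = \kappa_{\beta} \, \frac{\partial^{\beta} u(x,t)}{\partial |x|^{\beta}}
  + \kappa_{\gamma} \, \frac{\partial^{\gamma} u(x,t)}{\partial |x|^{\gamma}},
\qquad 0 < x < L, \; t > 0,
```

where the orders $\alpha_i$ lie in $[0,1]$, $[1,2]$ or $[0,2]$ depending on whether the diffusion, wave or mixed diffusion-wave case is meant, $\beta$ and $\gamma$ are the assumed Riesz orders of the diffusion and advection terms, and Dirichlet data are prescribed at $x=0$ and $x=L$.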

Relevance: 10.00%

Abstract:

This paper formulates an edge-based smoothed conforming point interpolation method (ES-CPIM) for solid mechanics using triangular background cells. In the ES-CPIM, a technique for obtaining conforming PIM shape functions (CPIM) is used to create a continuous, piecewise-quadratic displacement field over the whole problem domain. The smoothed strain field is then obtained through a smoothing operation over each smoothing domain associated with the edges of the triangular background cells. The generalized smoothed Galerkin weak form is then used to create the discretized system equations. Numerical studies demonstrate that the ES-CPIM possesses the following good properties: (1) ES-CPIM creates conforming quadratic PIM shape functions and always passes the standard patch test; (2) ES-CPIM produces a quadratic displacement field without introducing any additional degrees of freedom; (3) the results of ES-CPIM are generally of very high accuracy.
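The edge-based smoothing step can be sketched abstractly: the smoothed strain on the domain attached to an edge is an area-weighted average of the compatible strains of the triangles sharing that edge. The scalar "strains" and the one-third area rule below are a simplification for illustration, not the paper's full tensor formulation:

```python
def smooth_strain(edge_to_tris, tri_strain, tri_area):
    """Smoothed strain per edge-based smoothing domain.

    Each triangle contributes one third of its area to the smoothing domain
    of each of its edges; boundary edges have a single adjacent triangle.
    """
    smoothed = {}
    for edge, tris in edge_to_tris.items():
        weights = [tri_area[t] / 3 for t in tris]
        smoothed[edge] = (sum(w * tri_strain[t] for w, t in zip(weights, tris))
                          / sum(weights))
    return smoothed

# Two triangles of equal area sharing interior edge "e1"; "e2" is a boundary edge.
edge_to_tris = {"e1": [0, 1], "e2": [1]}
strains = smooth_strain(edge_to_tris, {0: 1.0, 1: 3.0}, {0: 2.0, 1: 2.0})
```

Averaging strains across cell boundaries in this way is what softens the over-stiff behavior of a compatible formulation without adding degrees of freedom.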

Relevance: 10.00%

Abstract:

This paper presents two novel concepts to enhance the accuracy of damage detection using the Modal Strain Energy based Damage Index (MSEDI) in the presence of noise in the mode shape data. First, the paper presents a sequential curve-fitting technique that reduces the effect of noise on the calculation of the MSEDI more effectively than the two commonly used curve-fitting techniques, namely polynomial and Fourier-series fitting. Second, a probability-based Generalized Damage Localization Index (GDLI) is proposed as a viable improvement to the damage detection process. The study uses a validated ABAQUS finite-element model of a reinforced concrete beam to obtain mode shape data in the undamaged and damaged states. Noise is simulated by adding three levels of random noise (1%, 3%, and 5%) to the mode shape data. Results show that damage detection improves as the number of modes and samples used with the GDLI increases.

Relevance: 10.00%

Abstract:

Road traffic accidents can be reduced by providing early warnings to drivers through wireless ad hoc networks. When a vehicle detects an event that may lead to an imminent accident, it disseminates emergency messages to alert other vehicles that may be endangered. In many existing broadcast-based dissemination schemes, emergency messages may be sent to a large number of vehicles in the area and can be propagated in only one direction. This paper presents a more efficient context-aware multicast protocol that disseminates messages only to the endangered vehicles that may be affected by the emergency event. The endangered vehicles are identified by calculating the interaction among vehicles based on their motion properties. To ensure fast delivery, the dissemination follows a routing path obtained by computing a minimum-delay tree. The multicast protocol uses a generalized approach that can support any arbitrary road topology. Its performance is compared with existing broadcast protocols by simulating chain-collision accidents on a typical highway. Simulation results show that the multicast protocol outperforms the other protocols in terms of reliability, efficiency, and latency.
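A minimum-delay tree of the kind mentioned can be obtained with Dijkstra's algorithm: the shortest-path tree rooted at the detecting vehicle minimizes delivery delay to every endangered vehicle. The vehicles and link delays below are hypothetical:

```python
import heapq

def min_delay_tree(adj, source):
    """Dijkstra shortest-path tree: parent pointers form the dissemination
    tree that minimizes delay from the source to every reachable vehicle.
    adj maps node -> {neighbor: link delay}."""
    dist = {source: 0.0}
    parent = {source: None}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, parent

# Hypothetical vehicle-to-vehicle link delays (ms):
adj = {
    "src": {"a": 5, "b": 20},
    "a": {"src": 5, "b": 4, "c": 12},
    "b": {"src": 20, "a": 4, "c": 7},
    "c": {"a": 12, "b": 7},
}
dist, parent = min_delay_tree(adj, "src")
```

Restricting the tree to the vehicles flagged as endangered (rather than all neighbors) is what distinguishes such a multicast from naive broadcast flooding.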

Relevance: 10.00%

Abstract:

Background: Previous studies have found that high and cold temperatures increase the risk of childhood diarrhea. However, little is known about whether within-day variation of temperature has any effect on childhood diarrhea. Methods: A Poisson generalized linear regression model combined with a distributed lag non-linear model was used to examine the relationship between diurnal temperature range and emergency department admissions for diarrhea among children under five years of age in Brisbane, from 1 January 2003 to 31 December 2009. Results: There was a statistically significant relationship between diurnal temperature range and childhood diarrhea. The effect of diurnal temperature range on childhood diarrhea was greatest at a lag of one day, with a 3% (95% confidence interval: 2%–5%) increase in emergency department admissions per 1°C increment of diurnal temperature range. Conclusion: Within-day variation of temperature appears to be a risk factor for childhood diarrhea. The incidence of childhood diarrhea may increase if climate variability increases as predicted.
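Because the Poisson GLM uses a log link, the reported effect translates into a multiplicative rate ratio of exp(beta) per 1°C of diurnal temperature range. A small sketch (the 5°C extrapolation is illustrative, not a result from the paper):

```python
import math

# In a Poisson GLM, log(expected admissions) is linear in DTR, so a
# coefficient beta corresponds to a rate ratio exp(beta) per 1 deg C.
beta = math.log(1.03)  # the reported 3% increase per 1 deg C at a one-day lag

def rate_ratio(dtr_increase_c):
    return math.exp(beta * dtr_increase_c)

# Rate ratios compound multiplicatively, so a hypothetical 5 deg C rise in
# diurnal temperature range implies roughly a 16% increase in admissions.
rr5 = rate_ratio(5)
```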

Relevance: 10.00%

Abstract:

In their recent review of prior studies examining firm performance, Klapper and Parker (2010, p. 7) conclude that "women entrepreneurs tend to underperform relative to their male counterparts." However, Robb and Watson (2011) argue that much of this prior research is based on inappropriate performance measures and/or does not adequately control (owing to data limitations) for important demographic differences. Given the conflicting findings reported in the literature, the aim of this study is to replicate Robb and Watson (2011) to see whether their findings can be generalized to another geographical location. Our results, based on an analysis of 209 female-owned and 263 male-owned young Australian firms, confirm those of Robb and Watson (2011). We believe this outcome should help dispel the female underperformance myth, which, if left unchallenged, could result in inappropriate policy decisions and, more importantly, could discourage women from establishing new ventures.

Relevance: 10.00%

Abstract:

The relationship between the design process and business systems has been of interest to both practitioners and researchers exploring the numerous opportunities and challenges of this unlikely relationship. Often the relationship is presented as building design thinking capability within an organization, which can be broadly described as the union of design and strategy. Brown (2008) notes that design thinking is "a discipline that uses the designer's sensibility and methods to match people's needs with what is technically feasible and what business strategy can convert into customer value and market opportunities" (p. 1). The value that design thinking brings to an organization is a different way of framing situations and possibilities, doing things, and tackling problems: essentially a cultural transformation of the way it undertakes its business. The work of Martin (2009) has clearly shown the generalized differences between design thinking and business thinking, highlighting many instances in which these differences have been overcome, but also noting the many obstacles to unifying both approaches within an organization. Liedtka (2010) encourages firms to persist in overcoming these barriers, noting that "business strategy desperately needs design ... because design is all about action and business strategy too often turns out to be only about talk ... fewer than 10 percent of new strategies are ever fully executed" (p. 9).

Relevance: 10.00%

Abstract:

This paper presents a combined structure for using real-, complex-, and binary-valued vectors for semantic representation. The theory, implementation, and application of this structure are all significant. For the theory underlying quantum interaction, it is important to develop a core set of mathematical operators that describe systems of information, just as core mathematical operators in quantum mechanics are used to describe the behavior of physical systems. The system described in this paper enables us to compare more traditional quantum mechanical models (which use complex state vectors) alongside more generalized quantum models that use real and binary vectors. The implementation of such a system presents fundamental computational challenges. For large and sometimes sparse datasets, the demands on time and space differ for real, complex, and binary vectors. To accommodate these demands, the Semantic Vectors package has been carefully adapted and can now switch between the different number types comparatively seamlessly. This paper describes the key abstract operations in our semantic vector models and their implementations for real, complex, and binary vectors. We also discuss some of the key questions that arise in quantum interaction and informatics, explaining how the wide availability of modelling options for different number fields will help to investigate these questions.
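The key abstract operations differ across number types mainly in their inner products. A minimal sketch of the three similarity measures (generic definitions for illustration, not the Semantic Vectors package API):

```python
import math

def real_sim(u, v):
    """Cosine similarity for real-valued semantic vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def complex_sim(u, v):
    """Magnitude of the normalized Hermitian inner product, the natural
    overlap measure for complex state vectors in quantum-style models."""
    inner = sum(a * b.conjugate() for a, b in zip(u, v))
    norms = (math.sqrt(sum(abs(a) ** 2 for a in u)) *
             math.sqrt(sum(abs(b) ** 2 for b in v)))
    return abs(inner) / norms

def binary_sim(u, v):
    """1 minus normalized Hamming distance for binary vectors."""
    return 1 - sum(a != b for a, b in zip(u, v)) / len(u)
```

Exposing one similarity operation per number type behind a common interface is the kind of abstraction that lets a single package switch between real, complex, and binary models comparatively seamlessly.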