975 results for tag data structure


Relevance:

80.00%

Publisher:

Abstract:

Virtual prototyping emerges as a new technology to replace existing physical prototypes for product evaluation, which are costly and time-consuming to manufacture. Virtualization technology allows engineers and ergonomists to perform virtual builds and different ergonomic analyses on a product. Digital Human Modelling (DHM) software packages such as Siemens Jack often integrate with CAD systems to provide a virtual environment that allows investigation of operator and product compatibility. Although the integration between DHM and CAD systems allows for the ergonomic analysis of anthropometric design, musculoskeletal multi-body modelling software packages such as the AnyBody Modelling System (AMS) are required to support physiological design. They provide muscular force analysis, estimate human musculoskeletal strain and help address human comfort assessment. However, the independent characteristics of the modelling systems Jack and AMS constrain engineers and ergonomists in conducting a complete ergonomic analysis. AMS is a stand-alone programming system with no capability to integrate into CAD environments. Jack provides CAD-integrated human-in-the-loop capability, but does not consider musculoskeletal activity. Consequently, engineers and ergonomists need to perform many redundant tasks during product and process design. Moreover, the existing biomechanical model in AMS uses a simplified estimation of body proportions, based on a scaling approach derived from segment mass ratios. This is insufficient to represent user populations in an anthropometrically correct way in AMS. In addition, sub-models are derived from different sources of morphologic data and are therefore anthropometrically inconsistent. Therefore, an interface between the biomechanical AMS and the virtual human model Jack was developed to integrate a musculoskeletal simulation with Jack posture modelling. This interface provides direct data exchange between the two human models, based on a consistent data structure and a common body model. The study assesses kinematic and biomechanical model characteristics of Jack and AMS, and defines an appropriate biomechanical model. The information content for interfacing the two systems is defined and a protocol is identified. The interface program is developed and implemented in Tcl and Jack-script (Python), and interacts with the AMS console application to operate AMS procedures.
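To make the data-exchange idea concrete, here is a minimal Python sketch of the kind of hand-off such an interface performs: a posture expressed against a common body model is written to a shared format, and a console invocation for the musculoskeletal system is assembled. The joint names, file format, and console arguments are hypothetical placeholders, not the actual Jack or AMS API.

```python
# Sketch only: joint names, the JSON exchange format, and the console command
# below are hypothetical illustrations, not the real Jack or AMS interface.
import json

# Common body model: one shared naming scheme for joint angles, so both
# systems interpret the exchanged posture identically.
COMMON_JOINTS = ["hip_flexion", "knee_flexion", "ankle_flexion",
                 "shoulder_flexion", "elbow_flexion"]

def export_posture(posture, path):
    """Write a posture dict (joint name -> angle in degrees) in the common format."""
    record = {name: posture.get(name, 0.0) for name in COMMON_JOINTS}
    with open(path, "w") as fh:
        json.dump(record, fh, indent=2)

def console_command(posture_file):
    """Assemble a placeholder console invocation that would load the posture."""
    return ["ams_console", "--posture", posture_file]

if __name__ == "__main__":
    export_posture({"hip_flexion": 35.0, "knee_flexion": 60.0}, "posture.json")
    print(" ".join(console_command("posture.json")))
```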

Relevance:

80.00%

Publisher:

Abstract:

Traditional crash prediction models, such as generalized linear regression models, are incapable of taking into account the multilevel data structure that is pervasive in crash data. Disregarding the possible within-group correlations can produce models that give unreliable and biased estimates of the unknowns. This study proposes a multilevel hierarchy, viz. (Geographic region level – Traffic site level – Traffic crash level – Driver-vehicle unit level – Vehicle-occupant level) × Time level, to establish a general form of multilevel data structure in traffic safety analysis. To properly model the potential cross-group heterogeneity due to the multilevel data structure, a framework of Bayesian hierarchical models that explicitly specify the multilevel structure and yield correct parameter estimates is introduced and recommended. The proposed method is illustrated in an individual-severity analysis of intersection crashes using the Singapore crash records. The study proved the importance of accounting for the within-group correlations and demonstrated the flexibility and effectiveness of the Bayesian hierarchical method in modelling the multilevel structure of traffic crash data.
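As a concrete illustration of the multilevel structure, the following Python sketch simulates a two-level slice of such a hierarchy (regions containing sites containing observations) with random intercepts at each grouping level. The covariate, coefficient values, and the reduction of the full hierarchy to two levels are assumptions for illustration, not the study's model.

```python
# Simulated illustration of nested crash data with group-level random effects.
# Values and the two-level simplification are assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_sites, n_obs = 3, 12, 500

# Nested membership: each site belongs to a region, each observation to a site.
site_region = rng.integers(0, n_regions, size=n_sites)
obs_site = rng.integers(0, n_sites, size=n_obs)

# Random intercepts at the region and site levels: the cross-group
# heterogeneity a hierarchical model estimates rather than ignores.
u_region = rng.normal(0.0, 0.5, size=n_regions)
u_site = rng.normal(0.0, 0.3, size=n_sites)

speed = rng.normal(60, 10, size=n_obs)   # example covariate (km/h), assumed
beta0, beta_speed = -4.0, 0.05           # fixed effects, assumed

# Hierarchical logistic linear predictor for a severe-injury outcome:
# logit(p_i) = beta0 + beta_speed * speed_i + u_region[r(i)] + u_site[s(i)]
eta = beta0 + beta_speed * speed + u_region[site_region[obs_site]] + u_site[obs_site]
p_severe = 1.0 / (1.0 + np.exp(-eta))
severe = rng.binomial(1, p_severe)

print("simulated severe-injury rate:", severe.mean())
```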

Relevance:

80.00%

Publisher:

Abstract:

For robots to use language effectively, they need to refer to combinations of existing concepts, as well as concepts that have been directly experienced. In this paper, we introduce the term generative grounding to refer to the establishment of shared meaning for concepts referred to using relational terms. We investigated a spatial domain, which is both experienced and constructed using mobile robots with cognitive maps. The robots, called Lingodroids, established lexicons for locations, distances, and directions through structured conversations called where-are-we, how-far, what-direction, and where-is-there conversations. Distributed concept construction methods were used to create flexible concepts, based on a data structure called a distributed lexicon table. The lexicon was extended from words for locations, termed toponyms, to words for the relational terms of distances and directions. New toponyms were then learned using these relational operators. Effective grounding was tested by using the new toponyms as targets for go-to games, in which the robots independently navigated to named locations. The studies demonstrate how meanings can be extended from grounding in shared physical experiences to grounding in constructed cognitive experiences, giving the robots a language that refers to their direct experiences, and to constructed worlds that are beyond the here-and-now.
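The following Python sketch illustrates one plausible form of a lexicon table: usage counts over (word, concept) pairs that support both production (best word for a concept) and interpretation (best concept for a word). The class, method names, and grid-cell concept encoding are assumptions, not the Lingodroids implementation.

```python
# Illustrative lexicon table: (word, concept) association counts accumulated
# through conversations. Names and the concept binning are assumptions.
from collections import defaultdict

class LexiconTable:
    def __init__(self):
        self.counts = defaultdict(int)   # (word, concept) -> association count

    def observe(self, word, concept):
        """Record one conversational use of a word for a concept."""
        self.counts[(word, concept)] += 1

    def word_for(self, concept):
        """Best word for a concept (production)."""
        candidates = {w: c for (w, k), c in self.counts.items() if k == concept}
        return max(candidates, key=candidates.get) if candidates else None

    def concept_for(self, word):
        """Best concept for a word (interpretation)."""
        candidates = {k: c for (w, k), c in self.counts.items() if w == word}
        return max(candidates, key=candidates.get) if candidates else None

lexicon = LexiconTable()
lexicon.observe("kuzo", (3, 5))   # grid cell (3, 5) named "kuzo" in a conversation
lexicon.observe("kuzo", (3, 5))
lexicon.observe("rije", (7, 2))
print(lexicon.word_for((3, 5)), lexicon.concept_for("rije"))
```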

Relevance:

80.00%

Publisher:

Abstract:

Service robots that operate in human environments will accomplish tasks most efficiently and least disruptively if they have the capability to mimic and understand the motion patterns of the people in their workspace. This work demonstrates how a robot can create a human-centric navigational map online, and shows that this map reflects changes in the environment that trigger altered motion patterns of people. An RGBD sensor mounted on the robot is used to detect and track people moving through the environment. The trajectories are clustered online and organised into a tree-like probabilistic data structure which can be used to detect anomalous trajectories. A costmap is reverse-engineered from the clustered trajectories and can then inform the robot's onboard planning process. Results show that the resulting paths taken by the robot mimic expected human behaviour and allow the robot to respond to altered human motion behaviours in the environment.
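A minimal Python sketch of the general idea follows: trajectories discretised to grid cells are stored in a tree with visit counts, and a costmap is derived so that frequently used cells become cheap for the planner. The tree layout and the cost rule are illustrative assumptions, not the paper's exact clustering or costmap construction.

```python
# Illustrative trajectory tree and derived costmap; structure and cost rule
# are assumptions, not the paper's method.
import numpy as np

class TrajectoryNode:
    def __init__(self):
        self.count = 0
        self.children = {}            # next grid cell -> TrajectoryNode

def insert(root, trajectory):
    """Add one observed trajectory (a sequence of grid cells) to the tree."""
    node = root
    for cell in trajectory:
        node = node.children.setdefault(cell, TrajectoryNode())
        node.count += 1

def costmap_from(root, shape):
    """Cells that people traverse often get low cost for the robot's planner."""
    visits = np.zeros(shape)
    stack = [root]
    while stack:
        node = stack.pop()
        for cell, child in node.children.items():
            visits[cell] += child.count
            stack.append(child)
    return 1.0 / (1.0 + visits)

root = TrajectoryNode()
insert(root, [(0, 0), (0, 1), (0, 2), (1, 2)])
insert(root, [(0, 0), (0, 1), (1, 1)])
print(costmap_from(root, (3, 3)))
```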

Relevance:

80.00%

Publisher:

Abstract:

Data structures such as k-D trees and hierarchical k-means trees perform very well in approximate k-nearest-neighbour matching, but are only marginally more effective than linear search when performing exact matching in high-dimensional image descriptor data. This paper presents several improvements to linear search that allow it to outperform existing methods, and recommends two approaches to exact matching. The first method reduces the number of operations by evaluating the distance measure in order of significance of the query dimensions and terminating when the partial distance exceeds the search threshold. This method does not require preprocessing and significantly outperforms existing methods. The second method improves query speed further by presorting the data using a data structure called d-D sort. The order information is used as a priority queue to reduce the time taken to find the exact match and to restrict the range of data searched. The d-D sort structure is very simple to implement, does not require any parameter tuning, and takes significantly less time to construct than the best-performing tree structure; data can also be added to it relatively efficiently.
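A Python sketch of the first approach as described above: an exact nearest-neighbour linear search that accumulates squared differences dimension by dimension and abandons a candidate as soon as the partial distance exceeds the best distance found so far. Ordering dimensions by data variance is an assumed proxy for the paper's significance ordering.

```python
# Exact nearest neighbour with partial-distance early termination.
# The variance-based dimension ordering is an assumption for illustration.
import numpy as np

def exact_nn_partial_distance(data, query):
    # Order dimensions by decreasing variance so large differences appear early.
    order = np.argsort(-data.var(axis=0))
    best_idx, best_dist = -1, np.inf
    for i, row in enumerate(data):
        partial = 0.0
        for d in order:
            diff = row[d] - query[d]
            partial += diff * diff
            if partial >= best_dist:      # cannot beat the current best: stop early
                break
        else:
            best_idx, best_dist = i, partial
    return best_idx, np.sqrt(best_dist)

rng = np.random.default_rng(1)
descriptors = rng.random((2_000, 128)).astype(np.float32)   # SIFT-like vectors
q = rng.random(128).astype(np.float32)
print(exact_nn_partial_distance(descriptors, q))
```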

Relevance:

80.00%

Publisher:

Abstract:

The count-min sketch is a useful data structure for recording and estimating the frequency of string occurrences, such as passwords, in sub-linear space with high accuracy. However, it cannot be used to draw conclusions about groups of strings that are similar, for example strings that are close in Hamming distance. This paper introduces a variant of the count-min sketch which allows estimating counts within a specified Hamming distance of the queried string. Like the original sketch, this variant can be used to prevent users from choosing popular passwords, but it also allows for a more efficient method of analysing password statistics.
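For reference, here is a plain count-min sketch in Python together with a naive Hamming-ball query that simply sums point estimates over all single-substitution neighbours; this enumeration is the kind of inefficiency the proposed variant avoids. Hash choice, parameters, and the distance-1 restriction are illustrative assumptions.

```python
# Standard count-min sketch plus a naive distance-1 Hamming query (baseline,
# not the paper's variant). Width, depth, and hashing are illustrative.
import hashlib

class CountMinSketch:
    def __init__(self, width=2048, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, item):
        digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, item):
        for row in range(self.depth):
            self.table[row][self._index(row, item)] += 1

    def estimate(self, item):
        return min(self.table[row][self._index(row, item)] for row in range(self.depth))

    def estimate_hamming_ball(self, item, alphabet="abcdefghijklmnopqrstuvwxyz0123456789"):
        """Naive query: sum estimates over all single-substitution neighbours."""
        total = self.estimate(item)
        for i, ch in enumerate(item):
            for c in alphabet:
                if c != ch:
                    total += self.estimate(item[:i] + c + item[i + 1:])
        return total

cms = CountMinSketch()
for pw in ["password1", "password1", "passward1", "letmein"]:
    cms.add(pw)
print(cms.estimate("password1"), cms.estimate_hamming_ball("password1"))
```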

Relevance:

80.00%

Publisher:

Abstract:

Background: Multilevel and spatial models are being increasingly used to obtain substantive information on area-level inequalities in cancer survival. Multilevel models assume independent geographical areas, whereas spatial models explicitly incorporate geographical correlation, often via a conditional autoregressive prior. However, the relative merits of these methods for large population-based studies have not been explored. Using a case-study approach, we report on the implications of using multilevel and spatial survival models to study geographical inequalities in all-cause survival. Methods: Multilevel discrete-time and Bayesian spatial survival models were used to study geographical inequalities in all-cause survival for a population-based colorectal cancer cohort of 22,727 cases aged 20–84 years diagnosed during 1997–2007 from Queensland, Australia. Results: Both approaches were viable on this large dataset and produced similar estimates of the fixed effects. After adding area-level covariates, the between-area variability in survival using multilevel discrete-time models was no longer significant. Spatial inequalities in survival were also markedly reduced after adjusting for aggregated area-level covariates. Only the multilevel approach, however, provided an estimate of the contribution of geographical variation to the total variation in survival between individual patients. Conclusions: With little difference observed between the two approaches in the estimation of fixed effects, multilevel models should be favoured if there is a clear hierarchical data structure and measuring the independent impact of individual- and area-level effects on survival differences is of primary interest. Bayesian spatial analyses may be preferred if spatial correlation between areas is important and if the priority is to assess small-area variations in survival and map spatial patterns. Both approaches can be readily fitted to geographically enabled survival data from international settings.
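To illustrate the data side of the multilevel discrete-time approach, the sketch below expands survival records into person-period rows (one row per follow-up year, with the event flagged only in the final year), keeping an area identifier for the multilevel grouping. Field names and the yearly time unit are assumptions, not the study's actual data layout.

```python
# Person-period expansion for a discrete-time survival model.
# Field names and the yearly time unit are illustrative assumptions.
def person_period(records, max_years=5):
    """records: list of dicts with 'id', 'area', 'years_followed', 'died'."""
    rows = []
    for r in records:
        for year in range(1, min(r["years_followed"], max_years) + 1):
            rows.append({
                "id": r["id"],
                "area": r["area"],   # area-level grouping for the multilevel model
                "year": year,
                "event": int(r["died"] and year == r["years_followed"]),
            })
    return rows

cohort = [
    {"id": 1, "area": "A", "years_followed": 3, "died": True},
    {"id": 2, "area": "B", "years_followed": 5, "died": False},
]
for row in person_period(cohort):
    print(row)
```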

Relevance:

80.00%

Publisher:

Abstract:

A forest of quadtrees is a refinement of a quadtree data structure that is used to represent planar regions. A forest of quadtrees provides space savings over regular quadtrees by concentrating vital information. The paper presents some of the properties of a forest of quadtrees and studies the storage requirements for the case in which a single 2^m × 2^m region is equally likely to occur in any position within a 2^n × 2^n image. Space and time efficiency are investigated for the forest-of-quadtrees representation as compared with the quadtree representation for various cases.
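For context, the following Python sketch builds the plain region quadtree that the forest-of-quadtrees representation refines: uniform blocks become leaves and mixed blocks are split into four quadrants. The forest refinement itself is not reproduced here.

```python
# Plain region quadtree over a binary image (the base structure only).
import numpy as np

def build_quadtree(img):
    """Return 0/1 for a uniform block, or a dict of four child quadrants."""
    if img.min() == img.max():
        return int(img.flat[0])
    h, w = img.shape
    h2, w2 = h // 2, w // 2
    return {
        "nw": build_quadtree(img[:h2, :w2]), "ne": build_quadtree(img[:h2, w2:]),
        "sw": build_quadtree(img[h2:, :w2]), "se": build_quadtree(img[h2:, w2:]),
    }

region = np.zeros((8, 8), dtype=int)
region[2:6, 2:6] = 1                  # a 4x4 square inside an 8x8 image
print(build_quadtree(region))
```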

Relevance:

80.00%

Publisher:

Abstract:

A variety of data structures such as the inverted file, multi-lists, quad tree, k-d tree, range tree, polygon tree, quintary tree, multidimensional tries, segment tree, doubly chained tree, the grid file, d-fold tree, super B-tree, Multiple Attribute Tree (MAT), etc. have been studied for multidimensional searching and related problems. Physical database organization, which is an important application of multidimensional searching, is traditionally and mostly handled by employing the inverted file. This study proposes the MAT data structure for bibliographic file systems, illustrating the superiority of the MAT data structure over the inverted file. Both methods are compared in terms of preprocessing, storage, and query costs. A worst-case complexity analysis of both methods, for a partial match query, is carried out in two cases: (a) when the directory resides in main memory, and (b) when the directory resides in secondary memory. In both cases, the MAT data structure is shown to be more efficient than the inverted file method. Arguments are given to illustrate the superiority of the MAT data structure in the average case as well. An efficient adaptation of the MAT data structure, which exploits the special features of the MAT structure and of bibliographic files, is proposed for bibliographic file systems. In this adaptation, suitable techniques for fixing and ranking the attributes for the MAT data structure are proposed. Conclusions and proposals for future research are presented.
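The inverted-file baseline discussed above can be sketched in a few lines of Python: one posting list per (attribute, value) pair, with a partial-match query answered by intersecting the posting lists of the specified attributes. The MAT structure itself is not reproduced; the record fields are illustrative.

```python
# Inverted file for bibliographic records with partial-match queries.
# Fields and records are illustrative; the MAT structure is not shown.
from collections import defaultdict

class InvertedFile:
    def __init__(self):
        self.postings = defaultdict(set)   # (attribute, value) -> record ids

    def add(self, rec_id, record):
        for attr, value in record.items():
            self.postings[(attr, value)].add(rec_id)

    def partial_match(self, **specified):
        """Intersect posting lists of the attributes that are specified."""
        lists = [self.postings[(a, v)] for a, v in specified.items()]
        return set.intersection(*lists) if lists else set()

index = InvertedFile()
index.add(1, {"author": "Knuth", "keyword": "searching", "year": 1973})
index.add(2, {"author": "Bentley", "keyword": "searching", "year": 1975})
print(index.partial_match(keyword="searching", year=1975))    # -> {2}
```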

Relevance:

80.00%

Publisher:

Abstract:

Topology-based methods have been successfully used for the analysis and visualization of piecewise-linear functions defined on triangle meshes. This paper describes a mechanism for extending these methods to piecewise-quadratic functions defined on triangulations of surfaces. Each triangular patch is tessellated into monotone regions, so that existing algorithms for computing topological representations of piecewise-linear functions may be applied directly to the piecewise-quadratic function. In particular, the tessellation is used for computing the Reeb graph, a topological data structure that provides a succinct representation of level sets of the function.
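As a small, runnable illustration of the kind of topological computation involved, the Python sketch below reports the join-tree events (maxima and merges of superlevel-set components) of a piecewise-linear function on a mesh using a union-find sweep. The paper's quadratic-patch tessellation and Reeb-graph construction are not reproduced here.

```python
# Union-find sweep over mesh vertices in descending function value: a vertex
# with no higher processed neighbours starts a component (maximum); a vertex
# whose higher neighbours lie in several components merges them.
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def join_tree_events(values, edges):
    """values[i]: f at vertex i; edges: mesh edges. Reports maxima and merges."""
    neighbours = {v: set() for v in range(len(values))}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    ds, seen, events = DisjointSet(len(values)), set(), []
    for v in sorted(range(len(values)), key=lambda u: -values[u]):
        comps = {ds.find(u) for u in neighbours[v] if u in seen}
        if not comps:
            events.append(("maximum", v))
        elif len(comps) > 1:
            events.append(("merge", v))
        for c in comps:
            ds.union(c, v)
        seen.add(v)
    return events

# A tiny mesh of two triangles sharing an edge: two local maxima merge below.
vals = [4.0, 3.5, 1.0, 0.5]
tri_edges = [(0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
print(join_tree_events(vals, tri_edges))
```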

Relevance:

80.00%

Publisher:

Abstract:

The methodology of designing normative terminological products has been described in several guides and international standards. However, this methodology is not always applicable to designing translation-oriented terminological products, which differ greatly from normative ones in terms of volume, function, and primary target group. This dissertation has three main goals. The first is to revise and enrich the stock of concepts and terms required in the process of designing an LSP dictionary for translators. The second is to detect, classify, and describe the factors which determine the characteristics of an LSP dictionary for translators and affect the process of its compilation. The third is to provide recommendations on different aspects of dictionary design. The study is based on an analysis of dictionaries, dictionary reviews, literature on translation-oriented lexicography, material from several dictionary projects, and the results of questionnaires. A thorough analysis of the concept of a dictionary helped us to compile a list of the designable characteristics of a dictionary. These characteristics include target group, function, links to other resources, data carrier, list of lemmata, information about the lemmata, composition of other parts of the dictionary, compression of the data, structure of the data, and access structure. The factors which determine the characteristics of a dictionary have been divided into those derived from the needs of the intended users and those reflecting the restrictions of the real world (e.g. characteristics of the data carrier and organizational factors) and attitudes (e.g. traditions and scientific paradigms). The designer of a dictionary is advised to take the intended users' needs as the starting point and to aim at finding the best compromise between the conflicting factors. When designing an LSP dictionary, much depends on the intended users' level of knowledge of the domain in question as well as their general linguistic competence, LSP competence, and lexicographic competence. This dissertation discusses the needs of LSP translators and the role of the dictionary in the process of translating an LSP text. It also emphasizes the importance of planning lexicographic products and activities, and addresses many practical aspects of dictionary design.

Relevance:

80.00%

Publisher:

Abstract:

There is a relative absence of sociological and cultural research on how people deal with the death of a family member in contemporary Western societies. Research on this topic has been dominated by the experts of psychology, psychiatry and therapy, who mention the social context only in passing, if at all. This gives the impression that the white Westerner's bereavement experience is a purely psychological phenomenon, an inner journey, which follows a natural, universal path. Yet, as Tony Walter (1999) states, ignoring the influence of culture not only impoverishes the understanding of those who work with bereaved people, but also impoverishes sociology and cultural studies by excluding from their domain a key social phenomenon. This study explores the cultural dimension of grief through narratives told by fifteen recently bereaved Finnish women. Focusing on one sex only, the study rests on the assumption of the gendered nature of the bereavement experience. However, the aim of the study is not to pinpoint gender differences in grief and mourning, but to shed light on women's ways of dealing with the loss of a loved one in a social context. Furthermore, the study focuses on a certain kind of loss: the death of an elderly parent. Due to the growth in life expectancy, this has presumably become the most typical type of bereavement in contemporary, ageing societies. Most of the population will face the death of a parent as they reach the middle years of the life course. The data for this study were gathered through interviews, in which the interviewees were invited to tell a narrative of their bereavement. Narrative constitutes a central concept in this study. It refers to a particular form of talk, which is organised around consequential events. But there are also other, deeper layers to this concept. Several scholars see narratives as the most important way in which we make sense of experience. Personal narratives provide rich material for mapping the interconnections between individual and culture. As a form of thought, narrative marries singular circumstances with shared expectations and understandings that are learned through participation in a specific culture (Garro & Mattingly 2000). This study attempts to capture the cultural dimension of narrative with the concept of the 'script', which originates in cognitive science (Schank & Abelson 1977) and has recently been adopted in narratology (Herman 2002). A script refers to a data structure that informs how events usually unfold in certain situations. Scripts are used in interpreting events and representing them verbally to others. They are based on dominant forms of knowledge that vary according to time and place. The questions posed in this study are the following. What kinds of experiences do bereaved daughters narrate? What kinds of cultural scripts do they employ as they attempt to make sense of these experiences? How are these scripts used in their narratives? It became apparent that for most of the daughters interviewed in this study, the single most important part of the bereavement narrative was to form an account of how and why the parent died. They produced lengthy and detailed descriptions of the last stage of a parent's life, in contrast with the rest of the interview. These stories took their start from a turn in the parent's physical condition from which the dying process could, in retrospect, be seen to have started, and which often took place several years before the death.
In addition, the daughters talked about their grief reactions and how they had adjusted to a life without the deceased parent. The ways in which the last stage of life was told reflect not only the characteristic features of late modernity but also processes of marginalisation and exclusion. The revivalist script and the medical script, identified by Clive Seale as the dominant, competing models for dying well in late modern societies, were not widely utilised in the narratives. They could only be applied in situations in which the parent had died from cancer and at a somewhat younger age than average. A death that took place in deep old age was told in a different way. The lack of positive models for narrating this kind of death is acknowledged in the study. This can be seen as a symptom of the societal devaluing of the deaths of older people, and it also affects the daughters' accounts of their grief. Several daughters told about situations in which their loss, although subjectively experienced, was nonetheless denied by other people.

Relevance:

80.00%

Publisher:

Abstract:

Template matching is concerned with measuring the similarity between the patterns of two objects. This paper proposes a memory-based reasoning approach for pattern recognition of binary images with a large template set. Memory-based reasoning seems intrinsically to require a large database, and some binary image recognition problems inherently need large template sets: the recognition of Chinese characters, for example, needs thousands of templates. The proposed algorithm is based on the Connection Machine, the most massively parallel machine to date, and uses a multiresolution method to search for the matching template. The approach uses the pyramid data structure for the multiresolution representation of the templates and the input image pattern. For a given binary image, it scans the template pyramid searching for the match. A binary image of N × N pixels can be matched in O(log N) time by our algorithm, independently of the number of templates. Implementation of the proposed scheme is described in detail.
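A sequential Python sketch of the coarse-to-fine idea follows: binary templates and the query are reduced into pyramids, candidates are pruned at the coarse level, and the survivors are compared at full resolution. The averaging reduction, the pruning rule, and the parameters are assumptions; the Connection Machine parallelisation that yields the O(log N) behaviour is not reproduced.

```python
# Coarse-to-fine template matching with image pyramids (sequential sketch).
import numpy as np

def reduce2(img):
    """Halve resolution by averaging each 2x2 block."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid(img, levels=1):
    out = [np.asarray(img, dtype=float)]
    for _ in range(levels):
        out.append(reduce2(out[-1]))
    return out

def match(templates, image, coarse_keep=3):
    img_pyr = pyramid(image)
    tmpl_pyrs = [pyramid(t) for t in templates]
    # Prune candidates at the coarse level...
    coarse = [np.abs(p[-1] - img_pyr[-1]).sum() for p in tmpl_pyrs]
    survivors = np.argsort(coarse)[:coarse_keep]
    # ...then decide among the survivors at full resolution (Hamming distance).
    return min(survivors, key=lambda i: int(np.sum(templates[i] != image)))

rng = np.random.default_rng(2)
templates = [(rng.random((8, 8)) > 0.5).astype(int) for _ in range(100)]
query = templates[42].copy()
query[0, 0] ^= 1                       # one flipped pixel
print(match(templates, query))         # expected: 42
```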

Relevance:

80.00%

Publisher:

Abstract:

We present a randomized and a deterministic data structure for maintaining a dynamic family of sequences under equality tests of pairs of sequences and creations of new sequences by joining or splitting existing sequences. Both data structures support equality tests in O(1) time. The randomized version supports new sequence creations in O(log^2 n) expected time, where n is the length of the sequence created. The deterministic solution supports sequence creations in O(log n (log m log* m + log n)) time for the m-th operation.
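The randomized flavour of such a structure can be sketched in Python with polynomial fingerprints: sequences are immutable nodes carrying a length and a fingerprint, a join creates a single new node, and an equality test compares fingerprints in constant time (correct with high probability, since collisions are possible). The split operation and the exact bounds of the paper's structures are not reproduced.

```python
# Fingerprinted sequences: O(1)-node joins and probabilistic O(1) equality.
# This is an illustrative sketch, not the paper's data structure.
import random

MOD = (1 << 61) - 1
BASE = random.randrange(2, MOD)

class Seq:
    __slots__ = ("length", "fingerprint", "power", "left", "right", "symbol")
    def __init__(self, length, fingerprint, power, left=None, right=None, symbol=None):
        self.length, self.fingerprint, self.power = length, fingerprint, power
        self.left, self.right, self.symbol = left, right, symbol

def atom(symbol):
    """A one-element sequence."""
    return Seq(1, hash(symbol) % MOD, BASE, symbol=symbol)

def join(a, b):
    """Concatenate two sequences without copying their contents."""
    return Seq(a.length + b.length,
               (a.fingerprint + a.power * b.fingerprint) % MOD,
               (a.power * b.power) % MOD,
               left=a, right=b)

def equal(a, b):
    """Equal with high probability if fingerprints and lengths match."""
    return a.length == b.length and a.fingerprint == b.fingerprint

x = join(atom("a"), join(atom("b"), atom("c")))       # "abc"
y = join(join(atom("a"), atom("b")), atom("c"))       # "abc", built differently
print(equal(x, y), equal(x, join(atom("a"), atom("b"))))   # True False
```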

Relevance:

80.00%

Publisher:

Abstract:

The Morse-Smale complex is a useful topological data structure for the analysis and visualization of scalar data. This paper describes an algorithm that processes all mesh elements of the domain in parallel to compute the Morse-Smale complex of large two-dimensional data sets at interactive speeds. We employ a reformulation of the Morse-Smale complex using Forman's Discrete Morse Theory and achieve scalability by computing the discrete gradient using local accesses only. We also introduce a novel approach to merging gradient paths that ensures accurate geometry of the computed complex. We demonstrate that our algorithm performs well both in multicore environments and on massively parallel architectures such as the GPU.
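To give a flavour of what "local accesses only" buys, the Python sketch below classifies each interior vertex of a 2D scalar grid from its 8-neighbour ring alone, by counting sign changes of f(neighbour) - f(vertex). This is a simple piecewise-linear criticality test on an assumed grid neighbourhood, not Forman's discrete Morse construction or the paper's parallel algorithm.

```python
# Local critical-point test: each vertex is classified from its 8-neighbour
# ring only (0 sign changes: extremum, 2: regular, 4 or more: saddle).
import numpy as np

RING = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def classify(f, i, j):
    signs = [np.sign(f[i + di, j + dj] - f[i, j]) for di, dj in RING]
    changes = sum(signs[k] != signs[(k + 1) % 8] for k in range(8))
    if changes == 0:
        return "maximum" if signs[0] < 0 else "minimum"
    return "saddle" if changes >= 4 else "regular"

# Two equal Gaussian bumps plus a small tilt (the tilt breaks the exact ties a
# symmetric grid would otherwise produce): the test should report two maxima
# near (+/-1, 0) and a saddle near (0, 0).
x, y = np.meshgrid(np.linspace(-2, 2, 65), np.linspace(-2, 2, 65))
f = (np.exp(-((x - 1) ** 2 + y ** 2)) + np.exp(-((x + 1) ** 2 + y ** 2))
     + 0.01 * x + 0.02 * y)
critical = [(classify(f, i, j), (i, j))
            for i in range(1, 64) for j in range(1, 64)
            if classify(f, i, j) != "regular"]
print(critical)
```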