987 results for hypotheses


Relevance: 10.00%

Publisher:

Abstract:

Object detection and recognition are important problems in computer vision. The challenges of these problems come from the presence of noise, background clutter, large within-class variations of the object class and limited training data. In addition, the computational complexity of the recognition process is a concern in practice. In this thesis, we propose one approach to handle the problem of detecting an object class that exhibits large within-class variations, and a second approach to speed up the classification processes. In the first approach, we show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be jointly solved using a multiplicative form of two kernel functions. One kernel measures similarity for foreground-background classification. The other kernel accounts for latent factors that control within-class variation and implicitly enables feature sharing among foreground training samples. For applications where explicit parameterization of the within-class states is unavailable, a nonparametric formulation of the kernel can be constructed with a proper foreground distance/similarity measure. Detector training is accomplished via standard Support Vector Machine learning. The resulting detectors are tuned to specific variations in the foreground class. They also serve to evaluate hypotheses of the foreground state. When the image masks for foreground objects are provided in training, the detectors can also produce object segmentation. Methods for generating a representative sample set of detectors are proposed that can enable efficient detection and tracking. In addition, because individual detectors verify hypotheses of foreground state, they can also be incorporated in a tracking-by-detection framework to recover foreground state in image sequences. To run the detectors efficiently at the online stage, an input-sensitive speedup strategy is proposed to select the most relevant detectors quickly. The proposed approach is tested on data sets of human hands, vehicles and human faces. On all data sets, the proposed approach achieves improved detection accuracy over the best competing approaches. In the second part of the thesis, we formulate a filter-and-refine scheme to speed up recognition processes. The binary outputs of the weak classifiers in a boosted detector are used to identify a small number of candidate foreground state hypotheses quickly via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the Face Recognition Grand Challenge (FRGC) version 2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and estimation of the view angle on a multi-pose vehicle data set. On all data sets, our approach is at least five times faster than simply evaluating all foreground state hypotheses, with virtually no loss in classification accuracy.
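
As an illustration of the multiplicative two-kernel construction described above, here is a minimal Python/NumPy sketch; the function names, the choice of Gaussian RBFs for both factors and the bandwidth parameters are assumptions for illustration, not the thesis' actual formulation. Because both factors are positive semi-definite kernels, their product is also a valid kernel, which is what permits standard SVM training (e.g., via a precomputed Gram matrix).

```python
import numpy as np

def rbf(u, v, gamma):
    """Gaussian RBF kernel between two feature vectors."""
    d = np.asarray(u, dtype=float) - np.asarray(v, dtype=float)
    return np.exp(-gamma * np.dot(d, d))

def multiplicative_kernel(x1, theta1, x2, theta2,
                          gamma_fg=0.5, gamma_state=2.0):
    """Product of a foreground-similarity kernel and a kernel over the
    latent within-class state (e.g., pose).  Both factors are positive
    semi-definite, so the product can be used directly in standard SVM
    training (e.g., sklearn's SVC with kernel='precomputed')."""
    k_fg = rbf(x1, x2, gamma_fg)                 # foreground/background similarity
    k_state = rbf(theta1, theta2, gamma_state)   # within-class (pose) similarity
    return k_fg * k_state
```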

Relevance: 10.00%

Publisher:

Abstract:

A common design of an object recognition system has two steps, a detection step followed by a foreground within-class classification step. For example, consider face detection by a boosted cascade of detectors followed by face ID recognition via one-vs-all (OVA) classifiers. Another example is human detection followed by pose recognition. Although the detection step can be quite fast, the foreground within-class classification process can be slow and becomes a bottleneck. In this work, we formulate a filter-and-refine scheme, where the binary outputs of the weak classifiers in a boosted detector are used to identify a small number of candidate foreground state hypotheses quickly via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the FRGC V2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and view angle estimation on a multi-view vehicle data set. On all data sets, our approach has comparable accuracy and is at least five times faster than the brute-force approach.
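
A minimal sketch of the filter-and-refine scheme, assuming the weak classifiers are callables returning signed scores and that each candidate foreground-state hypothesis has a stored binary code word; the names and the fixed top-k selection rule are illustrative, not the paper's exact procedure.

```python
import numpy as np

def weak_bits(x, weak_classifiers):
    """Binary response vector of the boosted detector's weak classifiers."""
    return np.array([1 if h(x) > 0 else 0 for h in weak_classifiers],
                    dtype=np.uint8)

def filter_and_refine(x, weak_classifiers, hypothesis_codes, refine_fn,
                      k=10, weights=None):
    """Filter: rank stored hypotheses by (weighted) Hamming distance
    between their code words (shape (n_hypotheses, n_weak)) and the
    query's bit vector.  Refine: run the expensive classifier only on
    the k nearest candidates and return the best one."""
    bits = weak_bits(x, weak_classifiers)
    diff = hypothesis_codes != bits                 # elementwise XOR
    if weights is None:
        dist = diff.sum(axis=1)                     # plain Hamming distance
    else:
        dist = diff @ np.asarray(weights, float)    # weighted Hamming distance
    candidates = np.argsort(dist)[:k]               # cheap filter step
    scores = [refine_fn(x, int(c)) for c in candidates]
    return int(candidates[int(np.argmax(scores))])  # expensive refine step
```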

Relevance: 10.00%

Publisher:

Abstract:

Gesture spotting is the challenging task of locating the start and end frames of the video stream that correspond to a gesture of interest, while at the same time rejecting non-gesture motion patterns. This paper proposes a new gesture spotting and recognition algorithm that is based on the continuous dynamic programming (CDP) algorithm, and runs in real time. To make gesture spotting efficient, a pruning method is proposed that allows the system to evaluate a relatively small number of hypotheses compared to CDP. Pruning is implemented by a set of model-dependent classifiers that are learned from training examples. To make gesture spotting more accurate, a subgesture reasoning process is proposed that models the fact that some gesture models can falsely match parts of other, longer gestures. In our experiments, the proposed method with pruning and subgesture modeling is an order of magnitude faster and 18% more accurate than the original CDP algorithm.
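
The following Python sketch shows the shape of the CDP recurrence with a pruning hook. The local cost (Euclidean distance), the allowed transitions and the prune interface are simplifying assumptions; the real algorithm's path normalisation and end-point detection logic are omitted.

```python
import numpy as np

def cdp_spot(stream, model, prune=None):
    """Continuous dynamic programming sketch: D[t, i] is the cost of the
    best warping path matching model states 0..i and ending at frame t.
    A path may start at any frame (state 0 pays only its local cost),
    which is what lets CDP spot a gesture inside an unsegmented stream.
    `prune(t, i, cost)` stands in for the model-dependent pruning
    classifiers: return True to reject the hypothesis early."""
    T, M = len(stream), len(model)
    D = np.full((T, M), np.inf)
    for t in range(T):
        for i in range(M):
            local = float(np.linalg.norm(np.asarray(stream[t]) -
                                         np.asarray(model[i])))
            if i == 0:
                best_prev = 0.0                      # gesture may start here
            else:
                prev = [D[t, i - 1]]                 # advance within frame
                if t > 0:
                    prev += [D[t - 1, i], D[t - 1, i - 1]]
                best_prev = min(prev)
            cost = best_prev + local
            if prune is None or not prune(t, i, cost):
                D[t, i] = cost                       # pruned cells stay inf
    end = int(np.argmin(D[:, M - 1]))                # candidate gesture end
    return end, float(D[end, M - 1])
```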

Relevance: 10.00%

Publisher:

Abstract:

Spotting patterns of interest in an input signal is a very useful task in many different fields, including medicine, bioinformatics, economics, speech recognition and computer vision. Example instances of this problem include spotting an object of interest in an image (e.g., a tumor), a pattern of interest in a time-varying signal (e.g., audio analysis), or an object of interest moving in a specific way (e.g., a human's body gesture). Traditional spotting methods, which are based on Dynamic Time Warping or hidden Markov models, use some variant of dynamic programming to register the pattern and the input while accounting for temporal variation between them. At the same time, those methods often suffer from several shortcomings: they may give meaningless solutions when input observations are unreliable or ambiguous, they require a high-complexity search across the whole input signal, and they may give incorrect solutions if some patterns appear as smaller parts within other patterns. In this thesis, we develop a framework that addresses these three problems, and evaluate the framework's performance in spotting and recognizing hand gestures in video. The first contribution is a spatiotemporal matching algorithm that extends the dynamic programming formulation to accommodate multiple candidate hand detections in every video frame. The algorithm finds the best alignment between the gesture model and the input, and simultaneously locates the best candidate hand detection in every frame. This allows a gesture to be recognized even when the hand location is highly ambiguous. The second contribution is a pruning method that uses model-specific classifiers to reject dynamic programming hypotheses with a poor match between the input and model. Pruning improves the efficiency of the spatiotemporal matching algorithm, and in some cases may improve the recognition accuracy. The pruning classifiers are learned from training data, and cross-validation is used to reduce the chance of overpruning. The third contribution is a subgesture reasoning process that models the fact that some gesture models can falsely match parts of other, longer gestures. By integrating subgesture reasoning, the spotting algorithm can avoid the premature detection of a subgesture when the longer gesture is actually being performed. Subgesture relations between pairs of gestures are automatically learned from training data. The performance of the approach is evaluated on two challenging video datasets: hand-signed digits gestured by users wearing short-sleeved shirts in front of a cluttered background, and American Sign Language (ASL) utterances gestured by native ASL signers. The experiments demonstrate that the proposed method is more accurate and efficient than competing approaches. The proposed approach can be applied generally to alignment or search problems with multiple input observations that use dynamic programming to find a solution.
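
A hedged sketch of the first contribution: dynamic programming whose state is (frame, model position, candidate detection), so the warping path and the per-frame hand candidate are chosen jointly. The cost function, the transition set and the spatial-continuity weight `lam` are illustrative assumptions, not the thesis' exact formulation.

```python
import numpy as np

def spatiotemporal_match(candidates, model, lam=1.0):
    """candidates[t] is a list of (position, features) pairs, one per
    candidate hand detection in frame t; model[i] is the feature vector
    of model state i.  D[t][i][k] is the best cost of aligning model
    states 0..i with frames 0..t while choosing candidate k at frame t.
    lam penalises implausible spatial jumps between the detections
    chosen in consecutive frames."""
    T, M = len(candidates), len(model)
    INF = float("inf")
    D = [[dict() for _ in range(M)] for _ in range(T)]
    for t in range(T):
        for i in range(M):
            for k, (pos, feat) in enumerate(candidates[t]):
                local = float(np.linalg.norm(feat - model[i]))
                if t == 0:
                    best = local if i == 0 else INF
                else:
                    best = INF
                    for j in (i, i - 1):             # repeat state or advance
                        if j < 0:
                            continue
                        for kp, (prev_pos, _) in enumerate(candidates[t - 1]):
                            step = (D[t - 1][j][kp] + local +
                                    lam * float(np.linalg.norm(pos - prev_pos)))
                            best = min(best, step)
                D[t][i][k] = best
    return min(D[T - 1][M - 1].values())             # best full-alignment cost
```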

Relevance: 10.00%

Publisher:

Abstract:

In a constantly changing world, humans are adapted to alternate routinely between attending to familiar objects and testing hypotheses about novel ones. We can rapidly learn to recognize and name novel objects without unselectively disrupting our memories of familiar ones. We can notice fine details that differentiate nearly identical objects and generalize across broad classes of dissimilar objects. This chapter describes a class of self-organizing neural network architectures--called ARTMAP--that are capable of fast, yet stable, on-line recognition learning, hypothesis testing, and naming in response to an arbitrary stream of input patterns (Carpenter, Grossberg, Markuzon, Reynolds, and Rosen, 1992; Carpenter, Grossberg, and Reynolds, 1991). The intrinsic stability of ARTMAP allows the system to learn incrementally for an unlimited period of time. The system's stability properties can be traced to the structure of its learned memories, which encode clusters of attended features into its recognition categories, rather than slow averages of category inputs. The level of detail in the learned attentional focus is determined moment-by-moment, depending on predictive success: an error due to over-generalization automatically focuses attention on additional input details, enough of which are learned in a new recognition category so that the predictive error will not be repeated. An ARTMAP system creates an evolving map between a variable number of learned categories that compress one feature space (e.g., visual features) and learned categories of another feature space (e.g., auditory features). Input vectors can be either binary or analog. Computational properties of the networks enable them to perform significantly better in benchmark studies than alternative machine learning, genetic algorithm, or neural network models. Some of the critical problems that challenge and constrain any such autonomous learning system are next illustrated. Design principles that work together to solve these problems are then outlined. These principles are realized in the ARTMAP architecture, which is specified as an algorithm. Finally, ARTMAP dynamics are illustrated by means of a series of benchmark simulations.

Relevance: 10.00%

Publisher:

Abstract:

This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a^(p)} of input patterns, and ARTb receives a stream {b^(p)} of input patterns, where b^(p) is the correct prediction given a^(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a^(p) are presented without b^(p), and their predictions at ARTb are compared with b^(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a^(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing. Parameter ρa is compared with the degree of match between a^(p) and the top-down learned expectation, or prototype, that is read out subsequent to activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished, even if they are similar to frequent events with different consequences. Between input trials, ρa relaxes to a baseline vigilance level. When this baseline is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized.
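
For concreteness, here is a much-simplified fuzzy-ARTMAP-style training trial in Python, showing the vigilance test and the match-tracking rule that raises ρa just enough to correct a predictive error. Complement coding, the map field and category creation are omitted, and all names are illustrative assumptions rather than the article's full algorithm.

```python
import numpy as np

def choice(a, w, alpha=0.001):
    """Fuzzy ART category choice: T_j = |a ^ w_j| / (alpha + |w_j|),
    where ^ is the component-wise minimum (fuzzy AND)."""
    inter = np.minimum(a, w)
    return inter.sum(axis=1) / (alpha + w.sum(axis=1))

def artmap_trial(a, b_correct, w, predictions, rho_base=0.0, eps=1e-4):
    """One simplified ARTMAP training trial: search ARTa categories in
    order of choice value, accept the first whose match |a^w_j|/|a|
    meets vigilance rho, and if its ARTb prediction is wrong, raise rho
    just above the match value (match tracking) and keep searching."""
    rho = rho_base                              # baseline vigilance
    for j in np.argsort(-choice(a, w)):
        match = np.minimum(a, w[j]).sum() / a.sum()
        if match < rho:
            continue                            # fails vigilance: reset
        if predictions[j] == b_correct:
            w[j] = np.minimum(a, w[j])          # fast learning (min rule)
            return j
        rho = match + eps                       # match tracking on error
    return None                                 # caller adds a new category
```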

Relevance: 10.00%

Publisher:

Abstract:

Aim: To investigate (a) variability in powder/liquid proportioning and (b) the effect of the extremes of any such variability on diametral tensile strength (DTS), in a commercial zinc phosphate cement. Statistical analyses (α = 0.05) were by Student's t-test in the case of powder/liquid ratio, and by one-way ANOVA and Tukey HSD for pair-wise comparisons of mean DTS. The null hypotheses were that (a) the powder/liquid mixing ratios observed would not differ from the manufacturer's recommended ratio, and (b) the DTS of set cement samples made using the extreme powder/liquid ratios observed would not differ from that of samples made using the manufacturer's recommended ratio. Methodology: Thirty-four undergraduate dental students dispensed the components according to the manufacturer's instructions. The maximum and minimum powder/liquid ratios (m/m), together with the manufacturer's recommended ratio (m/m), were used to prepare cylindrical samples (n = 3 x 34) for DTS testing. Results: Powder/liquid ratios ranged from 2.386 to 1.018. The mean ratio (1.644 (341) m/m) was not significantly different from the manufacturer's recommended value of 1.718 (p = 0.189). DTS values for the maximum and minimum ratios (m/m) differed significantly both from each other (p < 0.001) and from the value obtained with the manufacturer's recommended ratio (m/m) (p < 0.001). Conclusions: Variability exists in the powder/liquid ratio (m/m) of hand-dispensed zinc phosphate cement. This variability can affect the DTS of the set material.
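
A sketch of the reported analysis pipeline using SciPy (stats.tukey_hsd requires SciPy 1.8+); the arrays below are invented placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder data: the real study recorded 34 students' dispensing
# ratios and prepared n = 3 x 34 cylindrical samples for DTS testing.
ratios = rng.normal(1.644, 0.34, 34)        # observed P/L ratios (m/m)
dts_min = rng.normal(4.5, 0.6, 34)          # hypothetical DTS (MPa), min ratio
dts_rec = rng.normal(6.0, 0.6, 34)          # hypothetical DTS, recommended ratio
dts_max = rng.normal(7.2, 0.6, 34)          # hypothetical DTS, max ratio

# (a) One-sample t-test: does the mean ratio differ from 1.718?
t, p = stats.ttest_1samp(ratios, popmean=1.718)

# (b) One-way ANOVA across the three DTS groups, then Tukey HSD for
# pair-wise comparisons (alpha = 0.05).
F, p_anova = stats.f_oneway(dts_min, dts_rec, dts_max)
tukey = stats.tukey_hsd(dts_min, dts_rec, dts_max)
print(f"t-test p = {p:.3f}; ANOVA p = {p_anova:.3g}")
print(tukey)
```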

Relevance: 10.00%

Publisher:

Abstract:

A model for representing music scores in a form suitable for general processing by a music-analyst-programmer is proposed and implemented. Typical input to the model consists of one or more pieces of music which are encoded in a file-based score representation. File-based representations are in a form unsuited to general processing, as they do not provide a suitable level of abstraction for a programmer-analyst. Instead, a representation is created giving a programmer's view of the score. This frees the analyst-programmer from implementation details that otherwise would form a substantial barrier to progress. The score representation uses an object-oriented approach to create a natural and robust software environment for the musicologist. The system is used to explore ways in which it could benefit musicologists. Methodologies for analysing music corpora are presented in a series of analytic examples which illustrate some of the potential of this model. Proving hypotheses or performing analysis on corpora involves the construction of algorithms. Some unique aspects of using this score model for corpus-based musicology are:
- Algorithms impose a discipline which arises from the necessity for formalism.
- Automatic analysis enables musicologists to complete tasks that otherwise would be infeasible because of limitations of their energy, attentiveness, accuracy and time.
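
The thesis' implementation language and class design are not given here, so the following is only a guessed-at Python sketch of what such a programmer's view of a score might look like, together with one corpus-analysis primitive of the kind the model is meant to make trivial to express.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    pitch: int            # MIDI note number
    onset: float          # offset from start of part, in quarter notes
    duration: float       # in quarter notes

@dataclass
class Part:
    name: str
    notes: list[Note] = field(default_factory=list)

@dataclass
class Score:
    title: str
    parts: list[Part] = field(default_factory=list)

def melodic_intervals(part: Part) -> list[int]:
    """A typical corpus-analysis primitive: successive melodic intervals
    in semitones, computed against the abstract score model rather than
    any file-based encoding."""
    pitches = [n.pitch for n in sorted(part.notes, key=lambda n: n.onset)]
    return [b - a for a, b in zip(pitches, pitches[1:])]
```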

Relevance: 10.00%

Publisher:

Abstract:

This thesis is concerned with uniformly convergent finite element and finite difference methods for numerically solving singularly perturbed two-point boundary value problems. We examine the following four problems: (i) a high-order problem of reaction-diffusion type; (ii) a high-order problem of convection-diffusion type; (iii) a second-order interior turning point problem; (iv) a semilinear reaction-diffusion problem. Firstly, we consider high-order problems of reaction-diffusion type and convection-diffusion type. Under suitable hypotheses, the coercivity of the associated bilinear forms is proved and representation results for the solutions of such problems are given. It is shown that, on an equidistant mesh, polynomial schemes cannot achieve a high order of convergence which is uniform in the perturbation parameter. Piecewise polynomial Galerkin finite element methods are then constructed on a Shishkin mesh. High-order convergence results, which are uniform in the perturbation parameter, are obtained in various norms. Secondly, we investigate linear second-order problems with interior turning points. Piecewise linear Galerkin finite element methods are generated on various piecewise-equidistant meshes designed for such problems. These methods are shown to be convergent, uniformly in the singular perturbation parameter, in a weighted energy norm and the usual L2 norm. Finally, we deal with a semilinear reaction-diffusion problem. Asymptotic properties of solutions to this problem are discussed and analysed. Two simple finite difference schemes on Shishkin meshes are applied to the problem. They are proved to be uniformly convergent of second order and fourth order, respectively. Existence and uniqueness of a solution to both schemes are investigated. Numerical results for the above methods are presented.
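
As an illustration of the meshes used above, a standard Shishkin mesh for a reaction-diffusion problem of the form -ε^2 u'' + b u = f (boundary layers of width O(ε ln N) at both ends) can be generated as follows; the choice σ = 2 and the two-layer layout are typical assumptions rather than the thesis' specific constructions.

```python
import numpy as np

def shishkin_mesh(N, epsilon, sigma=2.0):
    """Piecewise-equidistant Shishkin mesh on [0, 1] with layers at both
    endpoints: N/4 subintervals in each layer region of width tau and
    N/2 in the interior, where tau = min(1/4, sigma * epsilon * ln N).
    sigma is usually chosen to match the order of the scheme; N should
    be divisible by 4."""
    tau = min(0.25, sigma * epsilon * np.log(N))
    left = np.linspace(0.0, tau, N // 4 + 1)
    interior = np.linspace(tau, 1.0 - tau, N // 2 + 1)
    right = np.linspace(1.0 - tau, 1.0, N // 4 + 1)
    return np.unique(np.concatenate([left, interior, right]))

# e.g. shishkin_mesh(64, 1e-4) condenses half of the N+1 mesh points
# into the two narrow layer regions near x = 0 and x = 1.
```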

Relevance: 10.00%

Publisher:

Abstract:

The healthcare industry is beginning to appreciate the benefits that can be obtained from using Mobile Health Systems (MHS) at the point of care. As a result, healthcare organisations are investing heavily in mobile health initiatives with the expectation that users will employ the system to enhance performance. Despite widespread endorsement and support for the implementation of MHS, empirical evidence surrounding the benefits of MHS remains to be fully established. For MHS to be truly valuable, it is argued that the technological tool be infused within healthcare practitioners' work practices and used to its full potential in post-adoptive scenarios. Yet there is a paucity of research focusing on the infusion of MHS by healthcare practitioners. In order to address this gap in the literature, the objective of this study is to explore the determinants and outcomes of MHS infusion by healthcare practitioners. This research study adopts a post-positivist theory-building approach to MHS infusion. Existing literature is utilised to develop a conceptual model by which the research objective is explored. Employing a mixed-method approach, this conceptual model is first advanced through a case study in the UK, whereby propositions established from the literature are refined into testable hypotheses. The final phase of this research study involves the collection of empirical data from a Canadian hospital, which supports the refined model and its associated hypotheses. The results from both phases of data collection are employed to develop a model of MHS infusion. The study contributes to IS theory and practice by: (1) developing a model with six determinants (Availability, MHS Self-Efficacy, Time-Criticality, Habit, Technology Trust, and Task Behaviour) and individual performance-related outcomes of MHS infusion (Effectiveness, Efficiency, and Learning); (2) examining undocumented determinants and relationships; (3) identifying prerequisite conditions that both healthcare practitioners and organisations can employ to assist with MHS infusion; (4) developing a taxonomy that provides conceptual refinement of IT infusion; and (5) informing healthcare organisations and vendors as to the performance of MHS in post-adoptive scenarios.

Relevance: 10.00%

Publisher:

Abstract:

The advent of modern wireless technologies has seen a shift in focus towards the design and development of educational systems for deployment through mobile devices. The use of mobile phones, tablets and Personal Digital Assistants (PDAs) is steadily growing across the educational sector as a whole. Mobile learning (mLearning) systems developed for deployment on such devices hold great significance for the future of education. However, mLearning systems must be built around the particular learner's needs, based on both their motivation to learn and their subsequent learning outcomes. This thesis investigates how biometric technologies, in particular accelerometer and eye-tracking technologies, could effectively be employed within the development of mobile learning systems to facilitate the needs of individual learners. The creation of personalised learning environments must enable the achievement of improved learning outcomes for users, particularly at an individual level. Therefore consideration is given to individual learning-style differences within the electronic learning (eLearning) space. The overall area of eLearning is considered, and areas such as biometric technology and educational psychology are explored for the development of personalised educational systems. This thesis explains the basis of the author's hypotheses and presents the results of several studies carried out throughout the PhD research period. These results show that both accelerometer and eye-tracking technologies can be employed as a Human Computer Interaction (HCI) method in the detection of student learning-styles to facilitate the provision of automatically adapted eLearning spaces. Finally, the author provides recommendations for developers in the creation of adaptive mobile learning systems through the employment of biometric technology as a user interaction tool within mLearning applications. Further research paths are identified and a roadmap for future research in this area is defined.

Relevance: 10.00%

Publisher:

Abstract:

The Silurian-Devonian Galway Granite Complex (GGC, ~425-380Ma) is defined here as a suite of granitoid plutons that comprise the Main Galway Granite Batholith and the Earlier Plutons. The Main Batholith is a composite of the Carna Pluton in the west and the Kilkieran Pluton in the east, and extends from Galway City ~130km to the west. The Earlier Plutons are spatially, temporally and structurally distinct, situated northwest of the Main Batholith, and include the Roundstone, Omey, Inis and Letterfrack Plutons. The majority of isotopic and structural data currently available pertain to the Kilkieran Pluton, and several tectonic models have already been devised for this part of the complex. These relate emplacement of the Kilkieran Pluton to extension across a large east-west Caledonian lineament, i.e. the Skird Rocks Fault, during late Caledonian transtension. No chronological data have been published that directly and accurately date the emplacement of the Carna Pluton or any of the Earlier Plutons. There is also a lack of data pertaining to the internal structure of these intrusions. Accordingly, no previous study has established the mechanisms of emplacement for the Earlier Plutons, and only limited work is available for the Carna Pluton. As a consequence, constituents of the GGC have not previously been placed in a context relative to each other or to regional-scale Siluro-Devonian kinematics. The current work focuses on the Omey, Roundstone and Carna Plutons. Here, results of detailed field and Anisotropy of Magnetic Susceptibility (AMS) fabric studies are presented. This work is complemented by geological mapping that focuses on fault dynamics and contact relationships. Interpretation of AMS data is aided by rock magnetic experiment data and petrographic microstructural evaluations of representative samples. A new geological map of the Omey Pluton demonstrates that this intrusion has a defined roof and base which are gently inclined parallel to the fold hinge of the Connemara Antiform. AMS and petrographic data show the intrusion is cross-cut by NNW-SSE shear zones that extend into the country rock. These pre-date, and were active during, magma emplacement. It is proposed that the Omey Pluton was emplaced as a discordant phacolith. Pre-existing subvertical D5 faults in the host rock were reactivated during emplacement, due to regional sinistral transpression, and served as centralised ascent conduits. A central portion of the Roundstone Pluton was mapped in detail for the first time. Two facies are identified: G1 forms the majority of the pluton, and coeval G2 sheets cross-cut G1 at the core of the pluton. NNW-SSE D5 faults mapped in the country rock extend across the pluton. These share a geometrical relationship with the distribution of submagmatic strain in the pluton and parallel the majority of mapped subvertical G2 dykes. These data indicate that magma ascent was controlled by NNW-SSE conduits that are inherently related to those identified in the Omey Pluton. It is proposed that the Roundstone Pluton is a punched laccolith, the symmetry and structure of which were controlled by pre-existing host rock structures and the regional sinistral transpressive stress which presided during emplacement. Field relationships show the long axis of the Carna Pluton lies parallel to multiple NNW-SSE shear zones. These are represented on a regional scale by the Clifden-Mace Fault, which cross-cuts the core of this intrusion. AMS and petrographic data show concentric emplacement fabrics were tectonically overprinted as magma cooled from the magmatic state, due to this faulting. It is proposed that the Clifden-Mace Fault system was active during ascent and emplacement of the magma, and that pluton inflation only terminated as this controlling structure went into compression due to the onset of regional transtension. U-Pb zircon laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) data have been compiled from four sample sites. New geochronological data from the Roundstone Pluton (RD1 = ± 3.2Ma) represent the oldest age determination obtained from any member of the GGC, and demonstrate that this pluton predates the Carna Pluton by ~10Ma and probably intruded synchronously with the Omey Pluton (~422.5 ± 1.7Ma). Chronological data from the Carna Pluton (CN2 = 412.9 ± 2.5Ma; CN3 = 409.8 ± 7.2Ma; CN4 = 409.6 ± 3.6Ma) represent the first precise magma crystallisation ages for this intrusion. This work shows this pluton is ~10Ma older than the Kilkieran Pluton and that the supply of magma into the Carna Pluton had terminated by ~409Ma. Chronological, magnetic and field data have been utilised to evaluate the kinematic evolution of the Caledonides of western Ireland throughout the construction of the GGC. It is proposed that the GGC was constructed during four distinct episodes. The style of emplacement and the conduits used for magma transport to the site of emplacement were dependent on the orientation of local structures relative to the regional ambient stress field. This philosophy is used to critically evaluate and progress existing hypotheses on the transition from regional transpression to regional transtension at the end of the Caledonian Orogeny.

Relevance: 10.00%

Publisher:

Abstract:

One problem with most three-dimensional (3D) scalar data visualization techniques is that they often neglect to depict the uncertainty that comes with the 3D scalar data; they thus fail to faithfully present the 3D scalar data and risk misleading users' interpretations, conclusions or even decisions. This thesis therefore focuses on the study of uncertainty visualization in 3D scalar data: we seek to create better uncertainty visualization techniques, as well as to establish the advantages and disadvantages of the state-of-the-art uncertainty visualization techniques. To do this, we address three specific hypotheses: (1) the proposed Texture uncertainty visualization technique enables users to better identify scalar/error data, and provides reduced visual overload and more appropriate brightness than four state-of-the-art uncertainty visualization techniques, as demonstrated using a perceptual effectiveness user study. (2) The proposed Linked Views and Interactive Specification (LVIS) uncertainty visualization technique enables users to better search max/min scalar and error data than four state-of-the-art uncertainty visualization techniques, as demonstrated using a perceptual effectiveness user study. (3) The proposed Probabilistic Query uncertainty visualization technique, in comparison to traditional Direct Volume Rendering (DVR) methods, enables radiologists/physicians to better identify possible alternative renderings relevant to a diagnosis and the classification probabilities associated with the materials that appear in these renderings; this leads to improved decision support for diagnosis, as demonstrated in the domain of medical imaging. We test each hypothesis by following a unified framework that consists of three main steps. The first main step is uncertainty data modeling, which clearly defines and generates certain types of uncertainty associated with given 3D scalar data. The second main step is uncertainty visualization, which transforms the 3D scalar data and the associated uncertainty generated in the first main step into two-dimensional (2D) images for insight, interpretation or communication. The third main step is evaluation, which transforms the 2D images generated in the second main step into quantitative scores according to specific user tasks, and statistically analyzes the scores. As a result, the quality of each uncertainty visualization technique is determined.

Relevance: 10.00%

Publisher:

Abstract:

Existing work in Computer Science and Electronic Engineering demonstrates that Digital Signal Processing techniques can effectively identify the presence of stress in the speech signal. These techniques use datasets containing real or actual stress samples, i.e. real-life stress such as 911 calls. Studies that use simulated or laboratory-induced stress have been less successful and less consistent. Pervasive, ubiquitous computing is increasingly moving towards voice-activated and voice-controlled systems and devices. Speech recognition and speaker identification algorithms will have to improve and take emotional speech into account. Modelling the influence of stress on speech and voice is of interest to researchers from many different disciplines, including security, telecommunications, psychology, speech science, forensics and Human Computer Interaction (HCI). The aim of this work is to assess the impact of moderate stress on the speech signal. In order to do this, a dataset of laboratory-induced stress is required. While attempting to build this dataset, it became apparent that reliably inducing measurable stress in a controlled environment, when speech is a requirement, is a challenging task. This work focuses on the use of a variety of stressors to elicit a stress response during tasks that involve speech content. Biosignal analysis (commercial Brain Computer Interfaces, eye tracking and skin resistance) is used to verify and quantify the stress response, if any. This thesis explains the basis of the author's hypotheses on the elicitation of affectively-toned speech and presents the results of several studies carried out throughout the PhD research period. These results show that the elicitation of stress, particularly the induction of affectively-toned speech, is not a simple matter, and that many modulating factors influence the stress response process. A model is proposed to reflect the author's hypothesis on the emotional response pathways relating to the elicitation of stress with a required speech content. Finally, the author provides guidelines and recommendations for future research on speech under stress. Further research paths are identified and a roadmap for future research in this area is defined.

Relevance: 10.00%

Publisher:

Abstract:

The purpose of this study is to explore aspects of social organisation during the Upper Palaeolithic and Mesolithic periods using craniometric data. Different hypotheses were tested using geometric morphometrics alongside traditional craniometric data. The clustering of individuals from the same site, as well as a correspondence to an isolation-by-distance model, particularly in the Mesolithic samples, points to population structure within these groups. Moreover, discontinuities in cranial traits between the early Upper Palaeolithic and later periods could suggest that the Last Glacial Maximum had a disruptive effect on populations in Europe. Differences in social organisation can often result from cultural norms regarding post-marital residence. Such differences can be tested by comparing cranial data to geographic information. Greater variation in male cranial traits relative to females, after controlling for location, suggests that the overall pattern of residence during the Upper Palaeolithic and Mesolithic was one of matrilocality. It has been suggested that coastal occupation was density dependent and that these populations show a greater degree of sedentism than their inland counterparts. Moreover, it has been proposed that coastal areas were not continuously occupied until the Late Pleistocene due to spatial restrictions that would adversely affect reproductive opportunities. In this study, the pattern seen in the cranial traits of coastal populations corresponded with that of a more sedentary population. The results are consistent with the hypothesis that coastal populations were more sedentary than inland populations during these periods. This study adds new information regarding the social dynamics of prehistoric populations in Europe and sheds light on some of the conditions that may have paved the way for the transition to agriculture.