960 results for Spatial models
Abstract:
Most authors struggle to pick a title that adequately conveys all of the material covered in a book. When I first saw Applied Spatial Data Analysis with R, I expected a review of spatial statistical models and their applications in packages (libraries) from the CRAN site of R. The authors’ title is not misleading, but I was very pleasantly surprised by how deep the word “applied” is here. The first half of the book essentially covers how R handles spatial data. To some statisticians this may be boring. Do you want, or need, to know the difference between S3 and S4 classes, how spatial objects in R are organized, and how various methods work on the spatial objects? A few years ago I would have said “no,” especially to the “want” part. Just let me slap my Excel spreadsheet into R and run some spatial functions on it. Unfortunately, the world is not so simple, and ultimately we want to minimize the effort needed to get all of our spatial analyses accomplished. The first half of this book certainly convinced me that some extra effort in organizing my data into certain spatial class structures makes the analysis easier and less subject to mistakes. I also admit that I found it very interesting and I learned a lot.
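As a taste of what those class structures look like in practice, here is a minimal sketch using the sp package (toy coordinates and attribute values, not an example taken from the book):

```r
## Promote a plain data.frame to an S4 spatial class with sp
library(sp)

df <- data.frame(x = c(10, 20, 30), y = c(5, 15, 25), yield = c(2.1, 3.4, 2.8))
coordinates(df) <- ~ x + y   # df is now a SpatialPointsDataFrame (S4)

class(df)                    # "SpatialPointsDataFrame"
slotNames(df)                # S4 slots, including data, coords, bbox, proj4string
is(df, "Spatial")            # TRUE: inherits from the common Spatial base class
bbox(df)                     # methods dispatch on the spatial class
spplot(df, "yield")          # lattice plot of the attribute
```

Once the data live in an S4 spatial class, plotting, summary and analysis methods dispatch on that class automatically, which is exactly the organizational payoff the book argues for.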
Generalizing the dynamic field theory of spatial cognition across real and developmental time scales
Abstract:
Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective.
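For readers unfamiliar with the formalism, DFT models build on dynamic neural fields of the Amari type; a schematic form is shown below (our sketch, not the paper's full multi-field architecture). The developmental hypothesis described above corresponds to narrowing the interaction kernel and sharpening the input tuning.

```latex
% Canonical dynamic neural field (Amari) equation underlying DFT models
% (schematic; the paper's model couples several such fields):
%   u(x,t): activation at field site x;  h: resting level;  S(x,t): input;
%   w: local-excitation/lateral-inhibition kernel;  g: sigmoidal output.
\tau\,\dot{u}(x,t) \;=\; -\,u(x,t) \;+\; h \;+\; S(x,t)
  \;+\; \int w(x - x')\, g\big(u(x',t)\big)\, dx'
```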
Abstract:
Stage-structured models that integrate demography and dispersal can be used to identify points in the life cycle with large effects on rates of population spatial spread, information that is vital in the development of containment strategies for invasive species. Current challenges in the application of these tools include: (1) accounting for large uncertainty in model parameters, which may violate assumptions of "local" perturbation metrics such as sensitivities and elasticities, and (2) forecasting not only asymptotic rates of spatial spread, as is usually done, but also transient spatial dynamics in the early stages of invasion. We developed an invasion model for the Diaprepes root weevil (DRW; Diaprepes abbreviatus [Coleoptera: Curculionidae]), a generalist herbivore that has invaded citrus-growing regions of the United States. We synthesized data on DRW demography and dispersal and generated predictions for asymptotic and transient peak invasion speeds, accounting for parameter uncertainty. We quantified the contribution of each parameter toward invasion speed using a "global" perturbation analysis, and we contrasted parameter contributions during the transient and asymptotic phases. We found that the asymptotic invasion speed was 0.02–0.028 km/week, although the transient peak invasion speed (0.03–0.045 km/week) was significantly greater. Both asymptotic and transient invasion speeds were most responsive to weevil dispersal distances. However, demographic parameters that had large effects on asymptotic speed (e.g., survival of early-instar larvae) had little effect on transient speed. Comparison of the global analysis with lower-level elasticities indicated that local perturbation analysis would have generated unreliable predictions for the responsiveness of invasion speed to underlying parameters. Observed range expansion in southern Florida (1992–2006) was significantly slower than the invasion speed predicted by the model. Possible causes of this mismatch include overestimation of dispersal distances and demographic rates, as well as spatiotemporal variation in parameter values. This study demonstrates that, when parameter uncertainty is large, as is often the case, global perturbation analyses are needed to identify which points in the life cycle should be targets of management. Our results also suggest that effective strategies for reducing spread during the asymptotic phase may have little effect during the transient phase.
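For readers who want the mechanics, the asymptotic speed of a stage-structured integrodifference model is commonly computed as c* = min over s > 0 of (1/s) log λ(s), where λ(s) is the dominant eigenvalue of the demographic matrix with dispersing transitions weighted by the moment generating function of the dispersal kernel (in the spirit of Neubert and Caswell). A minimal sketch with made-up numbers, not the DRW parameter values:

```r
## Toy stage-structured invasion-speed calculation (illustrative values)
A <- matrix(c(0,   0,   20,     # fecundity of adults
              0.1, 0,   0,      # survival, stage 1 -> 2
              0,   0.3, 0.8),   # survival, stage 2 -> 3; adult survival
            nrow = 3, byrow = TRUE)

sigma <- 0.5                          # dispersal scale (km), hypothetical
mgf <- function(s) exp(s^2 * sigma^2 / 2)   # MGF of a Gaussian kernel

## Only the reproductive transitions disperse in this toy example
wave_speed <- function(s) {
  H <- A
  H[1, ] <- H[1, ] * mgf(s)           # weight dispersing transitions by the MGF
  lambda <- max(Re(eigen(H)$values))  # dominant eigenvalue of H(s)
  log(lambda) / s                     # candidate speed for wave shape s
}

## Asymptotic invasion speed c* = min_s (1/s) log lambda(s)
opt <- optimize(wave_speed, interval = c(0.01, 10))
opt$objective   # c* in km per time step
```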
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can provide information about the dynamics of the aquifer domain in both time and space. Time-series modeling is an elegant way to treat monitoring data without the complexity of physically based mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage like the GAS.
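A minimal sketch of this two-step idea, with simulated wells and water levels rather than GAS data, and simple inverse-distance weighting standing in for a full geostatistical interpolation:

```r
## Step 1: fit a time-series model per monitoring well (simulated data)
set.seed(1)
wells <- data.frame(x = runif(10), y = runif(10))                 # well locations
levels <- replicate(10, arima.sim(list(ar = 0.8), n = 60) + 550)  # water levels (m)

## One-step-ahead prediction of the water level at each well
pred <- apply(levels, 2, function(z) {
  fit <- arima(z, order = c(1, 0, 0))
  predict(fit, n.ahead = 1)$pred[1]
})

## Step 2: interpolate the predictions in space (inverse-distance weighting)
idw <- function(x0, y0, p = 2) {
  d <- sqrt((wells$x - x0)^2 + (wells$y - y0)^2)
  w <- 1 / d^p
  sum(w * pred) / sum(w)
}
idw(0.5, 0.5)   # predicted water level at an unmonitored location
```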
Abstract:
In this work, we employ renormalization group methods to study the general behavior of field theories possessing anisotropic scaling in the spacetime variables. The Lorentz symmetry breaking that accompanies these models is either soft, if no higher spatial derivatives are present, or may have a more complex structure if higher spatial derivatives are also included. Both situations are discussed in models containing only scalar fields and in models that include fermions, as in a Yukawa-like model.
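Schematically, anisotropic scaling assigns different powers of the scale factor to time and space; a z = 2 scalar Lagrangian of the kind this setup describes might look as follows (our illustration, not the paper's exact models):

```latex
% Anisotropic (Lifshitz-type) scaling with dynamical critical exponent z,
% and a z = 2 scalar Lagrangian: the (\Delta\phi)^2 term carries the higher
% spatial derivatives, while the a^2 term breaks the anisotropic scaling softly.
t \to b^{z}\, t, \qquad x^{i} \to b\, x^{i}, \qquad
\mathcal{L} \;=\; \tfrac{1}{2}\,\dot{\phi}^{2}
  \;-\; \tfrac{a^{2}}{2}\,\partial_{i}\phi\,\partial^{i}\phi
  \;-\; \tfrac{b^{2}}{2}\,(\Delta\phi)^{2} \;-\; V(\phi)
```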
Abstract:
Questions: Does the spatial association between isolated adult trees and understorey plants change along a gradient of sand dunes? Does this association depend on the life form of the understorey plant? Location: Coastal sand dunes, southeast Brazil. Methods: We recorded the occurrence of understorey plant species in 100 paired 0.25 m2 plots under adult trees and in adjacent treeless sites along an environmental gradient from beach to inland. Occurrence probabilities were modelled as a function of the fixed variables of the presence of a neighbour, distance from the seashore and life form, and a random variable, the block (i.e. the pair of plots). Generalized linear mixed models (GLMM) were fitted in a backward step-wise procedure using Akaike's information criterion (AIC) for model selection. Results: The occurrence of understorey plants was affected by the presence of an adult tree neighbour, but the effect varied with the life form of the understorey species. A positive spatial association was found between isolated adult trees and young trees, whereas a negative association was found for shrubs. Moreover, a neutral association was found for lianas, whereas for herbs the effect of the presence of an adult neighbour ranged from neutral to negative, depending on the subgroup considered. The strength of the negative association with forbs increased with distance from the seashore. However, for the other life forms, the associational pattern with adult trees did not change along the gradient. Conclusions: For most of the understorey life forms there is no evidence that the spatial association between isolated adult trees and understorey plants changes with distance from the seashore, as predicted by the stress gradient hypothesis, a common hypothesis in the literature on facilitation in plant communities. Furthermore, the positive spatial association between isolated adult trees and young trees identified along the entire gradient studied indicates a positive feedback that explains the transition from open vegetation to forest in subtropical coastal dune environments.
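A minimal sketch of such a GLMM fit with lme4, using simulated paired-plot data and illustrative variable names rather than the study's dataset:

```r
library(lme4)

## Simulated paired-plot data (illustrative only)
set.seed(5)
n_pairs <- 100
plots <- data.frame(
  block     = factor(rep(1:n_pairs, each = 2)),        # pair of plots (random effect)
  neighbour = rep(c("tree", "open"), n_pairs),         # under adult tree vs. treeless
  lifeform  = sample(c("young_tree", "shrub", "liana", "herb"), 2 * n_pairs, TRUE),
  distance  = rep(runif(n_pairs, 0, 500), each = 2)    # distance from seashore (m)
)
plots$occurrence <- rbinom(2 * n_pairs, 1, 0.5)        # 0/1 species occurrence

## Binomial GLMM with a random intercept per block
fit <- glmer(occurrence ~ neighbour * lifeform + distance + (1 | block),
             family = binomial, data = plots)

## Backward step-wise selection by AIC: drop the interaction and compare
fit2 <- update(fit, . ~ . - neighbour:lifeform)
AIC(fit, fit2)
```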
Abstract:
A procedure has been proposed by Ciotti and Bricaud (2006) to retrieve spectral absorption coefficients of phytoplankton and colored detrital matter (CDM) from satellite radiance measurements. This was also the first procedure to estimate a size factor for phytoplankton, based on the shape of the retrieved algal absorption spectrum, as well as the spectral slope of CDM absorption. Applying this method to the global ocean color data set acquired by SeaWiFS over twelve years (1998-2009) allowed for a comparison of the spatial variations of chlorophyll concentration ([Chl]), the algal size factor (S_f), the CDM absorption coefficient at 443 nm (a_cdm(443)), and the spectral slope of CDM absorption (S_cdm). As expected, correlations between the derived parameters were characterized by large scatter at the global scale. We compared the temporal variability of the spatially averaged parameters over the twelve-year period for three oceanic areas of biogeochemical importance: the Eastern Equatorial Pacific, the North Atlantic and the Mediterranean Sea. In all areas, both S_f and a_cdm(443) showed large seasonal and interannual variations, generally correlated with those of algal biomass. The CDM maxima appeared on some occasions to last longer than those of [Chl]. The spectral slope of CDM absorption showed very large seasonal cycles consistent with photobleaching, challenging the assumption of a constant slope commonly used in bio-optical models. In the Equatorial Pacific, the seasonal cycles of [Chl], S_f, a_cdm(443) and S_cdm, as well as the relationships between these parameters, were strongly affected by the 1997-98 El Niño/La Niña event.
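For reference, the spectral slope S_cdm refers to the standard exponential model for CDM absorption, shown here in its common form with 443 nm as the reference wavelength used in this study:

```latex
% Standard exponential parameterization of CDM absorption underlying the
% retrieved spectral slope S_cdm:
a_{\mathrm{cdm}}(\lambda) \;=\; a_{\mathrm{cdm}}(443)\,
  \exp\!\big[ -S_{\mathrm{cdm}}\,(\lambda - 443) \big]
```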
Abstract:
This study aimed to evaluate the spatial variability of the leaf content of macro- and micronutrients. The citrus orchard, five years old and planted at regular intervals of 8 x 7 m, was managed under drip irrigation. Leaf samples were collected from each plant and analyzed in the laboratory. Data were analyzed using the R software, version 2.5.1 (2007), with the geostatistics package geoR. The contents of all macro- and micronutrients studied fitted a normal distribution and showed spatial dependence. The best-fit models, based on the likelihood, were the spherical and the Matérn. The suggested minimum distances between samples for the macronutrients nitrogen, phosphorus, potassium, calcium, magnesium and sulfur are 37, 58, 29, 63, 46 and 15 m, respectively, while for the micronutrients boron, copper, iron, manganese and zinc the suggested distances are 29, 9, 113, 35 and 14 m, respectively.
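A sketch of the kind of likelihood-based geostatistical fit described above, using geoR with simulated coordinates and leaf-nitrogen values rather than the orchard data:

```r
library(geoR)

## Simulated sampling grid roughly matching an 8 x 7 m planting layout
set.seed(42)
coords <- expand.grid(x = seq(0, 56, by = 8), y = seq(0, 56, by = 7))
n_leaf <- rnorm(nrow(coords), mean = 25, sd = 2)   # leaf N (g/kg), made up
gd <- as.geodata(cbind(coords, n_leaf))            # columns: x, y, data

## Maximum-likelihood fits of spherical and Matern covariance models
fit_sph <- likfit(gd, ini.cov.pars = c(4, 30), cov.model = "spherical")
fit_mat <- likfit(gd, ini.cov.pars = c(4, 30), cov.model = "matern", kappa = 1.5)
c(spherical = fit_sph$AIC, matern = fit_mat$AIC)   # compare by likelihood/AIC

## The practical range guides the minimum distance between samples
fit_sph$practicalRange
```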
Abstract:
The maintenance of biodiversity is a long-standing puzzle in ecology. It is a classical result that if the interactions of the species in an ecosystem are chosen at random, complex ecosystems cannot sustain themselves, meaning that the structure of the interactions between species must be a central component in the preservation of biodiversity and the stability of ecosystems. The rock-paper-scissors model is one of the paradigmatic models for studying how biodiversity is maintained. In this model, three species dominate each other in a cyclic way (mimicking a trophic cycle): rock dominates scissors, which dominates paper, which dominates rock. In the original version of the model, this dominance obeys a Z_3 symmetry, in the sense that the strength of dominance is always the same. In this work we break this symmetry, studying the effects of adding an asymmetry parameter. In the usual model on a two-dimensional lattice, the species distribute themselves according to spiral patterns, which can be explained by the complex Ginzburg-Landau equation. With the addition of asymmetry, new spatial patterns appear during the transient, and the system either ends in a state with spirals similar to those of the original model, in a state where unstable spatial patterns dominate, or in a state where only one species survives (and biodiversity is lost).
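A toy lattice simulation makes the dynamics concrete. The sketch below is a minimal illustration, not the paper's exact stochastic rules: it implements cyclic selection, reproduction and mobility on a periodic grid, with a simple asymmetry parameter that weakens one species' dominance:

```r
## Toy rock-paper-scissors lattice. States: 0 = empty, 1 = rock,
## 2 = paper, 3 = scissors; 1 beats 3, 3 beats 2, 2 beats 1.
set.seed(7)
n <- 50
grid <- matrix(sample(0:3, n * n, replace = TRUE), n, n)
beats <- c(3, 1, 2)          # species s eliminates species beats[s]
p_sel <- c(1.0, 1.0, 0.7)    # asymmetry: species 3 dominates more weakly

step <- function(grid) {
  for (k in 1:(n * n)) {
    i <- sample.int(n, 1); j <- sample.int(n, 1)
    s <- grid[i, j]
    if (s == 0) next
    ## random von Neumann neighbour with periodic boundaries
    d <- sample(list(c(1, 0), c(-1, 0), c(0, 1), c(0, -1)), 1)[[1]]
    i2 <- (i + d[1] - 1) %% n + 1; j2 <- (j + d[2] - 1) %% n + 1
    u <- grid[i2, j2]
    if (u == beats[s] && runif(1) < p_sel[s]) {
      grid[i2, j2] <- 0                      # selection (asymmetric rate)
    } else if (u == 0) {
      grid[i2, j2] <- s                      # reproduction into empty site
    } else {
      grid[i2, j2] <- s; grid[i, j] <- u     # mobility: swap positions
    }
  }
  grid
}

for (gen in 1:100) grid <- step(grid)
table(factor(grid, levels = 0:3))   # abundance of each species after the transient
```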
Abstract:
Persistent organic pollutants (POPs) are a group of chemicals that are toxic, undergo long-range transport and accumulate in biota. Owing to their persistence, their distribution and recirculation in the environment often continue for a long period of time. They thereby appear virtually everywhere within the biosphere and pose a toxic stress to living organisms. In this thesis, attempts are made to contribute to the understanding of factors that influence the distribution of POPs, with a focus on processes in the marine environment. Bioavailability and spatial distribution are central topics for the environmental risk management of POPs. To study these topics, various field studies were undertaken. To determine the bioavailable fraction of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs), polychlorinated naphthalenes (PCNs), and polychlorinated biphenyls (PCBs), the aqueous dissolved phase was sampled and analysed. In the same samples, we also measured how much of these POPs was associated with suspended particles. Different models that predict the phase distribution of these POPs were then evaluated. The important water characteristics influencing the solid-water phase distribution of POPs were found to be particulate organic matter (POM), particulate soot (PSC), and dissolved organic matter (DOM); the bioavailable dissolved POP phase in the water was lower when these sorbing phases were present. Furthermore, sediments were sampled and the spatial distribution of the POPs was examined. The results showed that the concentrations of PCDD/Fs and PCNs were better described using the PSC content than the POM content of the sediment. In parallel with these field studies, we synthesized knowledge of the processes affecting the distribution of POPs in a multimedia mass-balance model. This model predicted concentrations of PCDD/Fs throughout our study area, the Grenlandsfjords in Norway, within factors of ten, making it capable of evaluating the effect of suitable remedial actions intended to decrease the exposure of biota to these POPs in the Grenlandsfjords, which was the aim of the project. Finally, to evaluate the influence of eutrophication on the marine occurrence of PCBs, data from the US Musselwatch and Benthic Surveillance Programs are examined in this thesis. The dry-weight-based concentrations of PCBs in bivalves were found to correlate positively with the organic matter content of nearby sediments, while organic-matter-based concentrations of PCBs in sediments were negatively correlated with the organic matter content of the sediment.
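The solid-water phase distribution described above is often summarized by a sorption model of the following schematic form (illustrative notation, not necessarily the thesis's exact formulation):

```latex
% Schematic three-phase sorption model for the dissolved (bioavailable)
% fraction of a POP: K_POM, K_PSC and K_DOM are sorbent-water partition
% coefficients, and the bracketed quantities are the concentrations of the
% sorbing phases in the water column.
f_{\mathrm{diss}} \;=\; \frac{C_{\mathrm{dissolved}}}{C_{\mathrm{total}}}
  \;=\; \frac{1}{1 \;+\; K_{\mathrm{POM}}[\mathrm{POM}]
        \;+\; K_{\mathrm{PSC}}[\mathrm{PSC}]
        \;+\; K_{\mathrm{DOM}}[\mathrm{DOM}]}
```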
Abstract:
In this work we propose a new variational model for the consistent estimation of motion fields. The aim of this work is to develop appropriate spatio-temporal coherence models. In this sense, we propose two main contributions: a nonlinear flow constancy assumption, similar in spirit to the nonlinear brightness constancy assumption, which conveniently relates flow fields at different time instants; and a nonlinear temporal regularization scheme, which complements the spatial regularization and can cope with piecewise continuous motion fields. These contributions pose a congruent variational model, since all the energy terms, except the spatial regularization, are based on nonlinear warpings of the flow field. This model is more general than its spatial counterpart, provides more accurate solutions and preserves the continuity of optical flows in time. In the experiments, we show that the method attains better results and, in particular, considerably improves the accuracy in the presence of large displacements.
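Schematically, an energy with these ingredients can be written as follows (our sketch, not the paper's exact formulation; Psi denotes a robust penalty function and alpha, beta are regularization weights):

```latex
% Data term: nonlinear brightness constancy.  Second term: spatial
% regularization.  Third term: a nonlinear flow-constancy/temporal term
% that warps the flow at t+1 by the flow at t.
E(u_t) \;=\; \int_{\Omega}
  \Psi\!\big( |I_{t+1}(\mathbf{x} + u_t) - I_t(\mathbf{x})|^2 \big)
  \;+\; \alpha\, \Psi\!\big( |\nabla u_t|^2 \big)
  \;+\; \beta\, \Psi\!\big( |u_{t+1}(\mathbf{x} + u_t) - u_t(\mathbf{x})|^2 \big)
  \, d\mathbf{x}
```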
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons that the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and systems of interacting agents as fundamental abstractions for designing, developing and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a basic point of the scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers, and as a result, most of them are at an early stage and still belong to the context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating methodologies. In fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that environment plays within agent systems: however, it is clear at least that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some functions) and topology abstractions (entities of the environment that represent its logical or physical spatial structure). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which can be used by designers to provide different levels of abstraction over multi-agent systems.
The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
Abstract:
The presented study carried out an analysis of rural landscape changes. In particular, the study focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A bibliographic review revealed a general lack of studies dealing with the modelling of the rural built environment, and hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two types of transformation dynamics mainly affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the presented study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps that cover the different aspects of developing a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the most suitable algorithm to adopt in relation to the statistical theory and method used, and the calibration and evaluation of the model. A different combination of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such suitability. Conversely, the absence of buildings expresses a combination of agents which is not suitable for building allocation. Presence or absence of buildings can therefore be adopted as indicators of these driving conditions, since they represent the expression of the action of driving forces in the land suitability sorting process. The existence of correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and insight into their level of influence on the process. GIS software, by means of spatial analysis tools, makes it possible to associate the concept of presence and absence with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and so they are generated by a stochastic mechanism. Possible driving forces are selected and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for the analysis of explanatory variables and for the identification of the key driving variables behind the site selection process for new building allocation.
The model developed by following the methodology is applied to a case study to test the validity of the methodology. In particular, the study area for the testing of the methodology is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model was carried out on spatial data for the periurban and rural parts of the study area within the 1975-2005 time period, by means of a generalised linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values of building occurrence, ranging from 0 to 1, across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model to the actual rural building distribution in 2005, the interpretation capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends that occurred in other study areas, and for different time intervals, depending on the availability of data. The use of suitable data in terms of time, information and spatial resolution, and the costs related to data acquisition, pre-processing and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short- to medium-range future scenarios for the rural built environment distribution in the study area. In order to predict future scenarios it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
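A minimal sketch of the presence/absence modelling step described above, fitting a logistic-regression GLM on simulated data with hypothetical predictor names (the study's actual predictors differ):

```r
## Simulated presence/pseudo-absence points with candidate driving forces
set.seed(3)
d <- data.frame(
  presence    = rbinom(500, 1, 0.4),   # 1 = building present, 0 = pseudo-absence
  slope       = runif(500, 0, 30),     # geomorphologic factor (degrees)
  dist_road   = runif(500, 0, 5000),   # infrastructural factor (m)
  pop_density = runif(500, 0, 400)     # socio-economic factor (inh./km^2)
)

## Logistic regression: which driving forces explain building allocation?
fit <- glm(presence ~ slope + dist_road + pop_density,
           family = binomial, data = d)
summary(fit)

## Probability surface: predict occurrence probability on a grid of predictors
newd <- expand.grid(slope = 5, dist_road = seq(0, 5000, 500), pop_density = 150)
predict(fit, newdata = newd, type = "response")   # values in [0, 1]
```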
Abstract:
In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from collecting disease counts and calculating expected disease counts by means of reference population disease rates, an SMR is derived in each area as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue by a decision-oriented method, which focuses on multiple testing control, without, however, abandoning the preliminary-study perspective that an analysis of SMR indicators is required to keep. We implement control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just the single observation. This improves the power of the tests in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (FDR-hat) can be calculated for any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model retains the interesting feature of providing an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model performance in terms of accuracy of FDR estimation, sensitivity and specificity of the decision rule, and goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider the FDR estimation in sets constituted by all the b_i's below a threshold t. We show graphs of FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained both with our model and with the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low, but the specificity is high; in these scenarios the use of an FDR-hat = 0.05 or FDR-hat = 0.10 based selection rule can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and an FDR-hat = 0.15 based decision rule gains power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios, FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 based decision rules cannot be suggested, because the true FDR is actually much higher. As regards the relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
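The decision rule described above is easy to state in code. Given posterior null probabilities b_i (here simulated, one per area), the estimated FDR of a rejection set is the average of the b_i's it contains; a minimal sketch:

```r
## FDR-hat selection rule on simulated posterior null probabilities
set.seed(11)
b <- runif(200)                    # posterior probability of "no excess risk", per area

## Reject areas in order of increasing b_i and track the running FDR estimate
ord <- order(b)
fdr_hat <- cumsum(b[ord]) / seq_along(b)

## Largest rejection set whose estimated FDR stays below 0.05
k <- max(which(fdr_hat <= 0.05))
high_risk <- ord[1:k]              # areas declared at high risk
fdr_hat[k]                         # estimated FDR of the selected set
```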
Abstract:
Knowledge of how ligaments and articular surfaces guide passive motion at the human ankle joint complex is fundamental for the design of relevant surgical treatments. The dissertation presents a possible improvement of this knowledge through a new kinematic model of the tibiotalar articulation. In this dissertation, two one-DOF spatial equivalent mechanisms are presented for the simulation of the passive motion of the human ankle joint: the 5-5 fully parallel mechanism and the fully parallel spherical wrist mechanism. These mechanisms are based on the main anatomical structures of the ankle joint, namely the talus/calcaneus and the tibia/fibula bones at their interface, and the TiCaL and CaFiL ligaments. In order to show the accuracy of the models and the efficiency of the proposed procedure, these mechanisms are synthesized from experimental data and the results are compared with those obtained both during experimental sessions and with data published in the literature. Experimental results proved the efficiency of the proposed mechanisms in simulating ankle passive motion and, at the same time, the potential of the mechanisms to replicate the ankle’s main anatomical structures quite well. The new mechanisms represent a powerful tool for both pre-operative planning and new prosthesis design.
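In mechanism terms, a one-DOF fully parallel equivalent mechanism of this kind can be sketched as a set of constant-length link constraints (hypothetical notation, not the dissertation's):

```latex
% Five links of constant length L_i (isometric ligament fibres and articular
% contacts) join points a_i on the tibia/fibula to points b_i on the
% talus/calcaneus; the relative pose (R, t) of the two bones then retains
% 6 - 5 = 1 degree of freedom, i.e. a one-DOF passive motion.
\big\lVert \mathbf{a}_i - \big( R\,\mathbf{b}_i + \mathbf{t} \big) \big\rVert
  \;=\; L_i, \qquad i = 1,\dots,5
```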