934 results for dynamic methods


Relevance:

30.00%

Publisher:

Abstract:

How do infants learn word meanings? Research has established the impact of both parent and child behaviors on vocabulary development; however, the processes and mechanisms underlying these relationships are still not fully understood. Much of the existing literature focuses on direct paths to word learning, demonstrating that parent speech and child gesture use are powerful predictors of later vocabulary. However, an additional body of research indicates that these relationships do not always replicate, particularly when assessed in different populations, contexts, or developmental periods.

The current study examines the relationships between infant gesture, parent speech, and infant vocabulary over the course of the second year (10-22 months of age). Communicative development was explored through detailed coding of dyadic mother-child play interactions, combining quantitative and qualitative data-analytic methods. Findings reveal non-linear patterns of growth in both parent speech content and child gesture use. Analyses of contingency in dyadic interactions reveal that children are active contributors to communicative engagement through their use of gestures, shaping the type of input they receive from parents, which in turn influences child vocabulary acquisition. Recommendations for future studies and for the use of nuanced methodologies to assess changes in the dynamic system of dyadic communication are discussed.

Relevance:

30.00%

Publisher:

Abstract:

The focus of this work is to develop and employ numerical methods that provide characterization of granular microstructures, dynamic fragmentation of brittle materials, and dynamic fracture of three-dimensional bodies.

We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures of the internal state of, and electronic transport within, the electrode.
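
As an illustration of the fabric tensor formalism (a minimal sketch, not the work's implementation; the contact-normal array and its extraction from tomography data are assumed inputs), the second-order fabric tensor of a particulate assembly is the average of the outer products of the unit contact normals:

    import numpy as np

    def fabric_tensor(contact_normals):
        # contact_normals: array of shape (Nc, 3), one normal per
        # inter-particle contact (assumed extracted beforehand, e.g. from
        # segmented X-ray CT images of the electrode microstructure).
        n = np.asarray(contact_normals, dtype=float)
        n /= np.linalg.norm(n, axis=1, keepdims=True)   # ensure unit vectors
        return np.einsum('ci,cj->ij', n, n) / len(n)    # N = <n (x) n>

    # Example: an isotropic assembly gives a fabric tensor close to I/3;
    # the deviatoric part measures the anisotropy of the contact distribution.
    rng = np.random.default_rng(0)
    N = fabric_tensor(rng.normal(size=(10000, 3)))
    deviatoric = N - np.trace(N) / 3.0 * np.eye(3)
    print(N.round(3), np.linalg.norm(deviatoric).round(4))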

We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is fragmentation, wherein brittle materials subjected to a uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision on the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and yield computational savings.
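
A minimal sketch of the kind of randomized field variation described above (hypothetical parameter values; the cohesive element solver itself is not shown): perturbing the cohesive strength at each potential fracture site on a 1D mesh breaks the artificial symmetry of the uniform problem.

    import numpy as np

    rng = np.random.default_rng(42)

    n_elements = 1000
    sigma_c = 300e6      # nominal cohesive strength in Pa (assumed value)
    variation = 0.02     # 2% randomized material variation (assumed value)

    # One potential cohesive interface between each pair of adjacent elements.
    # Perturbing the strength site by site removes the exact ties that make
    # the uniform problem sensitive to floating-point round-off.
    strengths = sigma_c * (1.0 + variation * rng.standard_normal(n_elements - 1))

    # A cohesive element solver would insert a crack wherever the local stress
    # first exceeds the (randomized) strength during the tensile loading.
    print(strengths.min(), strengths.max())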

The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.

Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that the use of an asymmetrical damage model to represent tensile damage is important to producing the expected results for brittle fracture problems.

The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.

Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION: Upper airway measurement can be important for the diagnosis of breathing disorders. Acoustic reflection (AR) is an accepted tool for studying the airway. Our objective was to investigate the differences between cone-beam computed tomography (CBCT) and AR in calculating airway volumes and areas. METHODS: Subjects with CBCT images prescribed as part of their records were also asked to have AR performed. A total of 59 subjects (mean age, 15 ± 3.8 years) had their upper airway (5 areas) measured from CBCT images, acoustic rhinometry, and acoustic pharyngometry. Volumes and minimal cross-sectional areas were extracted and compared using software. RESULTS: Intraclass correlation on 20 randomly selected subjects, remeasured 2 weeks apart, showed high reliability (r > 0.77). Means of total nasal volume were significantly different between the 2 methods (P = 0.035), but anterior nasal volume and minimal cross-sectional area showed no differences (P = 0.532 and P = 0.066, respectively). Pharyngeal volume showed significant differences (P = 0.01) with high correlation (r = 0.755), whereas pharyngeal minimal cross-sectional area showed no differences (P = 0.109). The pharyngeal volume difference may not be clinically significant, since it is 758 mm³ for measurements with means of 11,000 ± 4000 mm³. CONCLUSIONS: CBCT is an accurate method for measuring anterior nasal volume, nasal minimal cross-sectional area, pharyngeal volume, and pharyngeal minimal cross-sectional area.

Relevance:

30.00%

Publisher:

Abstract:

For water depths greater than 60 m, floating wind turbines will become the most economical option for generating offshore wind energy. Tension-mooring stabilised units are one type of platform being considered by the offshore wind energy industry. The complex mooring arrangement used by this type of platform means that the dynamics are greatly affected by offsets in the positioning of the anchors. This paper examines the issue of tendon anchor position tolerances. The dynamic effects of three positional tolerances are analysed in the survival state, in the time domain, using FASTLink. The severe impact of worst-case anchor positional offsets on platform and turbine survivability is shown. The worst anchor misposition combinations are highlighted and should be strongly avoided. Novel methods to mitigate this issue are presented.

Relevance:

30.00%

Publisher:

Abstract:

People make all kinds of decisions over the course of their lives, and some of these decisions affect their demand for transportation: for example, where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications.

We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices but also makes it easier to model large-scale static choices.

The thesis consists of seven articles containing a number of models and methods for estimating, applying, and testing large-scale discrete choice models. The contributions fall under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms.

Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models are expensive to estimate, and we address this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing but also speeds up estimation for simple logit models, with implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models.

The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of their correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm.

Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can easily be integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process.

The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can handle large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
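
To make the dynamic programming structure concrete, here is a minimal sketch (a toy network with made-up utilities, not the thesis models) of computing route choice probabilities in a recursive-logit-style dynamic discrete choice model: the expected value function solves a logsum fixed point, and link choice probabilities then take a logit form.

    import numpy as np

    # Toy route network: at each node, arcs = [(instantaneous utility, next node)].
    # Node 3 is the destination (absorbing, value 0). All numbers are assumed.
    arcs = {
        0: [(-1.0, 1), (-1.5, 2)],
        1: [(-1.0, 3)],
        2: [(-0.5, 3)],
        3: [],
    }

    # Value iteration on the logsum recursion V(s) = log sum_a exp(u(s,a) + V(s'))
    V = np.zeros(4)
    for _ in range(100):
        V_new = np.array([
            np.log(sum(np.exp(u + V[s2]) for u, s2 in acts)) if acts else 0.0
            for acts in arcs.values()
        ])
        if np.max(np.abs(V_new - V)) < 1e-12:
            break
        V = V_new

    # Logit link-choice probabilities at the origin node 0
    vals = np.array([u + V[s2] for u, s2 in arcs[0]])
    probs = np.exp(vals - vals.max())
    probs /= probs.sum()
    print(V.round(3), probs)   # both routes end up equally attractive here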

Relevance:

30.00%

Publisher:

Abstract:

This paper considers the analysis of data from randomized trials that offer a sequence of interventions and suffer from a variety of problems in implementation. In experiments that provide treatment in multiple periods (T > 1), subjects have up to 2^T − 1 counterfactual outcomes that must be estimated to determine the full sequence of causal effects from the study. Traditional program evaluation and non-experimental estimators are unable to recover the parameters of interest to policy makers in this setting, particularly if there is non-ignorable attrition. We examine these issues in the context of Tennessee's highly influential randomized class size study, Project STAR. We demonstrate how a researcher can estimate the full sequence of dynamic treatment effects using a sequential difference-in-differences strategy that accounts for attrition due to observables using inverse-probability-weighted M-estimators. These estimates allow us to recover the structural parameters of the small-class effects in the underlying education production function and to construct dynamic average treatment effects. We present a complete and different picture of the effectiveness of reduced class size, and find that accounting for both attrition due to observables and selection due to unobservables is crucial and necessary with data from Project STAR.
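
A minimal sketch of the inverse-probability-weighting idea (synthetic data and a simple logistic model for the attrition probability; the paper's sequential difference-in-differences machinery is not reproduced): units are reweighted by the inverse of their estimated probability of remaining in the sample, so that the observed subsample again resembles the randomized one.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    x = rng.normal(size=(n, 1))                  # observable covariate
    treat = rng.integers(0, 2, size=n)           # randomized treatment arm
    y = treat * (1.0 + x[:, 0]) + rng.normal(size=n)   # heterogeneous effect, ATE = 1

    # Attrition depends on the observable x, so the stayers are a selected sample
    p_stay = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x[:, 0])))
    stay = rng.random(n) < p_stay

    # Estimate P(stay | x) and weight each stayer by the inverse probability
    w = 1.0 / LogisticRegression().fit(x, stay).predict_proba(x)[:, 1]

    t1 = stay & (treat == 1)
    t0 = stay & (treat == 0)
    naive = y[t1].mean() - y[t0].mean()          # biased by selective attrition
    ipw = np.average(y[t1], weights=w[t1]) - np.average(y[t0], weights=w[t0])
    print(naive, ipw)                            # ipw is close to the true ATE of 1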

Relevance:

30.00%

Publisher:

Abstract:

In this study, the authors propose simple methods to evaluate the achievable rates and outage probability of a cognitive radio (CR) link that take into account the imperfection of spectrum sensing. In the considered system, the CR transmitter and receiver correlatively sense and dynamically exploit the spectrum pool via dynamic frequency hopping. Under imperfect spectrum sensing, false alarms and miss-detections occur, which cause impulsive interference arising from collisions due to the simultaneous spectrum access of primary and cognitive users. This makes the achievable rates very challenging to evaluate. By first examining the static link, where the channel is assumed to be constant over time, they show that the achievable rate using a Gaussian input can be calculated accurately through a simple series representation. In the second part of this study, they extend the calculation of the achievable rate to wireless fading environments. To take the effect of fading into account, they introduce a piecewise-linear curve-fitting method that approximates the instantaneous achievable rate curve as a combination of linear segments. It is then demonstrated that the ergodic achievable rate in fast fading and the outage probability in slow fading can be calculated to any given accuracy level.
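
A minimal sketch of the piecewise-linear curve-fitting idea (a generic Shannon rate curve and Rayleigh fading with assumed parameters; the paper's collision/interference model is not included): the instantaneous rate curve log2(1 + γ) is replaced by linear segments, so the ergodic rate becomes a sum of expectations that are simple over each segment.

    import numpy as np

    # Knots of the piecewise-linear fit of r(g) = log2(1 + g); grid assumed
    knots = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
    rates = np.log2(1.0 + knots)

    def rate_pwl(gamma):
        # np.interp evaluates the piecewise-linear interpolant segment by segment
        return np.interp(gamma, knots, rates)

    # Ergodic rate over Rayleigh fading (mean SNR = 5): exact vs. approximation
    rng = np.random.default_rng(1)
    gamma = np.minimum(rng.exponential(scale=5.0, size=200_000), knots[-1])
    print(np.log2(1.0 + gamma).mean(), rate_pwl(gamma).mean())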

Relevance:

30.00%

Publisher:

Abstract:

This keynote presentation will report some of our research work and experience on the development and application of relevant methods, models, systems, and simulation techniques in support of different types and various levels of decision making for business, management, and engineering. In particular, the following topics will be covered:

- Modelling, multi-agent-based simulation and analysis of the allocation management of carbon dioxide emission permits in China (Nanfeng Liu & Shuliang Li)
- Agent-based simulation of the dynamic evolution of enterprise carbon assets (Yin Zeng & Shuliang Li)
- A framework & system for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps: a big data perspective (Jin Xu, Zheng Li, Shuliang Li & Yanyan Zhang)
- Open innovation: intelligent model, social media & complex adaptive system simulation (Shuliang Li & Jim Zheng Li)
- A framework, model and software prototype for modelling and simulation of deshopping behaviour and how companies respond (Shawkat Rahman & Shuliang Li)
- Integrating multiple agents, simulation, knowledge bases and fuzzy logic for international marketing decision making (Shuliang Li & Jim Zheng Li)
- A Web-based hybrid intelligent system for combined conventional, digital, mobile, social media and mobile marketing strategy formulation (Shuliang Li & Jim Zheng Li)
- A hybrid intelligent model for Web & social media dynamics, and evolutionary and adaptive branding (Shuliang Li)
- A hybrid paradigm for modelling, simulation and analysis of brand virality in social media (Shuliang Li & Jim Zheng Li)
- Network configuration management: attack paradigms and architectures for computer network survivability (Tero Karvinen & Shuliang Li)

Relevance:

30.00%

Publisher:

Abstract:

In recent years, vibration-based structural damage identification has been the subject of significant research in structural engineering. The basic idea of vibration-based methods is that damage induces changes in mechanical properties that cause anomalies in the dynamic response of the structure; measuring these anomalies allows damage and its extent to be localized. Measured vibration data, such as frequencies and mode shapes, can be used in finite element model updating to adjust structural parameters sensitive to damage (e.g., Young's modulus). The novel aspect of this thesis is the introduction into the objective function of accurate measurements of strain mode shapes, evaluated through FBG sensors. After a review of the relevant literature, the case study, an irregular prestressed concrete beam intended for the roofing of industrial structures, is presented. The mathematical model was built through FE models, studying the static and dynamic behaviour of the element. Another analytical model, based on the Ritz method, was developed in order to investigate the possible interaction between the RC beam and the steel supporting table used for testing. Experimental data, recorded through the simultaneous use of different measurement techniques (optical fibers, accelerometers, LVDTs), were compared with theoretical data, allowing the best model to be identified; for this model, the settings of the updating procedure are outlined.
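
The flavor of the updating procedure can be sketched as follows (a toy two-DOF spring-mass model with synthetic "measured" data; the thesis uses full FE models and FBG strain mode shapes): structural parameters such as Young's modulus enter an objective function penalizing the mismatch between measured and predicted frequencies and mode shapes, here via the modal assurance criterion (MAC).

    import numpy as np
    from scipy.linalg import eigh
    from scipy.optimize import minimize

    m = np.diag([1.0, 1.0])                        # toy lumped masses

    def modal(E):
        # Stiffness of a 2-DOF shear model scales with the Young's modulus E
        k = E * np.array([[2.0, -1.0], [-1.0, 1.0]])
        w2, phi = eigh(k, m)                       # generalized eigenproblem
        return np.sqrt(w2), phi / np.linalg.norm(phi, axis=0)

    f_meas, phi_meas = modal(1.3)                  # synthetic "measured" modes

    def objective(p):
        f, phi = modal(p[0])
        mac = (phi_meas * phi).sum(axis=0) ** 2    # MAC of normalized modes
        # In this one-parameter toy the mode shapes are insensitive to E, so
        # the frequency residual drives the fit; with strain mode shapes from
        # FBG sensors the MAC term becomes informative.
        return np.sum(((f - f_meas) / f_meas) ** 2) + np.sum(1.0 - mac)

    res = minimize(objective, x0=[1.0], bounds=[(0.1, 10.0)])
    print(res.x)                                   # recovers E = 1.3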

Relevance:

30.00%

Publisher:

Abstract:

A three-dimensional finite volume, unstructured mesh (FV-UM) method for dynamic fluid–structure interaction (DFSI) is described. Fluid–structure interaction, as applied to flexible structures, has wide application in diverse areas such as flutter in aircraft, the wind response of buildings, and flows in elastic pipes and blood vessels. It involves the coupling of fluid flow and structural mechanics, two fields that are conventionally modelled using dissimilar methods, so a single comprehensive computational model of both phenomena is a considerable challenge. Until recently, work in this area focused on one phenomenon and represented the behaviour of the other more simply. More recently, strategies for solving the full coupling between the fluid and solid mechanics behaviour have been developed. A key contribution has been made by Farhat et al. [Int. J. Numer. Meth. Fluids 21 (1995) 807], employing FV-UM methods for solving the Euler flow equations, a conventional finite element method for the elastic solid mechanics, and the spring-based mesh procedure of Batina [AIAA paper 0115, 1989] for mesh movement. In this paper, we describe an approach which broadly exploits the three-field strategy described by Farhat for fluid flow, structural dynamics and mesh movement but, in the context of DFSI, contains a number of novel features: a single mesh covering the entire domain, a Navier–Stokes flow, a single FV-UM discretisation approach for both the flow and solid mechanics procedures, an implicit predictor–corrector version of the Newmark algorithm, and a single code embedding the whole strategy.
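
For reference, here is a minimal sketch of an implicit predictor–corrector Newmark step (constant-average-acceleration parameters on a single linear oscillator; not the coupled DFSI solver): the predictor advances displacement and velocity with the old acceleration, and the corrector solves the effective system for the new acceleration.

    import numpy as np

    def newmark_step(M, C, K, u, v, a, f_new, dt, beta=0.25, gamma=0.5):
        # Predictor: advance displacement and velocity with the old acceleration
        u_p = u + dt * v + dt**2 * (0.5 - beta) * a
        v_p = v + dt * (1.0 - gamma) * a
        # Corrector: solve the effective system for the new acceleration
        lhs = M + gamma * dt * C + beta * dt**2 * K
        a_new = np.linalg.solve(lhs, f_new - C @ v_p - K @ u_p)
        return u_p + beta * dt**2 * a_new, v_p + gamma * dt * a_new, a_new

    # Undamped unit oscillator: exact period 2*pi, started at u = 1
    M, C, K = np.eye(1), np.zeros((1, 1)), np.eye(1)
    u, v, a = np.array([1.0]), np.array([0.0]), np.array([-1.0])
    dt = 2 * np.pi / 1000
    for _ in range(1000):          # integrate one full period
        u, v, a = newmark_step(M, C, K, u, v, a, np.zeros(1), dt)
    print(u)                       # close to the initial displacement 1.0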

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Judo is a sport that involves a wide variety of movements, actions, and physical aptitudes, among them postural control, balance, flexibility, and strength. Among the body regions most affected in judo practice, the knee is one of those with the highest injury incidence. The objective of this study was to evaluate the effects of applying Dynamic Tape (DT), a biomechanical tape, on quadriceps function in male judo athletes with non-specific knee pain, in terms of balance, strength, flexibility, and pain. Methods: The sample consisted of 37 individuals, who underwent testing first without Dynamic Tape (SDT) and subsequently with Dynamic Tape (CDT). The tests applied were the Standing Stork Test (SST), the Y Balance Test (YBT), the Four Square Step Test (FSST), the Single Leg Hop Test (SLHT), the lower-limb flexion test (TFMI), and the lower-limb extension test (TEMI), with the numerical pain rating scale (END) administered at the end of all tests. Results: No significant differences were observed for the SST (p = 0.6794), but the YBT, SLHT, TFMI, TEMI, and END (p < 0.0001), as well as the FSST (p = 0.0026), showed statistically significant differences between the CDT and SDT conditions, with the application of DT producing positive effects on athlete performance. Conclusion: The application of DT was not able to significantly improve static balance, but it did influence semi-dynamic and dynamic balance, flexibility, and pain.

Relevance:

30.00%

Publisher:

Abstract:

The study of forest fire activity, in its several aspects, is essential to understanding the phenomenon and to preventing environmental public catastrophes. In this context, the analysis of the monthly number of fires over several years is one aspect to take into account in order to better comprehend this topic. The goal of this work is to analyze the monthly number of forest fires in the neighboring districts of Aveiro and Coimbra, Portugal, through dynamic factor models for bivariate count series. We use a Bayesian approach, through MCMC methods, to estimate the model parameters as well as the latent factor common to both series.
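
To fix ideas, here is a minimal sketch of the model class (simulation only, with made-up parameter values; the estimation in the work itself is Bayesian, via MCMC): the two count series share a common latent AR(1) factor that drives their Poisson intensities.

    import numpy as np

    rng = np.random.default_rng(7)
    T = 120                        # months of observation
    phi, sigma = 0.8, 0.3          # AR(1) factor dynamics (assumed values)
    mu = np.array([2.0, 1.5])      # district-specific baselines (assumed)
    lam = np.array([0.6, 0.9])     # factor loadings (assumed)

    f = np.zeros(T)                # common latent factor, AR(1)
    for t in range(1, T):
        f[t] = phi * f[t - 1] + sigma * rng.standard_normal()

    # Bivariate monthly fire counts driven by the shared factor
    intensity = np.exp(mu[None, :] + np.outer(f, lam))
    counts = rng.poisson(intensity)        # shape (T, 2)
    print(counts[:6])

    # An MCMC scheme would sample f (and phi, sigma, mu, lam) from their
    # posterior given the observed counts, e.g. with Metropolis-within-Gibbs.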

Relevance:

30.00%

Publisher:

Abstract:

With the increasing complexity of today's software, the software development process is becoming highly time and resource consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, software testing and verification have been substantially automated.

The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively the sequence covers, of the failing test cases are extracted. Commonalities between these test case sequence covers are then extracted, processed, analyzed, and presented to developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared by a number of test cases failing for the same reason resemble the faulty execution path; hence, the search space for the faulty execution path can be narrowed down by using a large number of test cases.

To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select, among all possible common subsequences, those with a high likelihood of containing the root cause. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back from the failure to the root cause. A debugging tool is created to enable developers to use the approach, integrated with an existing Integrated Development Environment and its program editors so that developers can benefit from both the tool's suggestions and their source code counterparts.

Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of a fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both running time and output subsequence length.
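
A minimal sketch of the core idea (textbook dynamic-programming LCS folded over hypothetical failing traces; the thesis adds the cover optimizations, ranking, and static/dynamic back-tracing not shown here): code sequences shared by all failing test executions are candidates for the faulty path.

    from functools import reduce

    def lcs(a, b):
        # Longest common subsequence of two event sequences (textbook DP)
        m, n = len(a), len(b)
        dp = [[()] * (n + 1) for _ in range(m + 1)]
        for i in range(m):
            for j in range(n):
                if a[i] == b[j]:
                    dp[i + 1][j + 1] = dp[i][j] + (a[i],)
                else:
                    dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
        return dp[m][n]

    # Hypothetical sequence covers (method/branch IDs) of three failing tests
    failing_traces = [
        ("open", "parse", "validate", "write", "close"),
        ("open", "seek", "parse", "validate", "close"),
        ("init", "open", "parse", "validate", "close"),
    ]

    # Folding pairwise LCS over the traces is a heuristic (the exact multi-
    # sequence LCS is NP-hard), but it narrows the suspect path effectively.
    print(reduce(lcs, failing_traces))   # ('open', 'parse', 'validate', 'close')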

Relevance:

30.00%

Publisher:

Abstract:

In this work, we introduce a new class of numerical schemes for rarefied gas dynamics problems described by collisional kinetic equations. The idea consists in reformulating the problem using a micro-macro decomposition and then solving the microscopic part with asymptotic-preserving Monte Carlo methods. We consider two types of decomposition: the first leads to the Euler system of gas dynamics for the macroscopic part, while the second leads to the Navier-Stokes equations. In addition, the particle method that solves the microscopic part is designed in such a way that the global scheme becomes computationally less expensive as the solution approaches the equilibrium state, in contrast to standard methods for kinetic equations, whose computational cost increases with the number of interactions. At the same time, the statistical error due to the particle part of the solution decreases as the system approaches the equilibrium state. This causes the method to degenerate to the sole solution of the macroscopic hydrodynamic equations (Euler or Navier-Stokes) in the limit of an infinite number of collisions. In the last part, we show the behavior of this new approach in comparison to standard Monte Carlo techniques for solving the kinetic equation, testing it on different problems that typically arise in rarefied gas dynamics simulations.
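
For concreteness, the standard micro-macro decomposition underlying this kind of approach can be written as follows (shown here for a BGK-type collision operator with Knudsen number ε; the notation in the work itself may differ). The distribution function is split into its local Maxwellian and a fluctuation:

    f(x,v,t) = M[\rho,u,T](v) + g(x,v,t), \qquad
    \int m(v)\, g(x,v,t)\,\mathrm{d}v = 0, \qquad
    m(v) = \bigl(1,\; v,\; \tfrac{1}{2}|v|^2\bigr)^{\top},

which, after taking moments and projecting onto the orthogonal complement, yields the coupled macro-micro system

    \partial_t U + \nabla_x \cdot F(U) + \nabla_x \cdot \int v\, m(v)\, g\,\mathrm{d}v = 0,
    \qquad
    \partial_t g + (I - \Pi_M)\bigl(v \cdot \nabla_x (M + g)\bigr) = -\frac{1}{\varepsilon}\, g,

where U = ∫ m(v) f dv collects the conserved moments and Π_M projects onto the span of the collision invariants. Only g is sampled with particles, so both the particle cost and the statistical noise vanish as g → 0, leaving the macroscopic (Euler or Navier-Stokes) solver in the equilibrium limit.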

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, we propose several advances in the numerical and computational algorithms used to determine tomographic estimates of physical parameters in the solar corona. We focus on methods both for global dynamic estimation of the coronal electron density and for estimation of local transient phenomena, such as coronal mass ejections, from empirical observations acquired by instruments onboard the STEREO spacecraft. We present a first look at tomographic reconstructions of the solar corona from multiple points of view, which motivates the developments in this thesis. In particular, we propose a method for linear equality constrained state estimation that leads toward more physical global dynamic solar tomography estimates. We also present a formulation of the local static estimation problem, i.e., the tomographic estimation of local events and structures such as coronal mass ejections, that couples the tomographic imaging problem to a phase-field-based level set method. This formulation will render feasible the 3D tomography of coronal mass ejections from limited observations. Finally, we develop a scalable algorithm for ray tracing dense meshes, which allows efficient computation of many of the tomographic projection matrices needed for the applications in this thesis.
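
As a small illustration of linear equality constrained estimation (a generic KKT solve on a toy least-squares problem; the thesis embeds such constraints in a dynamic estimation framework, which is not reproduced here): the constrained estimate solves a saddle-point system built from the normal equations and the constraint.

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(50, 4))              # toy projection matrix
    x_true = np.array([1.0, 2.0, 3.0, 4.0])
    b = A @ x_true + 0.01 * rng.normal(size=50)

    # Equality constraint C x = d, e.g. a known integrated density
    C = np.ones((1, 4))
    d = np.array([10.0])

    # KKT saddle-point system: [[A^T A, C^T], [C, 0]] [x; lam] = [A^T b; d]
    kkt = np.block([[A.T @ A, C.T], [C, np.zeros((1, 1))]])
    rhs = np.concatenate([A.T @ b, d])
    x_hat = np.linalg.solve(kkt, rhs)[:4]
    print(x_hat, C @ x_hat)                   # constraint satisfied exactly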