926 results for C30 - General-Sectional Models
Abstract:
In general the motion of a body takes place in a confined environment, and collision of the body with the containing wall is possible. To predict the dynamics of a body under these conditions, one must know what happens in a collision. The problem is therefore: knowing the pre-collision dynamics of the body and the properties of the body and the wall, predict the post-collision dynamics. This problem is quite old: it first appeared in the literature in 1668. Up to 1984 it seemed that Newton's model was sufficient to solve the problem, but it was then found that this was not the case, and renewed interest in the problem appeared. The aim of this paper is to treat the problem of plane collisions of rigid bodies, to classify the different models found in the literature, and to present a new model that generalizes most of them.
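In its simplest frictionless one-dimensional setting, Newton's model reduces to a kinematic restitution rule: the post-collision relative normal velocity is minus the coefficient of restitution times the pre-collision one. A minimal sketch of that rule (the function name and the fixed-wall setup are illustrative, not the paper's formulation):

```python
def newton_collision(v_body, v_wall, e):
    """Newton's kinematic restitution model for a 1-D collision with a
    massive wall: the post-collision relative normal velocity equals
    -e times the pre-collision one, where 0 <= e <= 1."""
    v_rel = v_body - v_wall          # relative approach velocity
    return v_wall - e * v_rel        # post-collision body velocity

# A ball hitting a fixed wall at 10 m/s with e = 0.8 rebounds at -8 m/s.
```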
Abstract:
Panel at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas such as global positioning systems, target tracking, navigation, brain imaging, spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form. Hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution. For instance, an inappropriate choice of the importance distribution can lead to the failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. The parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is Gaussian. In this kind of proposal, the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
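The Kalman filter at the core of these filtering-based methods can be sketched in its simplest scalar linear-Gaussian form; the thesis works with the general multivariate and non-linear cases, and the parameter names below are illustrative:

```python
def kalman_filter(ys, a, q, h, r, m0, p0):
    """Scalar linear-Gaussian Kalman filter.
    Model: x_k = a*x_{k-1} + w_k,  w_k ~ N(0, q)
           y_k = h*x_k     + v_k,  v_k ~ N(0, r)
    Returns the filtering means and variances after each measurement."""
    m, p = m0, p0
    means, variances = [], []
    for y in ys:
        # prediction step
        m_pred = a * m
        p_pred = a * p * a + q
        # update step
        s = h * p_pred * h + r           # innovation variance
        k = p_pred * h / s               # Kalman gain
        m = m_pred + k * (y - h * m_pred)
        p = (1.0 - k * h) * p_pred
        means.append(m)
        variances.append(p)
    return means, variances
```

Filtering a constant signal observed in unit-variance noise, the mean converges toward the signal while the filtering variance shrinks toward its steady state.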
Abstract:
The cosmological standard view is based on the assumptions of homogeneity, isotropy and general relativistic gravitational interaction. These alone are not sufficient for describing the current cosmological observations of accelerated expansion of space. Although general relativity is extremely accurately tested in describing local gravitational phenomena, there is a strong demand for modifying either the energy content of the universe or the gravitational interaction itself to account for the accelerated expansion. By adding a non-luminous matter component and a constant energy component with negative pressure, the observations can be explained with general relativity. Gravitation, cosmological models and their observational phenomenology are discussed in this thesis. Several classes of dark energy models that are motivated by theories outside the standard formulation of physics were studied with emphasis on the observational interpretation. All the cosmological models that seek to explain the cosmological observations must also conform to the local phenomena. This poses stringent conditions for physically viable cosmological models. Predictions from a supergravity quintessence model were compared to Type Ia supernova data, and several metric gravity models were studied with local experimental results. Polytropic stellar configurations of solar-type stars, white dwarfs and neutron stars were numerically studied with modified gravity models. The main interest was to study the spacetime around the stars. The results shed light on the viability of the studied cosmological models.
Abstract:
Time series analysis can be categorized into three different approaches: classical, Box-Jenkins, and state space. The classical approach forms the foundation for the analysis; the Box-Jenkins approach is an improvement of the classical approach and deals with stationary time series; and the state space approach allows time-variant factors and covers a broader area of time series analysis. This thesis focuses on the parameter identifiability of different parameter estimation methods, such as LSQ, Yule-Walker and MLE, which are used in the above time series analysis approaches. The Kalman filter method and smoothing techniques are also integrated with the state space approach and the MLE method to estimate parameters, allowing them to change over time. Parameter estimation is carried out by repeated estimation integrated with MCMC, inspecting how well the different estimation methods can identify the optimal model parameters. Identification is performed in both probabilistic and general senses, and the results are compared in order to study and represent identifiability in a more informative way.
Abstract:
A new area of machine learning research called deep learning has moved machine learning closer to one of its original goals: artificial intelligence and a general learning algorithm. The key idea is to pretrain models in a completely unsupervised way so that they can finally be fine-tuned for the task at hand using supervised learning. In this thesis, a general introduction to deep learning models and algorithms is given, and these methods are applied to facial keypoints detection. The task is to predict the positions of 15 keypoints on grayscale face images. Each predicted keypoint is specified by an (x,y) real-valued pair in the space of pixel indices. In the experiments, we pretrained deep belief networks (DBN) and finally performed a discriminative fine-tuning. We varied the depth and size of the architecture. We tested both deterministic and sampled hidden activations and the effect of additional unlabeled data on pretraining. The experimental results show that our model provides better results than publicly available benchmarks for the dataset.
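The unsupervised pretraining described above trains each DBN layer as a restricted Boltzmann machine, typically with contrastive divergence. A minimal mean-field CD-1 sketch for a single Bernoulli RBM (dimensions, initialization, and learning rate are illustrative; this uses the deterministic hidden activations variant rather than sampling):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cd1_step(v, w, b, c, lr=0.1):
    """One mean-field contrastive-divergence (CD-1) update of a Bernoulli
    RBM with weights w[i][j], visible biases b, and hidden biases c.
    Returns the squared reconstruction error before the update."""
    nv, nh = len(b), len(c)
    # up: hidden probabilities given the data
    h = [sigmoid(c[j] + sum(w[i][j] * v[i] for i in range(nv))) for j in range(nh)]
    # down: mean-field reconstruction of the visible layer
    v2 = [sigmoid(b[i] + sum(w[i][j] * h[j] for j in range(nh))) for i in range(nv)]
    # up again: hidden probabilities given the reconstruction
    h2 = [sigmoid(c[j] + sum(w[i][j] * v2[i] for i in range(nv))) for j in range(nh)]
    # gradient step on positive minus negative statistics
    for i in range(nv):
        for j in range(nh):
            w[i][j] += lr * (v[i] * h[j] - v2[i] * h2[j])
    for i in range(nv):
        b[i] += lr * (v[i] - v2[i])
    for j in range(nh):
        c[j] += lr * (h[j] - h2[j])
    return sum((v[i] - v2[i]) ** 2 for i in range(nv))
```

Repeating `cd1_step` on a fixed binary pattern drives the reconstruction error down, which is the per-layer training signal the stacked DBN relies on.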
Abstract:
In worldwide studies, interleukin-6 (IL-6) is implicated in age-related disturbances. The aim of the present report was to determine the possible association of IL-6 -174 C/G promoter polymorphism with the cytokine profile as well as with the presence of selected cardiovascular risk features. This was a cross-sectional study on Brazilian women aged 60 years or older. A sample of 193 subjects was investigated for impaired glucose regulation, diabetes, hypertension, and dyslipidemia. Genotyping was done by direct sequencing of PCR products. IL-6 and C-reactive protein were quantified by high-sensitivity assays. General linear regression models or the Student t-test were used to compare continuous variables among genotypes, followed by adjustments for confounding variables. The chi-square test was used to compare categorical variables. The genotypes were consistent with Hardy-Weinberg equilibrium proportions. In a recessive model, mean waist-to-hip ratio, serum glycated hemoglobin and serum glucose were markedly lower in C homozygotes (P = 0.001, 0.028, and 0.047, respectively). In a dominant hypothesis, G homozygotes displayed a trend towards higher levels of circulating IL-6 (P = 0.092). Non-parametric analysis revealed that impaired fasting glucose and hypertension were findings approximately 2-fold more frequent among G homozygous subjects (P = 0.042 and 0.043, respectively). Taken together, our results show that the IL-6 -174 G-allele is implicated in a greater cardiovascular risk. To our knowledge, this is the first investigation of IL-6 promoter variants and age-related disturbances in the Brazilian elderly population.
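The Hardy-Weinberg check mentioned above compares observed genotype counts with the proportions expected under random mating. A minimal sketch for a biallelic locus such as IL-6 -174 C/G (the function name is illustrative, and the study's exact testing procedure may differ):

```python
def hardy_weinberg_chi2(n_cc, n_cg, n_gg):
    """Chi-square statistic comparing observed genotype counts with
    Hardy-Weinberg expectations for a biallelic locus (1 degree of
    freedom once the allele frequency is estimated from the sample)."""
    n = n_cc + n_cg + n_gg
    p = (2 * n_cc + n_cg) / (2 * n)           # C allele frequency
    q = 1.0 - p                               # G allele frequency
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_cc, n_cg, n_gg)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

A sample exactly at equilibrium (e.g. 100/200/100 for p = 0.5) yields a statistic of zero, while heterozygote deficits inflate it.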
Abstract:
Our objective was to examine associations between adult weight gain and nonalcoholic fatty liver disease (NAFLD). Cross-sectional interview data from 844 residents in Wan Song Community from October 2009 to April 2010 were analyzed in multivariate logistic regression models to examine odds ratios (OR) and 95% confidence intervals (CI) between NAFLD and weight change from age 20. Questionnaires, physical examinations, laboratory examinations, and ultrasonographic examination of the liver were carried out. Maximum rate of weight gain, body mass index, waist circumference, waist-to-hip ratio, systolic blood pressure, diastolic blood pressure, fasting blood glucose, cholesterol, triglycerides, uric acid, and alanine transaminase were higher in the NAFLD group than in the control group. HDL-C in the NAFLD group was lower than in the control group. As weight gain increased (measured as the difference between current weight and weight at age 20 years), the OR of NAFLD increased in multivariate models. NAFLD OR rose with increasing weight gain as follows: the OR (95%CI) for NAFLD associated with weight gain of 20+ kg compared to stable weight (change <5 kg) was 4.23 (2.49-7.09). Significantly increased NAFLD ORs were observed even for weight gains of 5-9.9 kg. For the “age 20 to highest lifetime weight” metric, the OR of NAFLD also increased as weight gain increased. For the “age 20 to highest lifetime weight” metric and the “age 20 to current weight” metric, the insulin resistance index (HOMA-IR) increased as weight gain increased (P<0.001). In a stepwise multivariate regression analysis, a significant association was observed between adult weight gain and NAFLD (OR=1.027, 95%CI=1.002-1.055, P=0.025). We conclude that adult weight gain is strongly associated with NAFLD.
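Odds ratios and confidence intervals like those reported above are the exponentiated form of logistic-regression coefficients. A minimal sketch of the back-transformation (a generic illustration, not the study's code):

```python
import math

def odds_ratio_ci(beta, se, z=1.959964):
    """Turn a logistic-regression coefficient (a log odds ratio) and its
    standard error into an odds ratio with a two-sided 95% CI.
    z is the standard-normal quantile for the 95% level."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))
```

For example, a coefficient of ln(2) corresponds to a doubling of the odds, and the CI bounds straddle the point estimate symmetrically on the log scale.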
Abstract:
Many economists show certain nonconformity relative to the excessive mathematical formalization of economics. This stems from dissatisfaction with the old debate about the lack of correspondence between mainstream theoretical models and reality. Although we do not propose to settle this debate here, this article seeks to associate the mismatch of mathematized models with the reality of the adoption of the hypothetical-deductive method as reproduced by general equilibrium. We begin by defining the main benefits of the mathematization of economics. Secondly, we address traditional criticism leveled against it. We then focus on more recent criticism from Gillies (2005) and Bresser-Pereira (2008). Finally, we attempt to associate the reproduction of the hypothetical-deductive method with a metatheoretical process triggered by Debreu's general equilibrium theory. In this respect, we appropriate the ideas of Weintraub (2002), Punzo (1991), and mainly Woo (1986) to support our hypothesis.
Abstract:
Although alcohol problems and alcohol consumption are related, consumption does not fully account for differences in vulnerability to alcohol problems. Therefore, other factors should account for these differences. Based on previous research, it was hypothesized that risky drinking behaviours, illicit and prescription drug use, affect and sex differences would account for differences in vulnerability to alcohol problems while statistically controlling for overall alcohol consumption. Four models were developed that were intended to test the predictive ability of these factors, three of which tested the predictor sets separately and a fourth which tested them in a combined model. In addition, two distinct criterion variables were regressed on the predictors. One was a measure of the frequency that participants experienced negative consequences that they attributed to their drinking and the other was a measure of the extent to which participants perceived themselves to be problem drinkers. Each of the models was tested on four samples from different populations, including first-year university students, university students in their graduating year, a clinical sample of people in treatment for addiction, and a community sample of young adults randomly selected from the general population. Overall, support was found for each of the models and each of the predictors in accounting for differences in vulnerability to alcohol problems. In particular, the frequency with which people become intoxicated, frequency of illicit drug use and high levels of negative affect were strong and consistent predictors of vulnerability to alcohol problems across samples and criterion variables. With the exception of the clinical sample, the combined models predicted vulnerability to negative consequences better than vulnerability to problem drinker status. Among the clinical and community samples the combined model predicted problem drinker status better than in the student samples.
Abstract:
The relevance of attentional measures to cognitive and social adaptive behaviour was examined in an adolescent sample. Unlike previous research, the influence of both inhibitory and facilitory aspects of attention was studied. In addition, contributions made by these attentional processes were compared with traditional psychometric measures of cognitive functioning. Data were gathered from 36 grade 10 and 11 high school students (20 male and 16 female students) with a variety of learning and attentional difficulties. Data collection was conducted in the course of two testing sessions. In the first session, students completed questionnaires regarding their medical history and everyday behaviours (the Brock Adaptive Functioning Questionnaire), along with non-verbal problem solving tasks and motor speed tasks. In the second session, students performed working memory measures and computer-administered tasks assessing inhibitory and facilitory aspects of attention. Grades and teacher-rated measures of cognitive and social impulsivity were also gathered. Results indicate that attentional control has both cognitive and social/emotional implications. Performance on negative priming and facilitation trials from the Flanker task predicted grades in core courses, social functioning measures, and cognitive and social impulsivity ratings. However, beneficial effects for academic and social functioning associated with inhibition were less prevalent in those demonstrating a greater ability to respond to facilitory cues. There was also some evidence that high levels of facilitation were less beneficial to academic performance, and female students were more likely to exceed optimal levels of facilitory processing. Furthermore, lower negative priming was associated with classroom-rated distraction and hyperactivity, but the relationship between inhibition and social aspects of impulsivity was stronger for adolescents with learning or reading problems, and the relationship between inhibition and cognitive impulsivity was stronger for male students. In most cases, attentional measures were predictive of performance outcomes independent of traditional psychometric measures of cognitive functioning. These findings provide support for neuropsychological models linking inhibition to control of interference and arousal, and emphasize the fundamental role of attention in everyday adolescent activities. The findings also warrant further investigation into the ways in which inhibitory and facilitory attentional processes interact, and the context-dependent nature of attentional control.
Abstract:
The purpose of this thesis is to examine various policy implementation models, and to determine what use they are to a government. In order to ensure that governmental proposals are created and exercised in an effective manner, there must be some guidelines in place which will assist in resolving difficult situations. All governments face the challenge of responding to public demand, by delivering the type of policy responses that will attempt to answer those demands. The problem for those people in positions of policy-making responsibility is to balance the competitive forces that would influence policy. This thesis examines provincial government policy in two unique cases. The first is the revolutionary recommendations brought forth in the Hall-Dennis Report. The second is the question of extending full funding to the end of high school in the separate school system. These two cases illustrate how divergent and problematic the policy-making duties of any government may be. In order to respond to these political challenges decision-makers must have a clear understanding of what they are attempting to do. They must also have an assortment of policy-making models that will ensure a policy response effectively deals with the issue under examination. A government must make every effort to ensure that all policy-making methods are considered, and that the data gathered is inserted into the most appropriate model. Currently, there is considerable debate over the benefits of the progressive individualistic education approach as proposed by the Hall-Dennis Committee. This debate is usually intensified during periods of economic uncertainty. Periodically, the province will also experience brief yet equally intense debate on the question of separate school funding. At one level, this debate centres around the efficiency of maintaining two parallel education systems, but the debate frequently has undertones of the religious animosity common in Ontario's history.
As a result of the two policy cases under study we may ask ourselves these questions: a) did the policies in question improve the general quality of life in the province? and b) did the policies unite the province? In the cases of educational instruction and finance the debate is ongoing and unsettling. Currently, there is a widespread belief that provincial students at the elementary and secondary levels of education are not being educated adequately to meet the challenges of the twenty-first century. The perceived culprit is individual education which sees students progressing through the system at their own pace and not meeting adequate education standards. The question of the finance of Catholic education occasionally rears its head in a painful fashion within the province. Some public school supporters tend to take extension as a personal religious defeat, rather than an opportunity to demonstrate that educational diversity can be accommodated within Canada's most populated province. This thesis is an attempt to analyze how successful provincial policy-implementation models were in answering public demand. A majority of the public did not demand additional separate school funding, yet it was put into place. The same majority did insist on an examination of educational methods, and the government did put changes in place. It will also demonstrate how policy, if wisely created, may spread additional benefits to the public at large. Catholic students currently enjoy a much improved financial contribution from the province, yet these additional funds were taken from somewhere. The public system had its funds reduced with what would appear to be minimal impact, which indicates that government policy is still sensitive to the strongly held convictions of those people in opposition to a given policy.
Hydraulic and fluvial geomorphological models for a bedrock channel reach of the Twenty Mile Creek
Abstract:
Bedrock channels have been considered challenging geomorphic settings for the application of numerical models. Bedrock fluvial systems exhibit boundaries that are typically less mobile than alluvial systems, yet they are still dynamic systems with a high degree of spatial and temporal variability. To understand the variability of fluvial systems, numerical models have been developed to quantify flow magnitudes and patterns as the driving force for geomorphic change. Two types of numerical models were assessed for their efficacy in examining the bedrock channel system consisting of a high gradient portion of the Twenty Mile Creek in the Niagara Region of Ontario, Canada. A one-dimensional (1-D) flow model that utilizes energy equations, HEC-RAS, was used to determine velocity distributions through the study reach for the mean annual flood (MAF), the 100-year return flood and the 1,000-year return flood. A two-dimensional (2-D) flow model that makes use of Navier-Stokes equations, RMA2, was created with the same objectives. The 2-D modeling effort was not successful due to the spatial complexity of the system (high slope and high variance). The successful 1-D model runs were further extended using very high resolution geospatial interpolations inherent to the HEC-RAS extension, HEC-GeoRAS. The modeled velocity data then formed the basis for a geomorphological analysis that focused upon large particles (boulders) and the forces needed to mobilize them. Several existing boulders were examined by collecting detailed measurements to derive three-dimensional physical models for the application of fluid and solid mechanics to predict movement in the study reach. An imaginary unit cuboid (1 metre by 1 metre by 1 metre) boulder was also envisioned to determine the general propensity for the movement of such a boulder through the bedrock system.
The efforts and findings of this study provide a standardized means for the assessment of large particle movement in a bedrock fluvial system. Further efforts may expand upon this standardization by modeling differing boulder configurations (platy boulders, etc.) at a high level of resolution.
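The mobilization analysis for the unit cuboid boulder amounts to a force balance between flow drag and the frictional resistance of the submerged weight. A minimal sliding-threshold sketch (the drag and friction coefficients here are illustrative values, not the thesis's calibrated parameters, and lift is neglected):

```python
import math

def critical_velocity(side=1.0, rho_w=1000.0, rho_s=2650.0, cd=1.5, mu=0.7, g=9.81):
    """Approximate flow velocity (m/s) needed to slide a fully submerged
    cuboid boulder of edge length `side` (m).

    Balances drag on the upstream face, F_d = 0.5*rho_w*cd*A*u^2, against
    frictional resistance of the submerged weight, mu*(rho_s - rho_w)*g*V."""
    area = side * side                     # upstream face area (m^2)
    volume = side ** 3                     # boulder volume (m^3)
    resistance = mu * (rho_s - rho_w) * g * volume
    return math.sqrt(2.0 * resistance / (rho_w * cd * area))
```

With these assumed coefficients the 1 m cuboid requires a velocity of roughly 4 m/s, and the threshold grows with boulder size as the square root of the edge length (volume scales faster than face area).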
Abstract:
Little research has been done on inclusive education in the context of the Jewish day school general studies classroom. This qualitative case study research examines the inclusive teaching experiences of 2 general studies teachers in their respective grade 4 classrooms in 2 traditionally structured dual curriculum Jewish day schools. Data analysis of qualitative open-ended interviews, classroom observations, post-observation discussions, and school and formal curriculum documents yielded understandings about the participants' inclusive practice and the challenges of the traditional Jewish day school structure. Eight themes emerged related to understandings and questions about time limitations, an emphasis on efficiency, the day school structure, inclusion models, the need for increased teacher collaboration, and tension between curriculum-as-plan and curriculum-as-lived. Discussion of the findings suggests the need for further research in inclusion and integrated curriculum in order to better understand possible restructuring of the traditional Jewish day school from the time-efficiency-constrained dual curriculum structure to a more flexible structure conducive to a meaningful and dynamic lived curriculum.