876 results for Modeling Rapport Using Hidden Markov Models
Abstract:
Anxiety sensitivity is a multifaceted cognitive risk factor currently being examined in relation to anxiety and depression. The paucity of research on the relative contribution of the facets of anxiety sensitivity to anxiety and depression, coupled with variations in existing findings, indicate that the relations remain inadequately understood. In the present study, the relations between the facets of anxiety sensitivity, anxiety, and depression were examined in 730 Hispanic-Latino and European-American youth referred to an anxiety specialty clinic. Youth completed the Childhood Anxiety Sensitivity Index, the Revised Children’s Manifest Anxiety Scale, and the Children’s Depression Inventory. The factor structure of the Childhood Anxiety Sensitivity Index was examined using ordered-categorical confirmatory factor analytic techniques. Goodness-of-fit criteria indicated that a two-factor model fit the data best. The identified facets of anxiety sensitivity included Physical/Mental Concerns and Social Concerns. Support was also found for cross-ethnic equivalence of the two-factor model across Hispanic-Latino and European-American youth. Structural equation modeling was used to examine models involving anxiety sensitivity, anxiety, and depression. Results indicated that an overall measure of anxiety sensitivity was positively associated with both anxiety and depression, while the facets of anxiety sensitivity showed differential relations to anxiety and depression symptoms. Both facets of anxiety sensitivity were related to overall anxiety and its symptom dimensions, with the exception being that Social Concerns was not related to physiological anxiety symptoms. Physical/Mental Concerns were strongly associated with overall depression and with all depression symptom dimensions. Social Concerns was not significantly associated with depression or its symptom dimensions. These findings highlight that anxiety sensitivity’s relations to youth psychiatric symptoms are complex. Results suggest that focusing on anxiety sensitivity’s facets is important to fully understand its role in psychopathology. Clinicians may want to target all facets of anxiety sensitivity when treating anxious youth. However, in the context of depression, it might be sufficient for clinicians to target Physical/Mental Incapacitation Concerns.
Abstract:
This dissertation studies newly founded U.S. firms' survival using three different releases of the Kauffman Firm Survey, examining firms' survival from a different perspective in each chapter. The first essay studies firms' survival through an analysis of their initial state at startup and their current state as they gain maturity. The probability of survival is estimated using three probit models, with both firm-specific variables and an industry-scale variable to control for the environment of operation. The firm-specific variables include size, experience, and leverage as a debt-to-value ratio. The results indicate that size and relevant experience are both positive predictors for the initial and current states. Debt appears to be a predictor of exit when it is not justified by the acquisition of assets. As suggested previously in the literature, entering a smaller-scale industry is a positive predictor of survival from birth. Finally, a smaller-scale industry diminishes the negative effects of debt. The second essay makes use of a hazard model to confirm that new service-providing (SP) firms are more likely to survive than new product providers (PPs). I investigate possible explanations for the higher survival rate of SPs using a Cox proportional hazard model. I examine six hypotheses (variations in capital per worker, expenses per worker, owners' experience, industry wages, assets, and size), none of which appear to explain why SPs are more likely than PPs to survive. Two other possibilities are discussed, tax evasion and human/social relations, but these could not be tested due to lack of data. The third essay investigates women-owned firms' higher failure rates by applying a Cox proportional hazard model in two specifications. I make use of a never-before-used variable that proxies for owners' confidence, representing the owners' self-evaluated competitive advantage. The first empirical model allows me to compare women's and men's hazard rates for each variable. In the second model I successively add the variables that could potentially explain why women-owned firms have a higher failure rate. Unfortunately, I am not able to fully explain the gender effect on firms' survival. Nonetheless, the second empirical approach confirms that social and psychological differences between genders are important in explaining the higher failure likelihood of women-owned firms.
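The first essay's estimation strategy lends itself to a compact illustration. Below is a minimal sketch, on simulated data, of a probit survival regression of the kind described: the variable names, coefficients, and sample size are hypothetical stand-ins for the Kauffman Firm Survey measures, and statsmodels' Probit is used as a generic estimator rather than the author's exact specification.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical firm-level data standing in for the Kauffman Firm Survey
    # variables described above: size, owner experience, leverage
    # (debt-to-value ratio), and an industry-scale control.
    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.normal(1.5, 0.8, n),   # size (e.g. log employees)
        rng.normal(8.0, 4.0, n),   # relevant experience (years)
        rng.uniform(0.0, 1.0, n),  # leverage
        rng.uniform(0.0, 1.0, n),  # industry scale
    ])
    # Simulated survival outcome, loosely matching the reported signs
    latent = 0.6 * X[:, 0] + 0.3 * X[:, 1] - 1.2 * X[:, 2] - 0.8 * X[:, 3]
    survived = (latent + rng.normal(0.0, 1.0, n) > 0).astype(int)

    # Probit regression of survival on firm and industry covariates
    result = sm.Probit(survived, sm.add_constant(X)).fit(disp=False)
    print(result.summary())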
Abstract:
We present a case study on how participation of one student changed during her first semester of introductory physics class using Modeling Instruction. Using video recordings, we explore how her behavior is consistent with a change from thinking of group learning as a parallel activity to one that is collaborative.
Abstract:
Low-rise buildings are often subjected to high wind loads during hurricanes that lead to severe damage and cause water intrusion. It is therefore important to estimate accurate wind pressures for design purposes to reduce losses. Wind loads on low-rise buildings can differ significantly depending upon the laboratory in which they were measured. The differences are due in large part to inadequate simulations of the low-frequency content of atmospheric velocity fluctuations in the laboratory and to the small scale of the models used for the measurements. A new partial turbulence simulation methodology was developed to simulate the effect of low-frequency flow fluctuations on low-rise buildings with better testing accuracy and repeatability than is currently achievable. The methodology was validated by comparing aerodynamic pressure data for building models obtained in the open-jet 12-Fan Wall of Wind (WOW) facility against their counterparts in a boundary-layer wind tunnel. Field measurements of pressures on the Texas Tech University building and the Silsoe building were also used for validation. Tests in partial simulation are freed of integral-length-scale constraints, meaning that model length scales in such testing are limited only by blockage considerations. The partial simulation methodology can thus be used to produce aerodynamic data for low-rise buildings by using large-scale models in wind tunnels and WOW-like facilities. This is a major advantage, because large-scale models allow for accurate modeling of architectural details, testing at higher Reynolds numbers, greater spatial resolution of the pressure taps in high-pressure zones, and assessment of the performance of aerodynamic devices intended to reduce wind effects. The technique eliminates a major cause of discrepancies among measurements conducted in different laboratories and can help to standardize flow simulations for testing residential homes, as well as significantly improving testing accuracy and repeatability. Partial turbulence simulation was used in the WOW to determine the performance of discontinuous perforated parapets in mitigating roof pressures. Comparisons of pressures with and without parapets showed significant reductions in pressure coefficients in zones with high suctions, demonstrating the potential of such aerodynamic add-on devices to reduce uplift forces.
Abstract:
The increasing use of model-driven software development has renewed emphasis on using domain-specific models during application development, and more specifically on using domain-specific modeling languages (DSMLs) to capture user-specified requirements when creating applications. The current approach to realizing these applications is to translate DSML models into source code using several model-to-model and model-to-code transformations. This approach remains dependent on the underlying source code representation and only raises the level of abstraction during development. Experience has shown that developers will often be required to manually modify the generated source code, which can be error-prone and time-consuming. An alternative approach involves using an interpreted domain-specific modeling language (i-DSML) whose models can be directly executed using a Domain-Specific Virtual Machine (DSVM). Direct execution of i-DSML models requires a semantically rich platform that reduces the gap between the application models and the underlying services required to realize the application. One layer in this platform is the domain-specific middleware, which is responsible for the management and delivery of services in the specific domain. In this dissertation, we investigated the problem of designing the domain-specific middleware of the DSVM to facilitate the bifurcation of the semantics of the domain and the model of execution (MoE) while supporting runtime adaptation and validation. We approached our investigation by seeking solutions to the following sub-problems: (1) How can the domain-specific knowledge (DSK) semantics be separated from the MoE for a given domain? (2) How do we define a generic model of execution (GMoE) of the middleware so that it is adaptable and realizes DSK operations to support delivery of services? (3) How do we validate the realization of DSK operations at runtime? Our research into the domain-specific middleware used two i-DSMLs: the Communication Modeling Language (CML) for the user-centric communication domain and the Microgrid Modeling Language (MGridML) for the microgrid energy management domain. We successfully developed a methodology to separate the DSK and GMoE of the middleware of a DSVM that supports specialization for a given domain and is able to perform adaptation and validation at runtime.
Abstract:
Problems associated with longitudinal interactions in buried pipelines are inherently three-dimensional and can lead to a variety of soil-pipe issues. Despite the progress achieved in research on buried pipelines, little attention has been given to the three-dimensional nature of the problem over the last decades; most studies simplify the problem by assuming plane-strain conditions. This dissertation presents a study of the behavior of buried pipelines under local settlement or elevation using three-dimensional simulations. The finite element code Plaxis 3D was used for the simulations. Particular aspects of the numerical modeling were evaluated, parametric analyses were performed, and the effects of soil arching were investigated in three dimensions. The main variables investigated were relative density, displacement of the elevation or settlement zone, elevated zone size, height of soil cover, and pipe diameter-to-thickness ratio. The simulations were performed in two stages. The first stage involved validating the numerical analysis against the physical models put forward by Costa (2005). In the second stage, numerical analyses of a full-scale pipeline subjected to a localized elevation were performed. The results allowed a detailed evaluation of the redistribution of stresses in the soil mass and of the deflections along the pipe. Stresses in the soil mass and pipe deflections were observed to decrease when the height of soil cover was reduced over regions of the pipe subjected to elevation. It was also shown that, for the analyzed situation, longitudinal thrusts were higher than circumferential thrusts and exceeded the allowable stresses and deflections. Furthermore, the benefits of stress-minimization techniques such as the false trench, the compressible cradle, and a combination of both, applied to the simulated pipeline, were verified.
Abstract:
Multi-output Gaussian processes provide a convenient framework for multi-task problems. An illustrative and motivating example of a multi-task problem is multi-region electrophysiological time-series data, where experimentalists are interested in both power and phase coherence between channels. Recently, the spectral mixture (SM) kernel was proposed to model the spectral density of a single task in a Gaussian process framework. This work develops a novel covariance kernel for multiple outputs, called the cross-spectral mixture (CSM) kernel. This new, flexible kernel represents both the power and phase relationship between multiple observation channels. The expressive capabilities of the CSM kernel are demonstrated through implementation of 1) a Bayesian hidden Markov model, where the emission distribution is a multi-output Gaussian process with a CSM covariance kernel, and 2) a Gaussian process factor analysis model, where factor scores represent the utilization of cross-spectral neural circuits. Results are presented for measured multi-region electrophysiological data.
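To make the kernel concrete, here is a minimal sketch, in Python with NumPy, of the single-output spectral mixture (SM) kernel and of one plausible way a phase term enters a cross-channel covariance. The phase-shifted cross term is an illustrative simplification of the CSM construction, not the paper's exact derivation, and the component parameters (one 6 Hz component with a quarter-cycle offset) are hypothetical.

    import numpy as np

    def sm_kernel(tau, weights, means, variances):
        # Single-output spectral mixture kernel (Wilson & Adams, 2013):
        # k(tau) = sum_q w_q * exp(-2*pi^2*tau^2*v_q) * cos(2*pi*tau*mu_q)
        tau = np.asarray(tau)[..., None]  # broadcast over mixture components
        return np.sum(weights * np.exp(-2.0 * np.pi**2 * tau**2 * variances)
                      * np.cos(2.0 * np.pi * tau * means), axis=-1)

    def cross_kernel(tau, weights, means, variances, phases):
        # Illustrative cross-channel covariance: each spectral component
        # picks up a channel-dependent phase shift, so both power (weights)
        # and phase relationships between channels are represented.
        tau = np.asarray(tau)[..., None]
        return np.sum(weights * np.exp(-2.0 * np.pi**2 * tau**2 * variances)
                      * np.cos(2.0 * np.pi * tau * means + phases), axis=-1)

    # Toy usage: one component at 6 Hz with a quarter-cycle phase offset
    lags = np.linspace(-0.5, 0.5, 101)
    k_auto = sm_kernel(lags, np.array([1.0]), np.array([6.0]), np.array([4.0]))
    k_cross = cross_kernel(lags, np.array([1.0]), np.array([6.0]),
                           np.array([4.0]), np.array([np.pi / 2]))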
Abstract:
The study of gene × environment, as well as epistatic interactions in schizophrenia, has provided important insight into the complex etiopathologic basis of schizophrenia. It has also increased our understanding of the role of susceptibility genes in the disorder and is an important consideration as we seek to translate genetic advances into novel antipsychotic treatment targets. This review summarises data arising from research involving the modelling of gene × environment interactions in schizophrenia using preclinical genetic models. Evidence for synergistic effects on the expression of schizophrenia-relevant endophenotypes will be discussed. It is proposed that valid and multifactorial preclinical models are important tools for identifying critical areas, as well as underlying mechanisms, of convergence of genetic and environmental risk factors, and their interaction in schizophrenia.
Abstract:
This paper presents a novel theory for performing multi-agent activity recognition without requiring large training corpora. The reduced need for data means that robust probabilistic recognition can be performed within domains where annotated datasets are traditionally unavailable. Complex human activities are composed of sequences of underlying primitive activities. We do not assume that the exact temporal ordering of primitives is necessary, so a complex activity can be represented as an unordered bag of primitives. Our three-tier architecture comprises low-level video tracking, event analysis, and high-level inference. High-level inference is performed using a new, cascading extension of the Rao–Blackwellised Particle Filter. Simulated annealing is used to identify pairs of agents involved in multi-agent activity. We validate our framework using the benchmarked PETS 2006 video surveillance dataset and our own sequences, achieving a mean recognition F-score of 0.82, a mean improvement of 17% over a Hidden Markov Model baseline.
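As a rough illustration of inference over an unordered bag of primitives, the sketch below runs a toy bootstrap particle filter over a discrete activity label. It is a deliberately simplified stand-in for the paper's cascading Rao–Blackwellised extension, and the activity set, primitive vocabulary, and emission probabilities are all hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    ACTIVITIES = ["theft", "handover", "abandon"]
    PRIMITIVES = ["approach", "pick_up", "put_down", "leave"]
    # Hypothetical per-activity primitive distributions (rows sum to 1)
    EMIT = np.array([
        [0.3, 0.4, 0.1, 0.2],   # theft
        [0.3, 0.3, 0.3, 0.1],   # handover
        [0.2, 0.1, 0.4, 0.3],   # abandon
    ])
    STAY = 0.95  # probability the activity label persists between observations

    n_particles = 200
    particles = rng.integers(0, len(ACTIVITIES), n_particles)

    def step(particles, primitive_idx):
        # Transition: occasionally resample the activity label
        switch = rng.random(n_particles) > STAY
        fresh = rng.integers(0, len(ACTIVITIES), n_particles)
        particles = np.where(switch, fresh, particles)
        # Weight by how likely each particle's activity emits this primitive
        w = EMIT[particles, primitive_idx]
        w /= w.sum()
        # Multinomial resampling
        return particles[rng.choice(n_particles, n_particles, p=w)]

    # Primitives arrive as an unordered bag, so any order gives similar posteriors
    for obs in ["approach", "pick_up", "leave"]:
        particles = step(particles, PRIMITIVES.index(obs))

    counts = np.bincount(particles, minlength=len(ACTIVITIES))
    print(dict(zip(ACTIVITIES, counts / n_particles)))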
Abstract:
Hybrid simulation is a technique that combines experimental and numerical testing and has been used for decades in the fields of aerospace, civil, and mechanical engineering. During this time, most of the research has focused on developing algorithms and the necessary technology, including but not limited to error minimisation techniques, phase lag compensation, and faster hydraulic cylinders. However, one of the main shortcomings of hybrid simulation that has prevented its widespread use is the size of the numerical models and the effect that higher frequencies may have on the stability and accuracy of the simulation. The first chapter of this document provides an overview of the hybrid simulation method, the different hybrid simulation schemes, and the corresponding time integration algorithms that are most commonly used in this field. The scope of this thesis is presented in more detail in chapter 2: a substructure algorithm, the Substep Force Feedback (Subfeed), is adapted to fulfil the necessary speed requirements. The effects of more complex models on the Subfeed are also studied in detail, and the improvements made are validated experimentally. Chapters 3 and 4 detail the methodologies used to accomplish these objectives, listing the cases of study and detailing the hardware and software used to validate them experimentally. The third chapter contains a brief introduction to a project, the DFG Subshake, whose data have been used as a starting point for the developments shown later in this thesis. The results are presented in chapters 5 and 6, the first focusing on purely numerical simulations and the second oriented towards practical application, including experimental real-time hybrid simulation tests with large numerical models. Chapter 7 lists the hardware and software requirements that must be met in order to apply the methods described in this document. The last chapter, chapter 8, presents conclusions and achievements extracted from the results, namely: the adaptation of the hybrid simulation algorithm Subfeed for use with large numerical models, the study of the effect of high frequencies on the substructure algorithm, and experimental real-time hybrid simulation tests with vibrating subsystems using large numerical models and shake tables. A brief discussion of possible future research activities can be found in the concluding chapter.
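For readers unfamiliar with the basic substructuring idea, the sketch below shows one explicit (central-difference) integration loop in which a numerical single-degree-of-freedom substructure is driven by an externally measured restoring force. This is a generic hybrid-simulation scheme under simplified assumptions, not the Subfeed algorithm, and measure_force() is a hypothetical placeholder standing in for the physical specimen.

    import numpy as np

    # Numerical substructure: linear SDOF oscillator (mass, damping, stiffness)
    m, c, k = 100.0, 50.0, 4.0e4
    dt, n_steps = 1e-3, 5000

    def measure_force(u):
        # Placeholder for the physical specimen's measured restoring force;
        # a simulated linear spring keeps the example self-contained.
        k_exp = 1.0e4
        return k_exp * u

    def ground_accel(t):
        return 0.5 * np.sin(2.0 * np.pi * 2.0 * t)  # harmonic excitation

    u = np.zeros(n_steps + 1)  # displacement history (starts at rest)
    for i in range(1, n_steps):
        # Effective load: inertial excitation minus the measured force
        p = -m * ground_accel(i * dt) - measure_force(u[i])
        # Central-difference update for the numerical substructure
        u[i + 1] = (p - k * u[i] + (2.0 * m / dt**2) * u[i]
                    - (m / dt**2 - c / (2.0 * dt)) * u[i - 1]) \
                   / (m / dt**2 + c / (2.0 * dt))
    print(f"peak displacement: {np.max(np.abs(u)):.4e} m")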
Abstract:
Transient simulations are widely used in studying past climate, as they provide a better comparison with existing proxy data. However, multi-millennial transient simulations using coupled climate models are usually computationally very expensive. As a result, several acceleration techniques are implemented when using numerical simulations to recreate past climate. In this study, we compare the results from transient simulations of the present and the last interglacial, with and without acceleration of the orbital forcing, using the comprehensive coupled climate model CCSM3 (Community Climate System Model 3). Our study shows that in low-latitude regions, the simulation of long-term variations in interglacial surface climate is not significantly affected by the use of the acceleration technique (with an acceleration factor of 10); hence, large-scale model-data comparison of surface variables is not hampered. However, in high-latitude regions where the surface climate has a direct connection to the deep ocean, e.g. in the Southern Ocean or the Nordic Seas, acceleration-induced biases in sea-surface temperature evolution may occur, with potential influence on the dynamics of the overlying atmosphere. The data provided here are from both accelerated and non-accelerated runs, as decadal mean values.
Abstract:
This study aims to identify the factors that influence the behavioral intention to adopt an academic information system (SIE) in a mandatory-use environment, as applied in the procurement process at the Federal University of Pará (UFPA). To this end, a model of innovation adoption and technology acceptance (TAM) was used, focused on attitudes and intentions as antecedents of behavioral intention. The research was conducted as a quantitative survey of a sample of 96 administrative staff at the institution. For data analysis, structural equation modeling (SEM) was used with the partial least squares method (PLS-PM). As to results, the constructs attitude and subjective norm were confirmed as strong predictors of behavioral intention in a pre-adoption stage. Although use of the SIE is mandatory, perceived voluntariness also predicts behavioral intention. Regarding attitude, the classical TAM variables, perceived ease of use and perceived usefulness, appear as the main influences on attitude towards the system. It is hoped that the results of this study may inform more efficient management of the process of implementing systems and information technologies, particularly in public universities.
Abstract:
Forecasting is the basis for making strategic, tactical, and operational business decisions. In financial economics, several techniques have been used over the past decades to predict the behavior of assets. There are thus many methods to assist in the task of time series forecasting; however, conventional modeling techniques, such as statistical models and those based on theoretical mathematical models, have produced unsatisfactory predictions, increasing the number of studies of more advanced prediction methods. Among these, artificial neural networks (ANNs) are a relatively new and promising method that has attracted much interest in the financial environment and has been used successfully in a wide variety of financial modeling applications, in many cases proving superior to statistical models such as ARIMA-GARCH. In this context, this study examined whether ANNs are a more appropriate method for predicting the behavior of capital market indices than traditional methods of time series analysis. For this purpose, we developed a quantitative study based on financial-economic indices and built two supervised feedforward ANN models, each with 20 inputs, 90 neurons in a single hidden layer, and one output (the Ibovespa). These models used backpropagation, a tangent-sigmoid activation function, and a linear output function. To analyze how well ANNs forecast the Ibovespa, we compared their results against a GARCH(1,1) time series model. Once both methods (ANN and GARCH) were applied, we analyzed the results by comparing the forecasts with the historical data and by studying the forecast errors using MSE, RMSE, MAE, standard deviation, Theil's U, and forecast encompassing tests. The ANN models had lower MSE, RMSE, and MAE than the GARCH(1,1) model, and the Theil's U test indicated that all three models have smaller errors than a naïve forecast. Although the returns-based ANN had lower precision indicators than the price-based ANN, the forecast encompassing test rejected the hypothesis that one model is better than the other, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models provide more appropriate Ibovespa forecasts than traditional time series models, represented here by the GARCH model.
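The described architecture (20 lagged inputs, 90 tanh hidden units, one linear output, backpropagation training) maps directly onto standard tooling. Below is a minimal sketch using scikit-learn's MLPRegressor as a stand-in for the authors' implementation; the Ibovespa series is simulated here, and the lag structure and train/test split are illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Simulated stand-in for the Ibovespa level series; real data would
    # replace `series`.
    rng = np.random.default_rng(42)
    series = np.cumsum(rng.normal(0.0, 1.0, 2000))

    # 20 lagged values per input vector, one-step-ahead target
    LAGS = 20
    X = np.array([series[i:i + LAGS] for i in range(len(series) - LAGS)])
    y = series[LAGS:]

    # One hidden layer of 90 tanh units; MLPRegressor uses an identity
    # (linear) output activation and backpropagation-based training.
    split = int(0.8 * len(X))
    ann = MLPRegressor(hidden_layer_sizes=(90,), activation="tanh",
                       max_iter=2000, random_state=0)
    ann.fit(X[:split], y[:split])

    pred = ann.predict(X[split:])
    mse = np.mean((pred - y[split:]) ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(pred - y[split:]))
    print(f"MSE={mse:.3f}  RMSE={rmse:.3f}  MAE={mae:.3f}")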
Abstract:
Nile perch (Lates niloticus), tilapia (Oreochromis spp.), dagaa (Rastrineobola argentea, silver cyprinid), and haplochromines (tribe Haplochromini) form the backbone of the commercial fishery on Lake Victoria. These fish stocks account for about 70% of the total catch in the three riparian states, Uganda, Kenya, and Tanzania. The lake fisheries have been poorly managed, in part due to inadequate scientific analysis and management advice. The overall objective of this project was to model the stocks of the commercial fisheries of Lake Victoria with a view to determining reference points and current stock status. The Schaefer biomass model was fitted to available data for each stock (starting in the 1960s or later) in the form of landings, catch per unit effort, acoustic survey indices, and trawl survey indices. In most cases, the Schaefer model did not fit all data components very well, but attempts were made to find the best model for each stock. When the model was fitted to the Nile perch data starting from 1996, the estimated current biomass was 654 kt (95% CI 466–763 kt), below the optimum of 692 kt, and the current harvest rate was 38% (33–73%), close to the optimum of 35%. At best, these estimates can be used as tentative guidelines for the management of these fisheries. The results indicate that there have been strong multispecies interactions in the lake ecosystem. The findings from our study can be used as a baseline reference for future studies using more complex models, which could take these multispecies interactions into account.
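The Schaefer biomass dynamics referenced above follow B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t], with B_MSY = K/2 and an optimal harvest rate of r/2. The sketch below fits this model to a simulated catch and CPUE series by maximum likelihood; all numbers are placeholders rather than the Lake Victoria data, and the lognormal-error and CPUE-proportional-to-biomass assumptions are mine.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    T, r_true, K_true, q_true = 40, 0.4, 1500.0, 0.001
    catch = rng.uniform(50.0, 150.0, T)  # simulated landings (kt)

    def project(r, K, B0, catch):
        # Deterministic Schaefer biomass trajectory
        B = np.empty(len(catch) + 1)
        B[0] = B0
        for t in range(len(catch)):
            B[t + 1] = max(B[t] + r * B[t] * (1 - B[t] / K) - catch[t], 1e-6)
        return B

    # Simulated CPUE index, proportional to biomass with lognormal noise
    cpue = q_true * project(r_true, K_true, K_true, catch)[:-1]
    cpue *= rng.lognormal(0.0, 0.1, T)

    def nll(theta):
        # Log-parameterisation keeps r, K, q positive; lognormal CPUE
        # errors with the error variance profiled out of the likelihood.
        r, K, q = np.exp(theta)
        B = project(r, K, K, catch)[:-1]
        resid = np.log(cpue) - np.log(q * B)
        return 0.5 * T * np.log(np.mean(resid ** 2))

    fit = minimize(nll, x0=np.log([0.3, 2000.0, 0.002]), method="Nelder-Mead")
    r, K, q = np.exp(fit.x)
    print(f"r={r:.2f}  K={K:.0f} kt  B_MSY={K / 2:.0f} kt  H_MSY={r / 2:.0%}")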
Abstract:
The use of chemical control measures to reduce the impact of parasite and pest species has frequently resulted in the development of resistance. Thus, resistance management has become a key concern in human and veterinary medicine and in agricultural production. Although it is known that factors such as gene flow between susceptible and resistant populations, drug type, application methods, and costs of resistance can affect the rate of resistance evolution, less is known about the impacts of density-dependent eco-evolutionary processes that could be altered by drug-induced mortality. The overall aim of this thesis was to take an experimental evolution approach to assess how life history traits respond to drug selection, using a free-living dioecious worm (Caenorhabditis remanei) as a model. In Chapter 2, I defined the relationship between C. remanei survival and Ivermectin dose over a range of concentrations, in order to control the intensity of selection used in the selection experiment described in Chapter 4. The dose-response data were also used to appraise curve-fitting methods, using Akaike Information Criterion (AIC) model selection to compare a series of nonlinear models. The type of model fitted to the dose-response data had a significant effect on the estimates of LD50 and LD99, suggesting that failure to fit an appropriate model could give misleading estimates of resistance status. In addition, simulated data were used to establish that a potential cost of resistance could be predicted by comparing survival at the upper asymptote of dose-response curves for resistant and susceptible populations, even when differences were as low as 4%. This approach to dose-response modeling ensures that the maximum amount of useful information relating to resistance is gathered in one study. In Chapter 3, I asked how simulations could be used to inform important design choices in selection experiments. Specifically, I focused on the effects of both within- and between-line variation on estimated power when detecting small, medium, and large effect sizes. Using mixed-effect models on simulated data, I demonstrated that commonly used designs with realistic levels of variation could be underpowered for substantial effect sizes. Thus, simulation-based power analysis provides an effective way to avoid under- or overpowering study designs that incorporate variation due to random effects. In Chapter 4, I investigated how Ivermectin dosage and changes in population density affect the rate of resistance evolution. I exposed replicate lines of C. remanei to two doses of Ivermectin (high and low) to assess the relative survival of lines selected in drug-treated environments compared to untreated controls over 10 generations. Additionally, I maintained lines where mortality was imposed randomly, to control for differences in density between drug treatments and to distinguish between the evolutionary consequences of drug treatment and ecological processes affected by changes in density-dependent feedback. Intriguingly, both drug-selected and random-mortality lines showed an increase in survivorship when challenged with Ivermectin; the magnitude of this increase varied with the intensity of selection and life-history stage. The results suggest that interactions between density-dependent processes and life history may mediate evolved changes in susceptibility to control measures, which could result in misleading conclusions about the evolution of heritable resistance following drug treatment. In Chapter 5, I investigated whether the apparent changes in drug susceptibility found in Chapter 4 were related to evolved changes in the life history of C. remanei populations after selection in drug-treated and random-mortality environments. Rapid passage of lines in the drug-free environment had no effect on the measured life-history traits. In the drug-free environment, adult size and fecundity of drug-selected lines increased compared to the controls, but drug selection did not affect lifespan. In the treated environment, drug-selected lines showed increased lifespan and fecundity relative to controls. Adult size of randomly culled lines responded in a similar way to drug-selected lines in the drug-free environment, but no change in fecundity or lifespan was observed in either environment. The results suggest that the life histories of nematodes can respond to selection resulting from the application of control measures. Failure to take these responses into account when applying control measures could result in adverse outcomes, such as larger and more fecund parasites, as well as over-estimation of the development of genetically controlled resistance. In conclusion, my thesis shows that there may be a complex relationship between drug selection, density-dependent regulatory processes, and the life history of populations challenged with control measures. This relationship could have implications for how resistance is monitored and managed if the life histories of parasitic species show such eco-evolutionary responses to drug application.
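A hedged sketch of the Chapter 2 approach described above: fit a two-parameter log-logistic dose-response curve, read off LD50 and LD99, and compute an AIC so alternative curve forms can be compared. The doses and survival proportions are illustrative, not the C. remanei / Ivermectin data.

    import numpy as np
    from scipy.optimize import curve_fit

    dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])     # hypothetical units
    surv = np.array([0.98, 0.95, 0.80, 0.45, 0.12, 0.03])  # proportion surviving

    def loglogistic(d, ld50, slope):
        # Survival falls from 1 towards 0 as dose passes LD50
        return 1.0 / (1.0 + (d / ld50) ** slope)

    params, _ = curve_fit(loglogistic, dose, surv, p0=[2.0, 1.0])
    ld50, slope = params
    ld99 = ld50 * 99.0 ** (1.0 / slope)  # dose at which survival = 1%

    # Gaussian-likelihood AIC; repeat for each candidate model and compare
    resid = surv - loglogistic(dose, *params)
    n, k = len(dose), len(params)
    aic = n * np.log(np.sum(resid ** 2) / n) + 2 * k
    print(f"LD50 = {ld50:.2f}, LD99 = {ld99:.1f}, AIC = {aic:.1f}")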