890 results for Context Model
Abstract:
An analytical model for Virtual Topology Reconfiguration (VTR) in optical networks is developed. It targets optical networks with a circuit-based data plane and an IP-like control plane. By identifying and analyzing the important factors that affect network performance due to VTR operations on both planes, we can compare the benefits and penalties of different VTR algorithms and policies. The best VTR scenario can then be chosen adaptively from a set of such algorithms and policies according to real-time network conditions. For this purpose, a cost model integrating all these factors is created to provide a comparison criterion independent of any specific VTR algorithm or policy. A case study based on simulation experiments illustrates the application of our models.
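The abstract does not give the cost model's actual form, so the following is only a minimal sketch of how a weighted cost could rank candidate VTR policies; all factor names, weights, and values below are hypothetical.

```python
# Illustrative sketch only: the paper's cost model is not reproduced in the
# abstract, so the factor names and weights here are hypothetical.

def vtr_cost(factors, weights):
    """Combine per-plane penalty/benefit factors into a single comparable cost."""
    return sum(weights[name] * value for name, value in factors.items())

# Hypothetical measurements for two candidate VTR policies.
candidates = {
    "aggressive_reconfig": {"disrupted_traffic": 0.30, "control_overhead": 0.20, "blocking_prob": 0.05},
    "conservative_reconfig": {"disrupted_traffic": 0.05, "control_overhead": 0.02, "blocking_prob": 0.15},
}
weights = {"disrupted_traffic": 1.0, "control_overhead": 0.5, "blocking_prob": 2.0}

best = min(candidates, key=lambda name: vtr_cost(candidates[name], weights))
print("Lowest-cost VTR policy under these weights:", best)
```

Under such a scheme, the weights would be tuned to the real-time network situation, and the policy with the lowest integrated cost would be selected.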
Abstract:
Composites are engineered materials that take advantage of the particular properties of each of their two or more constituents. They are designed to be stronger, lighter, and longer-lasting, which can lead to safer protective gear, more fuel-efficient transportation, and more affordable materials, among other examples. This thesis proposes a numerical and analytical verification of an in-house multiscale model for predicting the mechanical behavior of composite materials with various configurations subjected to impact loading. The verification is done by comparing analytical and numerical solutions with the results obtained from the model. The model accounts for the heterogeneity of the materials that is only apparent at smaller length scales, and it relies strictly on the fundamental structural properties of each of the composite's constituents; it can therefore potentially reduce or eliminate the need for costly and time-consuming experiments for material characterization. Results from simulations using the multiscale model were compared against direct simulations using overkill meshes, which represented all heterogeneities explicitly at the global scale, indicating that the model is an accurate and fast tool for modeling composites under impact loads. Advisor: David H. Allen
Abstract:
In this action research study of my 5th grade classroom, I investigated the benefits of a modified block schedule and departmentalization. The research consisted of dividing the 5th grade curriculum into three blocks. Each block consisted of two primary subject areas: Mathematics was paired with Social Studies, Reading was paired with Health, and Writing was paired with Science. These groupings were designed to accommodate district time-allotment requirements and the strengths of each teacher within the 5th grade team. Thus, one teacher taught all of the Mathematics and Social Studies, another all of the Reading and Health, and another all of the Writing and Science. Students had classes with each teacher, each school day. I discovered that this departmentalization had many benefits to both students and teachers. As a result of this research, we plan to continue with our new schedule and further develop it to more fully exploit the educational and professional advantages we found to be a part of the project.
Abstract:
Molecular Dynamics (MD) simulation is one of the most important computational techniques, with broad applications in physics, chemistry, chemical engineering, materials design, and biological science. Traditional computational chemistry refers to quantum calculations based on solving the Schrödinger equation. Density Functional Theory (DFT), developed later and based on solving the Kohn-Sham equations, became the more popular ab initio technique, able to treat ~1,000 atoms by explicitly considering electron interactions. In contrast, MD simulation, based on solving the classical equations of motion, is an entirely different technique in computational chemistry: electron interactions are included implicitly in empirical atom-based potential functions, and the system size can be extended to ~10^6 atoms. The thermodynamic properties of model fluids are mainly determined by macroscopic quantities such as temperature, pressure, and density, and quantum effects on properties such as the melting point and surface tension are not dominant. In this work, we investigated the melting point and the surface tension (liquid-vapor and liquid-solid) of model fluids, including the Lennard-Jones model, the Stockmayer model, and two water models (TIP4P/Ew and TIP5P/Ew), by means of MD simulation. In addition, new structures of water confined in carbon nanotubes were discovered, and transport behaviors of water and ions through nano-channels were also revealed.
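As an illustration of the kind of empirical, atom-based pair potential such MD simulations rely on, the Lennard-Jones potential can be sketched as below; this is the generic textbook form in reduced units, not the specific parameterizations used in the thesis.

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential U(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6)

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Magnitude of the pair force, F(r) = -dU/dr."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r

r = np.linspace(0.9, 3.0, 200)   # separations in units of sigma
u = lennard_jones(r)
print("Minimum of U(r) near r =", round(r[np.argmin(u)], 3), "(analytically 2^(1/6) ≈ 1.122)")
```

In an actual MD run, this pairwise term is summed over all neighbor pairs and the resulting forces are integrated in time to sample the thermodynamic properties of interest.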
Abstract:
Preservation of rivers and water resources is crucial in most environmental policies, and many efforts are made to assess water quality. Environmental monitoring of large river networks is based on measurement stations. Compared to the total length of river networks, their number is often limited, and there is a need to extend environmental variables that are measured locally to the whole river network. The objective of this paper is to propose several relevant geostatistical models for modeling river networks. These models use river distance and are based on two contrasting assumptions about dependency along a river network. Inference using maximum likelihood, a model selection criterion, and prediction by kriging are then developed. We illustrate our approach on two variables that differ in their distributional and spatial characteristics: summer water temperature and nitrate concentration. The data come from 141 to 187 monitoring stations on a river network in northeastern France that is more than 5,000 km long and includes the Meuse and Moselle basins. We first evaluated different spatial models and then produced prediction maps and error variance maps for the whole stream network.
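The abstract does not specify the covariance functions used, so the following is only a minimal sketch of kriging with an exponential covariance evaluated on river distances rather than Euclidean distances; the station distances, data values, and parameters are hypothetical, and the paper fits its parameters by maximum likelihood rather than fixing them.

```python
import numpy as np

def exp_cov(d, sill=1.0, rng=50.0):
    """Exponential covariance as a function of river distance d (km)."""
    return sill * np.exp(-d / rng)

# Hypothetical pairwise river distances (km) between 3 monitoring stations,
# and from each station to one unmonitored prediction site.
d_stations = np.array([[0.0, 40.0, 90.0],
                       [40.0, 0.0, 60.0],
                       [90.0, 60.0, 0.0]])
d_to_target = np.array([20.0, 30.0, 75.0])
z = np.array([18.2, 19.0, 21.5])   # e.g. summer water temperature (°C)

# Simple kriging with a known mean, for illustration only.
mu = z.mean()
C = exp_cov(d_stations)
c0 = exp_cov(d_to_target)
w = np.linalg.solve(C, c0)
prediction = mu + w @ (z - mu)
error_variance = exp_cov(0.0) - w @ c0
print(f"kriging prediction: {prediction:.2f}, error variance: {error_variance:.3f}")
```

Repeating this prediction at every point of the stream network is what produces the prediction and error variance maps mentioned above.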
Abstract:
Wildlife biologists are often interested in how an animal uses space and the habitat resources within that space. We propose a single model that estimates an animal’s home range and habitat selection parameters within that range while accounting for the inherent autocorrelation in frequently sampled telemetry data. The model is applied to brown bear telemetry data in southeast Alaska.
Abstract:
We consider a fully model-based approach for the analysis of distance sampling data. Distance sampling has been widely used to estimate the abundance (or density) of animals or plants in a spatially explicit study area. There is, however, no readily available method for making statistical inference on the relationships between abundance and environmental covariates. Spatial Poisson process likelihoods can be used to estimate detection and intensity parameters simultaneously by modeling distance sampling data as a thinned spatial point process. A model-based spatial approach to distance sampling data has three main benefits: it allows complex and opportunistic transect designs to be employed, it allows estimation of abundance in small subregions, and it provides a framework to assess the effects of habitat or experimental manipulation on density. We demonstrate the model-based methodology with a small simulation study and an analysis of the Dubbo weed data set, and we also propose a simple ad hoc method for handling overdispersion. The simulation study showed that the model-based approach compared favorably to conventional distance sampling methods for abundance estimation, and the overdispersion correction performed adequately when the number of transects was high. Analysis of the Dubbo data set indicated a transect effect on abundance via Akaike's information criterion model selection. Further goodness-of-fit analysis, however, indicated some potential confounding of intensity with the detection function.
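The specific detection and intensity models are not given in the abstract; a sketch of the standard half-normal detection function and the log-likelihood of a thinned Poisson process, simplified here to a homogeneous intensity on a single transect with hypothetical data, might look like this.

```python
import math
import numpy as np

def half_normal_detection(d, sigma):
    """Probability of detecting an object at perpendicular distance d from the line."""
    return np.exp(-d**2 / (2.0 * sigma**2))

def thinned_poisson_loglik(distances, lam, sigma, transect_length, w):
    """Log-likelihood of a homogeneous Poisson process with intensity `lam`
    (objects per unit area) thinned by half-normal detection, for one transect
    of length `transect_length` truncated at distance `w`. Illustrative only:
    the paper fits an inhomogeneous intensity with habitat covariates."""
    # Effective strip half-width: integral of the detection function from 0 to w.
    mu = sigma * math.sqrt(math.pi / 2.0) * math.erf(w / (sigma * math.sqrt(2.0)))
    expected_detections = lam * 2.0 * transect_length * mu
    return float(np.sum(np.log(lam * half_normal_detection(distances, sigma))) - expected_detections)

# Hypothetical perpendicular detection distances (m) along a 1 km transect.
obs = np.array([2.0, 5.5, 1.0, 8.0, 3.2])
print(thinned_poisson_loglik(obs, lam=0.001, sigma=5.0, transect_length=1000.0, w=20.0))
```

Maximizing such a likelihood jointly over `lam` and `sigma` is what allows detection and abundance to be estimated simultaneously.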
Abstract:
The emerging Cyber-Physical Systems (CPSs) are envisioned to integrate computation, communication, and control with the physical world. CPSs therefore require close interactions between the cyber and physical worlds in both time and space. These interactions are usually governed by events, which occur in the physical world and should be reflected autonomously in the cyber world, and by actions, which the CPS takes as a result of event detection and certain decision mechanisms. Both event detection and action decisions should be performed accurately and in a timely manner to guarantee temporal and spatial correctness. This calls for a flexible architecture and task representation framework for analyzing cyber-physical operations. In this paper, we explore the temporal and spatial properties of events, define a novel CPS architecture, and develop a layered spatiotemporal event model for CPS. An event is represented as a function of attribute-based, temporal, and spatial event conditions, and logical operators are used to combine different types of event conditions to capture composite events. To the best of our knowledge, this is the first event model that captures the heterogeneous characteristics of CPS for formal temporal and spatial analysis.
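The paper's formalism is not reproduced in the abstract; the sketch below only illustrates the general idea of combining attribute-based, temporal, and spatial event conditions with logical operators into a composite event. All class, function, and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """A single physical-world reading (field names are hypothetical)."""
    attribute: str
    value: float
    time: float   # seconds since some reference
    x: float
    y: float

# Each condition is a predicate over an observation.
def attribute_condition(attr, threshold):
    return lambda o: o.attribute == attr and o.value >= threshold

def temporal_condition(t_start, t_end):
    return lambda o: t_start <= o.time <= t_end

def spatial_condition(x_min, x_max, y_min, y_max):
    return lambda o: x_min <= o.x <= x_max and y_min <= o.y <= y_max

# Logical operators combine simple conditions into composite events.
def AND(*conds):
    return lambda o: all(c(o) for c in conds)

def OR(*conds):
    return lambda o: any(c(o) for c in conds)

# A composite "overheating in zone A during the first hour" event.
overheating_in_zone_a = AND(
    attribute_condition("temperature", 80.0),
    temporal_condition(0.0, 3600.0),
    spatial_condition(0.0, 10.0, 0.0, 10.0),
)
print(overheating_in_zone_a(Observation("temperature", 85.0, 120.0, 4.0, 7.0)))  # True
```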
Abstract:
Objective: To determine the current food handling practices, knowledge, and beliefs of primary food handlers with children 10 years old, and the relationships among these components. Design: Surveys were developed based on FightBac!™ concepts and the Health Belief Model (HBM) construct. Participants: The majority of participants (n = 503) were female (67%), Caucasian (80%), aged 30 to 49 years (83%), had one or two children (83%), prepared meals all or most of the time (76%), and consumed meals away from home three times or fewer per week (66%). Analysis: Descriptive statistics and inferential statistics using Spearman's rank correlation coefficient (rho) (p < 0.05, one-tailed) and chi-square tests were used to examine frequencies and correlations. Results: Few participants reached the Healthy People 2010 food safety objective for safe food handling practices (79%). Mixed results were reported for perceived susceptibility. Only about half of the participants (53-54%) reported high perceived severity for their children if they contracted foodborne illness. Most participants were confident in their food handling practices for their children (91%) and would change their food handling practices if they or their family members had previously experienced food poisoning (79%). Participants' reasons for high self-efficacy were learning from their families and independently acquiring knowledge and skills from the media, the internet, or their jobs. The three main barriers to safe food handling were insufficient time, many distractions, and lack of control over the food handling practices of other people in the household. Participants preferred food safety information that is easy to understand, presents scientific facts, conveys a sense of health threat, and includes plenty of pictures or visuals. Participants demonstrated high levels of knowledge in certain areas of the FightBac!™ concepts but lacked knowledge in other areas. Knowledge and cues to action were most supportive of the HBM construct, while perceived susceptibility was least supportive. Conclusion: Most participants have many areas in which to improve their food handling practices, knowledge, and beliefs. Adviser: Julie A. Albrecht
Abstract:
The ability to utilize information systems (IS) effectively is becoming a necessity for business professionals. However, individuals differ in their abilities to use IS effectively, with some achieving exceptional performance in IS use and others being unable to do so. Therefore, developing a set of skills and attributes to achieve IS user competency, or the ability to realize the fullest potential and the greatest performance from IS use, is important. Various constructs have been identified in the literature to describe IS users with regard to their intentions to use IS and their frequency of IS usage, but studies to describe the relevant characteristics associated with highly competent IS users, or those who have achieved IS user competency, are lacking. This research develops a model of IS user competency by using the Repertory Grid Technique to identify a broad set of characteristics of highly competent IS users. A qualitative analysis was carried out to identify categories and sub-categories of these characteristics. Then, based on the findings, a subset of the model of IS user competency focusing on the IS-specific factors – domain knowledge of and skills in IS, willingness to try and to explore IS, and perception of IS value – was developed and validated using the survey approach. The survey findings suggest that all three factors are relevant and important to IS user competency, with willingness to try and to explore IS being the most significant factor. This research generates a rich set of factors explaining IS user competency, such as perception of IS value. The results not only highlight characteristics that can be fostered in IS users to improve their performance with IS use, but also present research opportunities for IS training and potential hiring criteria for IS users in organizations.
Abstract:
Evaluations of measurement invariance provide essential construct validity evidence. However, the quality of such evidence is partly dependent upon the validity of the resulting statistical conclusions. The presence of Type I or Type II errors can render measurement invariance conclusions meaningless. The purpose of this study was to determine the effects of categorization and censoring on the behavior of the chi-square/likelihood ratio test statistic and two alternative fit indices (CFI and RMSEA) under the context of evaluating measurement invariance. Monte Carlo simulation was used to examine Type I error and power rates for the (a) overall test statistic/fit indices, and (b) change in test statistic/fit indices. Data were generated according to a multiple-group single-factor CFA model across 40 conditions that varied by sample size, strength of item factor loadings, and categorization thresholds. Seven different combinations of model estimators (ML, Yuan-Bentler scaled ML, and WLSMV) and specified measurement scales (continuous, censored, and categorical) were used to analyze each of the simulation conditions. As hypothesized, non-normality increased Type I error rates for the continuous scale of measurement and did not affect error rates for the categorical scale of measurement. Maximum likelihood estimation combined with a categorical scale of measurement resulted in more correct statistical conclusions than the other analysis combinations. For the continuous and censored scales of measurement, the Yuan-Bentler scaled ML resulted in more correct conclusions than normal-theory ML. The censored measurement scale did not offer any advantages over the continuous measurement scale. Comparing across fit statistics and indices, the chi-square-based test statistics were preferred over the alternative fit indices, and ΔRMSEA was preferred over ΔCFI. Results from this study should be used to inform the modeling decisions of applied researchers. However, no single analysis combination can be recommended for all situations. Therefore, it is essential that researchers consider the context and purpose of their analyses.
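For reference, the two alternative fit indices examined here are simple functions of the model and baseline chi-square statistics. The sketch below uses one common single-group form of each formula with hypothetical values; it is not code from the study, and multi-group adjustments are omitted.

```python
def rmsea(chi2, df, n):
    """Root mean square error of approximation, one common single-group form."""
    return (max(chi2 - df, 0.0) / (df * (n - 1))) ** 0.5

def cfi(chi2_model, df_model, chi2_baseline, df_baseline):
    """Comparative fit index from model and baseline (null-model) chi-squares."""
    d_model = max(chi2_model - df_model, 0.0)
    d_baseline = max(chi2_baseline - df_baseline, d_model, 0.0)
    return 1.0 - d_model / d_baseline if d_baseline > 0 else 1.0

# Hypothetical values for a single fitted CFA model and its baseline model.
print("RMSEA:", round(rmsea(chi2=85.0, df=48, n=500), 3))
print("CFI:  ", round(cfi(85.0, 48, 1200.0, 66), 3))
```

The ΔCFI and ΔRMSEA statistics compared in the study are simply the differences between such values for nested invariance models.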
Abstract:
Stage-structured population models predict transient population dynamics when the population deviates from the stable stage distribution. Ecologists' interest in transient dynamics is growing because populations regularly deviate from the stable stage distribution, which can lead to transient dynamics that differ significantly from the stable stage dynamics. Because the structure of a population matrix (i.e., the number of life-history stages) can influence the predicted scale of the deviation, we explored the effect of matrix size on predicted transient dynamics and the resulting amplification of population size. First, we experimentally measured the transition rates between the different life-history stages and the adult fecundity and survival of the aphid Acyrthosiphon pisum. Second, we used these data to parameterize models with different numbers of stages. Third, we compared model predictions with empirically measured transient population growth following the introduction of a single adult aphid. We found that the models with the largest number of life-history stages predicted the largest transient population growth rates, but in all models there was a considerable discrepancy between predicted and empirically measured transient peaks and a dramatic underestimation of final population sizes. For instance, the mean population size after 20 days was 2394 aphids, compared to the highest predicted population size of 531 aphids; the predicted asymptotic growth rate (λmax) was nevertheless consistent with the experiments. Possible explanations for this discrepancy are discussed. Includes 4 supplemental files.
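A minimal sketch of the kind of stage-structured projection involved, starting from a single adult (far from the stable stage distribution) and comparing transient growth with the asymptotic rate λmax, is shown below; the matrix entries are hypothetical and are not the experimentally measured aphid rates.

```python
import numpy as np

# Hypothetical 4-stage projection matrix (three nymph stages and an adult stage);
# the paper parameterizes such matrices from measured aphid transition rates.
A = np.array([
    [0.0, 0.0, 0.0, 4.5],   # fecundity: offspring per adult per time step
    [0.8, 0.0, 0.0, 0.0],   # stage 1 -> stage 2 transition
    [0.0, 0.8, 0.0, 0.0],   # stage 2 -> stage 3 transition
    [0.0, 0.0, 0.8, 0.9],   # stage 3 -> adult transition, adult survival
])

# Start from a single adult, as in the introduction experiment.
n = np.array([0.0, 0.0, 0.0, 1.0])
for t in range(1, 11):
    n = A @ n
    print(f"t={t:2d}  total population = {n.sum():8.1f}")

lambda_max = np.max(np.abs(np.linalg.eigvals(A)))
print("asymptotic growth rate lambda_max =", round(lambda_max, 3))
```

The early time steps of such a projection show the transient amplification driven by the initial stage structure, while the long-run growth rate converges to λmax.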
Abstract:
The enzymatically catalyzed template-directed extension of a ssDNA/primer complex is an important reaction of extraordinary complexity. The DNA polymerase does not merely facilitate the insertion of dNMP; it also performs rapid screening of substrates to ensure a high degree of fidelity. Several kinetic studies have determined rate constants and equilibrium constants for the elementary steps that make up the overall pathway. This information is used to develop a macroscopic kinetic model, using an approach described by Ninio [Ninio J., 1987. Alternative to the steady-state method: derivation of reaction rates from first-passage times and pathway probabilities. Proc. Natl. Acad. Sci. U.S.A. 84, 663–667]. The principal idea of the Ninio approach is to track a single template/primer complex over time and to identify the expected behavior. The average time to insert a single nucleotide is a weighted sum of several terms, including the actual time to insert a nucleotide plus delays due to polymerase detachment from either the ternary (template-primer-polymerase) or quaternary (+nucleotide) complexes, and time delays associated with the identification and ultimate rejection of an incorrect nucleotide from the binding site. The passage times of all events and their probabilities of occurrence are expressed in terms of the rate constants of the elementary steps of the reaction pathway. The model accounts for variations in the average insertion time with different nucleotides as well as the influence of the G+C content of the sequence in the vicinity of the insertion site. Furthermore, the model provides estimates of error frequencies. If nucleotide extension is recognized as a competition between successful insertions and time-delaying events, it can be described as a binomial process with a probability distribution. The distribution gives the probability of extending a primer/template complex by a certain number of base pairs and, in general, maps annealed complexes into extension products.
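The full rate-constant bookkeeping is in the paper itself; the sketch below only illustrates the two ideas stated in the abstract, the average insertion time as a weighted sum of delay terms and the binomial distribution of extension lengths, using entirely hypothetical numbers.

```python
from math import comb

def mean_insertion_time(direct_time, delay_events):
    """Average time to insert one nucleotide: the direct insertion time plus a
    weighted sum of delays, each weighted by its expected number of occurrences
    per successful insertion. Hypothetical values; the paper derives the weights
    and passage times from the elementary rate constants."""
    return direct_time + sum(n_expected * t for n_expected, t in delay_events)

delays = [
    (0.08, 0.40),   # wrong-nucleotide binding and rejection events
    (0.02, 2.00),   # polymerase detachment / re-binding events
]
print("mean time per inserted nucleotide:", mean_insertion_time(0.05, delays), "s")

def extension_probability(n_attempts, p_success, k):
    """Binomial probability of extending the primer/template by exactly k bases
    out of n attempted insertion cycles."""
    return comb(n_attempts, k) * p_success**k * (1 - p_success) ** (n_attempts - k)

print("P(extend by 10 bases in 12 cycles):", round(extension_probability(12, 0.9, 10), 3))
```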
Abstract:
The starting point of this article is the question "How to retrieve fingerprints of rhythm in written texts?" We address this problem in the case of Brazilian and European Portuguese. These two dialects of Modern Portuguese share the same lexicon, and most of the sentences they produce are superficially identical. Yet they are conjectured, on linguistic grounds, to implement different rhythms. We show that this linguistic question can be formulated as a problem of model selection in the class of variable-length Markov chains. To carry out this approach, we compare texts from European and Brazilian Portuguese, previously encoded according to some basic rhythmic features of the sentences that can be retrieved automatically. This is an entirely new approach from the linguistic point of view. Our statistical contribution is the introduction of the smallest maximizer criterion, a constant-free procedure for model selection. As a by-product, this provides a solution to the problem of the optimal choice of the penalty constant when using the BIC to select a variable-length Markov chain. Besides proving the consistency of the smallest maximizer criterion when the sample size diverges, we also present a simulation study comparing our approach with both standard BIC selection and the Peres-Shields order estimation. Applied to the linguistic sample constituted for our case study, the smallest maximizer criterion assigns different context-tree models to the two dialects of Portuguese. The features of the selected models are compatible with current conjectures discussed in the linguistic literature.
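The smallest maximizer criterion itself is beyond an abstract-level sketch, but the penalized likelihood (BIC) it refines can be illustrated with fixed-order Markov chains as a simplified stand-in for variable-length context trees. Everything below is illustrative: the encoding of rhythmic features and the actual criterion follow the paper, and the sequence shown is made up.

```python
import math
from collections import Counter

def bic_markov(seq, order, alphabet, c=0.5):
    """BIC score of a fixed-order Markov chain fitted to `seq` (a string over
    `alphabet`). `c` is the penalty constant whose optimal choice the smallest
    maximizer criterion addresses for variable-length models. Simplified sketch."""
    counts = Counter()
    context_totals = Counter()
    for i in range(order, len(seq)):
        ctx, sym = seq[i - order:i], seq[i]
        counts[(ctx, sym)] += 1
        context_totals[ctx] += 1
    loglik = sum(n * math.log(n / context_totals[ctx]) for (ctx, _), n in counts.items())
    n_params = len(context_totals) * (len(alphabet) - 1)
    return loglik - c * n_params * math.log(len(seq))

# Hypothetical encoded text: each symbol stands for a rhythmic feature.
encoded = "ababbaabababbaababab" * 5
for k in (1, 2, 3):
    print(f"order {k}: BIC = {bic_markov(encoded, k, 'ab'):.1f}")
```

The order (or, in the paper, the context tree) with the highest penalized score is selected; the sensitivity of that choice to the constant `c` is exactly the issue the smallest maximizer criterion is designed to remove.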