84 results for Measurement based model identification


Relevance: 100.00%

Abstract:

Effective connectivity studies have recently gained significant attention in the neuroscience community, as electroencephalography (EEG) data, with their high temporal resolution, can give a wider understanding of information flow within the brain. Among the tools used in effective connectivity analysis, Granger causality (GC) has found a prominent place. GC analysis based on strictly causal multivariate autoregressive (MVAR) models does not account for instantaneous interactions among the sources; if such interactions are present, GC based on a strictly causal MVAR model will lead to erroneous conclusions about the underlying information flow. The work presented in this paper therefore applies an extended MVAR (eMVAR) model that accounts for zero-lag interactions. We propose a constrained adaptive Kalman filter (CAKF) approach for eMVAR model identification and demonstrate that it performs better than short-time-windowing-based adaptive estimation when applied to information flow analysis.
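The abstract does not spell out the CAKF algorithm itself. As a rough, unconstrained illustration of the underlying idea, the Python/NumPy sketch below tracks time-varying MVAR coefficients with a standard random-walk Kalman filter; the constraints and the zero-lag (instantaneous) terms of the authors' eMVAR method are not reproduced, and all dimensions and noise levels are invented.

```python
import numpy as np

def kalman_mvar(y, p, q=1e-4, r=1e-2):
    """Track time-varying MVAR(p) coefficients with a random-walk Kalman
    filter: x_t = x_{t-1} + w_t, y_t = H_t x_t + v_t, where the state x_t
    stacks all coefficient matrices and H_t holds the past p samples."""
    n, T = y.shape                 # n channels, T samples
    d = n * n * p                  # size of the stacked coefficient state
    x = np.zeros(d)                # state estimate (vectorised coefficients)
    P = np.eye(d)                  # state covariance
    Q = q * np.eye(d)              # random-walk (process) noise
    R = r * np.eye(n)              # observation noise
    A_hist = np.zeros((T, n, n * p))
    for t in range(p, T):
        past = y[:, t - p:t][:, ::-1].T.ravel()   # [y_{t-1}; ...; y_{t-p}]
        H = np.kron(np.eye(n), past)              # maps state to prediction
        P = P + Q                                 # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        x = x + K @ (y[:, t] - H @ x)             # update
        P = (np.eye(d) - K @ H) @ P
        A_hist[t] = x.reshape(n, n * p)           # one coefficient row per channel
    return A_hist

# Example: two channels of hypothetical data, model order 2
y = np.random.default_rng(0).standard_normal((2, 500))
print(kalman_mvar(y, p=2)[-1].round(2))
```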

Relevance: 100.00%

Abstract:

Mobile eLearning (mLearning) can revolutionise eLearning, given the popularity of smart mobile devices and applications. However, content is king in making this revolution happen, and an effective mLearning system must also address analytical aspects such as the quality of content, the quality of results and the performance of learners. This paper presents a framework for personal mLearning in which a graph-based model, the bipartite graph, is used for content authentication and for identifying the quality of results. Furthermore, we use a statistical estimation process to assess the trustworthiness of the weights in the bipartite graph, with confidence intervals and hypothesis tests as the analytical decision model tools.
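The analytical decision model is only named in the abstract, not specified. A minimal Python sketch of the generic pattern it describes, with hypothetical weights and baseline, might look as follows: the edge weights linking learners to one content item are treated as a sample, and a confidence interval plus a one-sample t-test decide whether the item's quality evidence is trustworthy.

```python
import numpy as np
from scipy import stats

def weight_trust(weights, mu0=0.5, alpha=0.05):
    """Build a (1 - alpha) confidence interval for the mean edge weight of
    a content item and t-test it against a hypothesised baseline mu0."""
    w = np.asarray(weights, dtype=float)
    mean, sem = w.mean(), stats.sem(w)
    ci = stats.t.interval(1 - alpha, df=len(w) - 1, loc=mean, scale=sem)
    t_stat, p_value = stats.ttest_1samp(w, mu0)
    return {"mean": mean, "ci": ci, "p": p_value,
            "trusted": p_value < alpha and mean > mu0}

# Edge weights between one content item and the learners who rated it
print(weight_trust([0.62, 0.71, 0.58, 0.80, 0.66, 0.74]))
```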

Relevance: 100.00%

Abstract:

Objective:
Ongoing evidence is required to support the validity of inferences about change and group differences in the evaluation of health programs, particularly when self-report scales requiring substantial subjectivity in response generation are used as outcome measures. Following this reasoning, the aim of this study was to replicate the factor structure and investigate the measurement invariance of the latest version of the Health Education Impact Questionnaire, a widely used health program evaluation measure.
Methods:
An archived dataset of responses to the most recent version of the English-language Health Education Impact Questionnaire, which uses four rather than six response options (N=3221), was analysed using exploratory structural equation modelling and confirmatory factor analysis appropriate for ordered categorical data. Metric and scalar invariance were studied following recent recommendations in the literature to apply fully invariant unconditional models with the minimum constraints necessary for model identification.
Results:
The original eight-factor structure was replicated, and all but one of the scales (the exception being Self Monitoring and Insight) were found to consist of unifactorial items with reliability ≥0.8 and satisfactory discriminant validity. Configural, metric and scalar invariance were established from pre-test to post-test and across population sub-groups (sex, age, education, ethnic background).
Conclusion:
The results support the high level of interest in the Health Education Impact Questionnaire, particularly for use as a pre-test/post-test measure in experimental studies, other pre-post evaluation designs, and system-level monitoring and evaluation.
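As a loose illustration of the modelling workflow (not the authors' analysis), a confirmatory factor model can be specified and fitted in Python with the third-party semopy package. The factor and item names below are hypothetical, and the estimator shown is plain maximum likelihood rather than one tailored to ordered categorical data.

```python
import pandas as pd
from semopy import Model, calc_stats  # third-party SEM package

# Two heiQ-style factors with invented item names; the real instrument
# has eight factors and a categorical-data estimator.
DESC = """
SelfMonitoring =~ item1 + item2 + item3
SkillAcquisition =~ item4 + item5 + item6
"""

def fit_cfa(df: pd.DataFrame):
    model = Model(DESC)
    model.fit(df)               # default ML estimation on the raw data
    return calc_stats(model)    # fit indices: CFI, TLI, RMSEA, ...

# Configural check: the same structure should fit in every sub-group.
# (Metric/scalar invariance would additionally equate loadings and
# intercepts/thresholds across groups in a joint model; not shown.)
# for name, group in df.groupby("sex"):
#     print(name, fit_cfa(group)[["CFI", "RMSEA"]])
```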

Relevance: 100.00%

Abstract:

In this paper, we propose a data translation model that is potentially a major web service of the next-generation World Wide Web. The technique is in some ways analogous to traditional machine translation, but it goes far beyond past and present understandings of machine translation in both scope and content. To illustrate the new concept of web-services-based data translation, we present a multilingual machine translation electronic dictionary system and its web-services-based model, including generic services and multilingual translation services. The proposed data translation model aims at web services with better ease of use, convenience and efficiency, and with higher accuracy, scalability, self-learning and self-adaptation.
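The service interfaces are not detailed in the abstract. Purely to illustrate the dictionary-backed, self-learning idea, here is a minimal Python sketch in which all class and method names are invented:

```python
class TranslationService:
    """Minimal sketch of a dictionary-backed data translation service.
    Entries map (source_lang, term) -> {target_lang: translation}."""

    def __init__(self):
        self._entries = {}

    def learn(self, src_lang, term, tgt_lang, translation):
        """Self-learning: store a new translation pair for later reuse."""
        self._entries.setdefault((src_lang, term.lower()), {})[tgt_lang] = translation

    def translate(self, src_lang, term, tgt_lang):
        """Generic translation service: exact dictionary lookup."""
        return self._entries.get((src_lang, term.lower()), {}).get(tgt_lang)

svc = TranslationService()
svc.learn("en", "model", "de", "Modell")
print(svc.translate("en", "model", "de"))   # -> Modell
print(svc.translate("en", "model", "fr"))   # -> None (not yet learned)
```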

Relevance: 100.00%

Abstract:

How to provide cost-effective strategies for software testing has long been a research focus in software engineering. Many researchers have addressed the effectiveness and quality metrics of software testing, and many interesting results have been obtained. However, one issue of paramount importance in software testing has been left unaddressed: the intrinsically imprecise and uncertain relationships within testing metrics. To this end, a new quality and effectiveness measurement based on fuzzy logic is proposed. Software quality features and analogy-based reasoning are discussed, which can address quality and effectiveness consistency between different test projects. Experimental results are also provided to verify the proposed measurement.
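The paper's membership functions and rule base are not given in the abstract. The toy Python sketch below shows the general fuzzy logic pattern with two invented testing metrics and thresholds: fuzzify crisp metrics, combine them with min/max rules, and defuzzify to a single quality score.

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def test_quality(defect_density, coverage):
    """Combine two testing metrics into a fuzzy quality score in [0, 1]."""
    low_defects = tri(defect_density, -0.1, 0.0, 0.5)  # per KLOC, scaled
    high_cover = tri(coverage, 0.5, 1.0, 1.1)
    good = min(low_defects, high_cover)          # rule: low defects AND high coverage
    poor = max(1 - low_defects, 1 - high_cover)  # rule: high defects OR low coverage
    return good / (good + poor) if good + poor else 0.5

print(test_quality(defect_density=0.1, coverage=0.9))  # -> 0.8
```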

Relevance: 100.00%

Abstract:

In this paper, a multi-agent model for a robotic assembly system is presented. Firstly, an organization model is used to construct the multi-agent model. Secondly, a dynamic self-organizing method is put forward by which the agents of the robotic system bid for and contract operations. Thirdly, a real multi-agent robotic system is built and assembly experiments are carried out. Finally, the experimental results confirm that the multi-agent robotic system exhibits flexibility, adaptability and stability.
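The bidding and contracting protocol is not specified in the abstract. A contract-net-style sketch in Python, with invented agents and operations, illustrates how assembly operations can be announced and awarded to the cheapest bidder:

```python
import random

class RobotAgent:
    def __init__(self, name, skills):
        self.name, self.skills = name, skills   # skill -> base cost estimate

    def bid(self, operation):
        """Return a cost bid for the announced operation, or None."""
        cost = self.skills.get(operation)
        return None if cost is None else cost * random.uniform(0.9, 1.1)

def contract(operation, agents):
    """Contract-net style round: announce, collect bids, award to cheapest."""
    bids = {a.name: a.bid(operation) for a in agents}
    bids = {name: b for name, b in bids.items() if b is not None}
    return min(bids, key=bids.get) if bids else None

agents = [RobotAgent("arm1", {"pick": 2.0, "insert": 5.0}),
          RobotAgent("arm2", {"pick": 3.0, "screw": 4.0})]
for op in ["pick", "insert", "screw"]:
    print(op, "->", contract(op, agents))
```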

Relevance: 100.00%

Abstract:

Much research in teacher education has concentrated on individual elements of effective teaching, such as the best way to teach content. There has been less emphasis on understanding the complex process of effective teaching in its entirety. Teacher educators are in the business of creating effective teachers and as such need a clear, evidence-based model of an effective teacher. We believe that current models of effective teachers are limited: they fail to give sufficient emphasis to many important aspects of effective teaching, and they fail to integrate these components into a coherent whole that would provide a language for discussing, and a conceptual framework for developing, teacher education.

This paper discusses the elements needed for a model of an effective teacher. This model emphasises not only the domains of effective teaching that receive most of the attention in teacher education and evaluation, namely content knowledge, pedagogical knowledge and, more recently, pedagogical content knowledge, but also the teacher's personal knowledge and knowledge of context. We suggest that what matters is not just the knowledge teachers have in these domains but the way this knowledge overlaps and interacts, both within the teacher and with the teacher's physical, social, intellectual and emotional environment.

An examination of the effective teacher challenges teacher educators to rethink not only the way we educate both preservice and inservice teachers but also the way we assess, judge and reward teachers.

Relevance: 100.00%

Abstract:

How to provide cost-effective strategies for software testing has long been a research focus in software engineering. Many researchers have addressed the effectiveness and quality metrics of software testing, and many interesting results have been obtained. However, one issue of paramount importance in software testing has been left unaddressed: the intrinsically imprecise and uncertain relationships within testing metrics. To this end, a new quality and effectiveness measurement based on fuzzy logic is proposed. Related issues such as software quality features and fuzzy reasoning for measuring test project similarity are discussed, which can address quality and effectiveness consistency between different test projects. Experiments were conducted to verify the proposed measurement using real data from actual software testing projects. The experimental results show that the proposed fuzzy-logic-based metrics are effective and efficient for measuring and evaluating the quality and effectiveness of test projects.
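The paper's similarity measure is not reproduced here. As one standard way to compare test projects described by fuzzy feature vectors, the Python sketch below uses a Jaccard-style min/max ratio; the feature vectors and the 0.8 threshold are invented.

```python
import numpy as np

def fuzzy_similarity(a, b):
    """Similarity of two test projects described by fuzzy feature vectors
    in [0, 1]: ratio of elementwise min-sum to max-sum (Jaccard-style)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

# Hypothetical features: [coverage, tester experience, code churn]
past = np.array([0.8, 0.6, 0.3])
current = np.array([0.7, 0.5, 0.4])
if fuzzy_similarity(past, current) > 0.8:
    print("Projects similar: reuse the past project's quality baseline.")
```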

Relevance: 100.00%

Abstract:

Recently, much attention has been given to mass spectrometry (MS) based disease classification, diagnosis, and protein-based biomarker identification. As in microarray-based investigations, proteomic data generated by such high-throughput experiments often have a high feature-to-sample ratio. Moreover, biological information and patterns are confounded with noise, redundancy and outliers. The development of algorithms and procedures for analysing and interpreting such data is therefore of paramount importance. In this paper, we propose a hybrid system for analysing such high-dimensional data. The proposed method uses a k-means clustering based feature extraction and selection procedure to bridge filter and wrapper selection methods. The potentially informative mass-to-charge (m/z) markers selected by filters are subjected to k-means clustering for correlation and redundancy reduction, and a multi-objective genetic algorithm selector is then employed to identify discriminative m/z markers from the resulting clusters. Experimental results indicate that the proposed method is suitable for m/z biomarker selection and MS-based sample classification.
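A minimal sketch of the k-means bridging step might look as follows (Python with scikit-learn, hypothetical data sizes); the multi-objective genetic algorithm stage is only indicated in a comment, since the paper's objectives and encoding are not given in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_representatives(X, k):
    """Redundancy reduction: cluster the m/z features (columns of X) with
    k-means and keep, per cluster, the feature closest to the centroid."""
    feats = X.T                                   # one row per m/z feature
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
    reps = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(feats[idx] - km.cluster_centers_[c], axis=1)
        reps.append(idx[np.argmin(d)])
    return np.array(reps)

# X: samples x m/z intensities; y: class labels (both synthetic here)
rng = np.random.default_rng(0)
X, y = rng.random((40, 500)), rng.integers(0, 2, 40)
reps = cluster_representatives(X, k=20)
# A multi-objective GA would now search subsets of `reps`, trading off
# classification accuracy on y against subset size; omitted for brevity.
print(reps[:10])
```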

Relevance: 100.00%

Abstract:

The recent emergence of intelligent agent technology and advances in information gathering have been important steps towards efficiently managing and using the vast amount of information now available on the Web to make informed decisions. Many problems must still be overcome, however, before information gathering research can deliver the relevant information that end users require. Good decisions cannot be made without sufficient, timely and correct information: traditionally it is said that knowledge is power, but nowadays sufficient, timely and correct information is power. Gathering relevant information to meet user information needs is therefore the crucial step in making good decisions. The ideal goal of information gathering is to obtain only the information that users need (no more and no less). However, the volume of available information, the diversity of its formats, its inherent uncertainties and its distribution across many locations (e.g. the World Wide Web) hinder the process of gathering the right information to meet user needs. Specifically, two fundamental issues affect the efficiency of information gathering: mismatch and overload. Mismatch means that some information that meets user needs has not been gathered (it is missed), whereas overload means that some gathered information is not what users need.

Traditional information retrieval has developed well over the past twenty years, and the introduction of the Web has changed people's perceptions of it. The task of information retrieval is usually considered to be leading the user to those documents that are relevant to his or her information needs; the complementary function of filtering out irrelevant documents is called information filtering. Research into traditional information retrieval has provided many retrieval models and techniques for representing documents and queries. Nowadays, however, information is becoming highly distributed and increasingly difficult to gather, and user information needs have been found to contain many uncertainties. These observations motivate research into agent-based information gathering. In such systems, intelligent agents obtain commitments from their users and act on the users' behalf to gather the required information; thanks to their intelligence, autonomy and distribution, they can retrieve relevant information from highly distributed, uncertain environments. Current research on agent-based information gathering is divided into single-agent and multi-agent gathering systems, and in both areas open problems remain before such systems can retrieve uncertain information effectively from highly distributed environments.

The aim of this thesis is to develop a theoretical framework for intelligent agents that gather information from the Web, integrating the areas of information retrieval and intelligent agents. The specific contributions are an information filtering model for single-agent systems and a dynamic belief model for information fusion in multi-agent systems. The research results are also supported by the construction of real information gathering agents (e.g. a Job Agent) that help users gather useful information stored in Web sites. In this framework, information gathering agents can describe (or learn) the user's information needs and act like the user to retrieve, filter and/or fuse information.

A rough set based information filtering model is developed to address the problem of overload. The new approach allows users to describe their information needs on user concept spaces rather than on document spaces, and it views a user information need as a rough set over the document space. Rough set decision theory is used to classify new documents into three regions: a positive region, a boundary region and a negative region. Two experiments verify the model and show that the rough set based approach provides an efficient solution to the overload problem.

A dynamic belief model for information fusion in multi-agent environments is also developed. This model has polynomial time complexity, and it is proven that the fusion results are belief (mass) functions. Using this model, a collection fusion algorithm for information gathering agents is presented. The difficult case is where collections may be used by more than one agent; the algorithm uses cooperation between agents to solve this problem in distributed information retrieval systems.

This thesis thus presents solutions to theoretical problems in agent-based information gathering systems, including information filtering models, agent belief modelling and collection fusion, as well as to technical problems such as document classification, the architecture of agent-based information gathering systems and decision making in multi-agent environments. Such information gathering agents will gather relevant information from highly distributed, uncertain environments.
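The thesis's dynamic belief model is not defined in the abstract. As a point of reference for the fusion step, the Python sketch below implements classical Dempster's rule of combination for belief (mass) functions, the standard operation such models build on; the relevance hypotheses and masses are invented.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to
    masses) with Dempster's rule; conflict is renormalised away."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two agents' beliefs about a document's relevance
REL, IRR = frozenset({"rel"}), frozenset({"irr"})
EITHER = REL | IRR
m1 = {REL: 0.7, EITHER: 0.3}
m2 = {REL: 0.5, IRR: 0.2, EITHER: 0.3}
print(dempster_combine(m1, m2))
```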

Relevance: 100.00%

Abstract:

The Generalized Estimating Equations (GEE) method is one of the most commonly used statistical methods for the analysis of longitudinal data in epidemiological studies. The method requires a working correlation structure to be specified for the repeated measures of a subject's outcome variable. However, because GEE is developed on the basis of quasi-likelihood theory, statistical criteria for selecting the best correlation structure and the best subset of explanatory variables have only recently become available; maximum likelihood based model selection methods, such as the widely used Akaike Information Criterion (AIC), are not directly applicable to GEE. Pan (2001) proposed a selection criterion called QIC that can be used to select both the best correlation structure and the best subset of explanatory variables. Based on the QIC method, we developed a computing program, written in Stata, to calculate the QIC value for a range of distributions, link functions and correlation structures. In this article, we introduce this program and demonstrate how to use it to select the most parsimonious model in GEE analyses of longitudinal data through several representative examples.
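The authors' program is written in Stata, but the same selection workflow can be illustrated in Python with statsmodels, which also implements QIC; the data below are synthetic and the variable names hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic longitudinal data: repeated counts per subject
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(30), 4),
    "time": np.tile(np.arange(4), 30),
    "treatment": np.repeat(rng.integers(0, 2, 30), 4),
})
df["y"] = rng.poisson(lam=np.exp(0.2 * df.time + 0.5 * df.treatment))

# Fit the same model under candidate working correlation structures and
# keep the one with the smallest QIC (the criterion Pan proposed).
for name, cov in [("independence", sm.cov_struct.Independence()),
                  ("exchangeable", sm.cov_struct.Exchangeable())]:
    res = smf.gee("y ~ time + treatment", groups="subject", data=df,
                  family=sm.families.Poisson(), cov_struct=cov).fit()
    print(name, res.qic())   # (QIC, QICu): smaller QIC is preferred
```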

Relevance: 100.00%

Abstract:

Challenges the existing normal science in project management, with its requirement of high certainty in scope definition, and develops a contingency-based model for project/program management (PROJAM). The PROJAM concept is theoretically supported through the identification of eight interrelated aspects: uncertainty, maturity, stakeholders, Pareto, reward, transparency, partnering, and program management. The ideas presented represent a radical change to the existing project management body of knowledge and expand the scope of project management to deal with complex projects/programs, including organisational change, core-function outsourcing (asset management) and IT hardware and software outsourcing.

Relevance: 100.00%

Abstract:

Bangladesh exemplifies the complex challenges facing densely populated coastal regions. The pressures on the country are immense: around 145 million people live within an area of just 145,000 sq-km at the confluence of three major river systems, the Ganges, the Brahmaputra and the Meghna. While progress has been made, poverty remains widespread, with around 39% of children under five malnourished. Most of the country's land mass lies below 10 m above sea level, with considerable areas at sea level, leading to frequent and prolonged flooding during the monsoons. Sea level rise is leading to more flooding as storm surges rise off higher sea levels and push further inland. Higher sea levels also result in salt-water intrusion into freshwater coastal aquifers and estuaries, contaminating drinking water and farmland. Warmer ocean waters are also expected to lead to an increase in the intensity of tropical storms.

Bangladesh depends on the South Asian summer monsoon for most of its rainfall, which is expected to increase, leading to more flooding. Climate scientists are also concerned about the stability of the monsoon and the potential for it to undergo a nonlinear phase shift to a drier regime. Bangladesh faces an additional hydrological challenge in that the Ganges and Brahmaputra rivers both rise in the Himalaya-Tibetan Plateau region, where glaciers are melting rapidly. The Intergovernmental Panel on Climate Change (IPCC) concluded that rapid melting is expected to increase river flows until around the late 2030s, by which time the glaciers are expected to have shrunk from their 1995 extent of 500,000 sq-km to an expected 100,000 sq-km. After the 2030s, river flows could drop dramatically, turning the great glacier-fed rivers of Asia into seasonal, monsoon-fed rivers. The IPCC concluded that, as a result, water shortages in Asia could affect more than a billion people by the 2050s. Over the same period, crop yields are expected to decline by up to 30% in South Asia due to a combination of drought and crop heat stress. Bangladesh is therefore likely to face substantial challenges in the coming decades.

In order to adequately understand the complex, dynamic, spatial and nonlinear challenges facing Bangladesh, an integrated model of the system is required. An agent-based model (ABM) permits the dynamic interactions of the economic, social, political, geographic, environmental and epidemiological dimensions of climate change impacts and adaptation policies to be integrated via a modular approach. Integrating these dimensions, including nonlinear threshold events such as mass migrations or the outbreak of conflicts or epidemics, is possible to a far greater degree with an ABM than with most other approaches.

We are developing a prototype ABM, implemented in NetLogo, to examine the dynamic impacts on poverty, migration, mortality and conflict from climate change in Bangladesh from 2001 to 2100. The model employs GIS data, sub-district level census and economic data, and a coarse-graining methodology that allows model statistics to be generated on a national scale from local dynamic interactions. This approach allows a more realistic treatment of distributed spatial events and of heterogeneity across the country. The aim is not to generate precise predictions of Bangladesh's evolution, but to develop a framework that can be used for integrated scenario exploration. This paper is an initial report on progress on this project. So far the prototype model has demonstrated the desirability and feasibility of integrating the different dimensions of this complex adaptive system; once completed, it is intended to serve as the basis for a more detailed policy-oriented model.
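As a flavour of the approach (not the authors' NetLogo model), a toy agent-based sketch in pure Python might couple flood shocks to distress migration; every parameter below is invented.

```python
import random

random.seed(42)

class Household:
    def __init__(self, region, wealth):
        self.region, self.wealth = region, wealth

    def step(self, flooded):
        """Lose assets in a flood; migrate if wealth falls below a bar."""
        if flooded[self.region]:
            self.wealth *= 0.6
        if self.wealth < 0.3:
            self.region = "city"          # distress migration
        else:
            self.wealth += 0.05           # normal accumulation

regions = ["coast", "delta", "inland"]
agents = [Household(random.choice(regions), random.random()) for _ in range(1000)]
for year in range(2001, 2011):
    # Coastal regions flood more often than inland ones in this toy setup
    flooded = {r: random.random() < (0.4 if r != "inland" else 0.1)
               for r in regions + ["city"]}
    for a in agents:
        a.step(flooded)
print("households in city by 2010:", sum(a.region == "city" for a in agents))
```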

Relevance: 100.00%

Abstract:

Recent developments in ecological statistics have reached behavioral ecology, and an increasing number of studies now apply analytical tools that incorporate alternatives to the conventional null hypothesis testing based on significance levels. However, these approaches continue to receive mixed support in our field. Because our statistical choices can influence research design and the interpretation of data, there is a compelling case for reaching consensus on statistical philosophy and practice. Here, we provide a brief overview of the recently proposed approaches and open an online forum for future discussion (https://bestat.ecoinformatics.org/). From the perspective of practicing behavioral ecologists relying on either correlative or experimental data, we review the most relevant features of information theoretic approaches, Bayesian inference, and effect size statistics. We also discuss concerns about data quality, missing data, and repeatability. We emphasize the necessity of moving away from a heavy reliance on statistical significance while focusing attention on biological relevance and effect sizes, with the recognition that uncertainty is an inherent feature of biological data. Furthermore, we point to the importance of integrating previous knowledge in the current analysis, for which novel approaches offer a variety of tools. We note, however, that the drawbacks and benefits of these approaches have yet to be carefully examined in association with behavioral data. Therefore, we encourage a philosophical change in the interpretation of statistical outcomes, whereas we still retain a pluralistic perspective for making objective statistical choices given the uncertainties around different approaches in behavioral ecology. We provide recommendations on how these concepts could be made apparent in the presentation of statistical outputs in scientific papers.
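As a small illustration of the shift the authors advocate, the Python sketch below reports an effect size (Cohen's d) with a bootstrap confidence interval instead of a bare significance test; the two samples are simulated.

```python
import numpy as np

rng = np.random.default_rng(7)

def cohens_d(a, b):
    """Standardised mean difference with a pooled standard deviation."""
    na, nb = len(a), len(b)
    sp = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                 / (na + nb - 2))
    return (a.mean() - b.mean()) / sp

# Hypothetical data: e.g. courtship rates in two treatment groups
a, b = rng.normal(10.0, 2.0, 25), rng.normal(8.8, 2.0, 25)
boot = np.array([cohens_d(rng.choice(a, len(a)), rng.choice(b, len(b)))
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"d = {cohens_d(a, b):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```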

Relevance: 100.00%

Abstract:

Introducing a haptic interface to microrobotic intracellular injection has many beneficial implications; in particular, the haptic device provides force feedback to the bio-operator's hand. This paper introduces a 3D particle-based model to simulate the deformation of the cell membrane and the corresponding cellular forces during microrobotic cell injection. The model is based on the kinematics and dynamics of spring-damper multi-particle joints, taking visco-elastic fluidic properties into account. It simulates the indentation force feedback as well as the visual deformation of the cell during microinjection. The model is verified using experimental data from zebrafish embryo microinjection; the results demonstrate that the developed cell model accurately estimates zebrafish embryo deformation and force feedback.
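The authors' 3D visco-elastic model is not reproduced here. A 2D toy version (Python/NumPy, invented stiffness, damping and force values) conveys the spring-damper particle idea: particles on a ring are joined by spring-damper links, one particle is loaded to mimic the injector tip, and semi-implicit Euler integration yields the deformed shape.

```python
import numpy as np

def simulate_membrane(n=40, k=50.0, c=0.8, dt=1e-3, steps=2000, push=2.0):
    """Semi-implicit Euler on a ring of unit-mass particles joined by
    spring-damper links; a constant force on one particle is the injector."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    pos = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # rest shape
    vel = np.zeros_like(pos)
    rest = np.linalg.norm(pos[0] - pos[1])                   # link length
    for _ in range(steps):
        force = np.zeros_like(pos)
        for i in range(n):                      # spring-damper link (i, i+1)
            j = (i + 1) % n
            d = pos[j] - pos[i]
            L = np.linalg.norm(d)
            u = d / L
            f = k * (L - rest) * u + c * np.dot(vel[j] - vel[i], u) * u
            force[i] += f
            force[j] -= f
        force[0] += np.array([-push, 0.0])      # injector pushes inward
        vel += dt * force                       # unit particle mass
        vel[n // 2] = 0.0                       # opposite side held fixed
        pos += dt * vel
    return pos

pos = simulate_membrane()
print("indentation depth:", 1.0 - pos[0, 0])    # particle 0 started at x = 1
```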