920 results for INTELLIGENCE SYSTEMS METHODOLOGY
Abstract:
Introduction. Feature usage is a prerequisite to realising the benefits of investments in feature-rich systems. We propose that conceptualising the dependent variable 'system use' as 'level of use' and specifying it as a formative construct has greater value for measuring the post-adoption use of feature-rich systems. We then validate the content of the construct as a first step in developing a research instrument to measure it. The context of our study is the post-adoption use of electronic medical records (EMR) by primary care physicians. Method. An initial literature review of the empirical context defines the scope based on prior studies. Having identified core features from the literature, we refine them further with the help of experts in a consensus-seeking process that follows the Delphi technique. Results. The methodology was successfully applied to EMRs, which were selected as an example of feature-rich systems. A review of EMR usage and regulatory standards provided the feature input for the first round of the Delphi process. A panel of experts then reached consensus after four rounds, identifying ten task-based features that serve as indicators of level of use. Conclusions. To study why some users deploy more advanced features than others, theories of post-adoption require a rich formative dependent variable that measures level of use. We have demonstrated that a context-sensitive literature review followed by refinement through a consensus-seeking process is a suitable methodology for validating the content of this dependent variable. This is the first step of instrument development, prior to statistical confirmation with a larger sample.
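The abstract does not say how agreement across the four Delphi rounds was measured; a common stopping statistic for such panels is Kendall's coefficient of concordance W. A minimal sketch with hypothetical expert rankings of candidate features:

```python
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's coefficient of concordance for an (experts x items) rank matrix.

    W ranges from 0 (no agreement) to 1 (perfect agreement); Delphi studies
    often stop iterating once W exceeds a preset threshold (e.g. 0.7).
    """
    m, n = ranks.shape                      # m experts, n candidate features
    rank_sums = ranks.sum(axis=0)           # rank sum per feature
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical round: 5 experts each rank 10 candidate EMR features (1 = most important).
rng = np.random.default_rng(0)
round_ranks = np.array([rng.permutation(10) + 1 for _ in range(5)])
print(f"Kendall's W = {kendalls_w(round_ranks):.2f}")
```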
Abstract:
The Wetland and Wetland CH4 Intercomparison of Models Project (WETCHIMP) was created to evaluate our present ability to simulate large-scale wetland characteristics and corresponding methane (CH4) emissions. A multi-model comparison is essential to evaluate the key uncertainties in the mechanisms and parameters leading to methane emissions. Ten modelling groups joined WETCHIMP to run eight global and two regional models with a common experimental protocol using the same climate and atmospheric carbon dioxide (CO2) forcing datasets. We reported the main conclusions from the intercomparison effort in a companion paper (Melton et al., 2013). Here we provide technical details for the six experiments, which included an equilibrium, a transient, and an optimized run, plus three sensitivity experiments (temperature, precipitation, and atmospheric CO2 concentration). The diversity of approaches used by the models is summarized through a series of conceptual figures and is used to evaluate the wide range of wetland extent and CH4 fluxes predicted by the models in the equilibrium run. We discuss relationships among the various approaches and patterns of consistency among the model predictions. Within this group of models, there are three broad classes of methods used to estimate wetland extent: prescribed based on wetland distribution maps, prognostic relationships between hydrological states based on satellite observations, and explicit hydrological mass balances. A larger variety of approaches was used to estimate the net CH4 fluxes from wetland systems. Even though the modelling of wetland extent and CH4 emissions has progressed significantly over recent decades, large uncertainties still exist when estimating CH4 emissions: there is little consensus on model structure or complexity due to knowledge gaps, the different aims of the models, and the range of temporal and spatial resolutions of the models.
Abstract:
This thesis is concerned with the development of improved management practices in indigenous chicken production systems, in a research process that includes participatory approaches with smallholder farmers and other stakeholders in Kenya. The research process involved a wide range of activities, including on-station experiments, field surveys, stakeholder consultations in workshops, seminars and visits, and on-farm farmer participatory research to evaluate the effect of some improved management interventions on the production performance of indigenous chickens. The participatory research was greatly informed by the collective experiences and lessons of the previous activities. The on-station studies focused on hatching, growth and nutritional characteristics of the indigenous chickens. Four research publications from these studies are included in this thesis. Quantitative statistical analyses were applied: growth models estimated with non-linear regressions for the growth characteristics, chi-square determinations to investigate differences among reciprocal crosses of indigenous chickens, and general linear models with covariance determination for the nutrition study. The on-station studies brought greater understanding of the performance and production characteristics of indigenous chickens and of the influence of management practices on these characteristics. The field surveys and stakeholder consultations helped in understanding the overarching issues affecting the productivity of indigenous chicken systems and their place in the livelihoods of smallholder farmers. These activities created strong networking opportunities with stakeholders from a wide spectrum. The on-farm farmer participatory research involved the selection of 200 farmers in five regions, followed by training and the introduction of interventions on improved management practices, which included housing, vaccination, deworming and feed supplementation. Implementation and monitoring were mainly done by individual farmers continuously for close to one and a half years. Six quarterly visits to the farms were made by the research team to monitor and support on-going project activities. The data collected have been analysed for 5 consecutive 3-monthly periods. Descriptive and inferential statistics were applied to analyse the data on treatment applications, production characteristics and flock demography. Out of the 200 farmers initially selected, 173 had records on treatment applications and flock demography while 127 had records on production characteristics. The demographic analysis, using a dissimilarity index of flock size, produced 7 distinct farm groups from among the 173 farms. Two of these farm groups were represented in similar numbers in each of the five regions. The research process also involved a number of dissemination and communication strategies that have made the process and project outcomes accessible to a wider readership locally and globally. These include workshops, seminars, field visits and consultations, local and international conferences, electronic conferencing, publications and personal communication via email and conventional post. A number of research and development proposals were also developed based on the knowledge and experiences gained from the research process.
The thesis captures the research process activities and outcomes in 8 chapters: introduction, theoretical concepts underpinning farmer participatory research (FPR), research methodology and process, on-station research output, FPR descriptive statistical analysis, FPR inferential statistical analysis on production characteristics, FPR demographic analysis, and conclusions. Various research approaches, both quantitative and qualitative, have been applied in the research process, indicating the possibilities and importance of combining both for a greater understanding of the issues being studied. In our case, participatory studies of the improved management of indigenous chickens indicate their potential importance as livelihood assets for poor people.
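The growth analysis above is described only as "growth models estimated with non-linear regressions"; the specific model is not named. A minimal sketch of one common choice for poultry growth, a Gompertz curve fitted with scipy (ages and body weights below are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, k):
    """Gompertz growth curve: asymptotic weight a, shape b, rate k."""
    return a * np.exp(-b * np.exp(-k * t))

# Hypothetical data: age (weeks) vs. mean body weight (g) of indigenous chickens.
age = np.array([2, 4, 6, 8, 10, 12, 14, 16, 18, 20], dtype=float)
weight = np.array([55, 120, 230, 380, 560, 750, 930, 1080, 1200, 1290], dtype=float)

# Non-linear least squares; p0 is a rough starting guess for the parameters.
params, _ = curve_fit(gompertz, age, weight, p0=(1500, 4, 0.2))
a, b, k = params
print(f"asymptotic weight = {a:.0f} g, b = {b:.2f}, rate k = {k:.3f} per week")
```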
Abstract:
Many communication signal processing applications involve modelling and inverting complex-valued (CV) Hammerstein systems. We develop a new CV B-spline neural network approach for efficient identification of the CV Hammerstein system and effective inversion of the estimated CV Hammerstein model. Specifically, the CV nonlinear static function in the Hammerstein system is represented using the tensor product of two univariate B-spline neural networks. An efficient alternating least squares estimation method is adopted for identifying the CV linear dynamic model's coefficients and the CV B-spline neural network's weights, which yields closed-form solutions for both, and this estimation process is guaranteed to converge rapidly to a unique minimum solution. Furthermore, an accurate inversion of the CV Hammerstein system can readily be obtained using the estimated model. In particular, the inversion of the CV nonlinear static function in the Hammerstein system can be calculated effectively using a Gauss-Newton algorithm, which naturally incorporates the efficient De Boor algorithm with both the B-spline curve and first-order derivative recursions. The effectiveness of our approach is demonstrated in an application to the equalisation of Hammerstein channels.
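The paper's own implementation is not reproduced here; the following is a minimal real-valued sketch of the De Boor recursion the abstract refers to (the CV tensor-product case applies it along each axis). The knot vector and coefficients are illustrative:

```python
import numpy as np

def de_boor(x: float, t: np.ndarray, c: np.ndarray, p: int) -> float:
    """Evaluate a degree-p B-spline with knot vector t and coefficients c at x."""
    # Find the knot span k with t[k] <= x < t[k+1].
    k = np.searchsorted(t, x, side="right") - 1
    d = [c[j + k - p] for j in range(p + 1)]   # local coefficients
    for r in range(1, p + 1):                  # triangular De Boor recursion
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Illustrative cubic spline: clamped knot vector and arbitrary coefficients.
p = 3
t = np.array([0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4], dtype=float)
c = np.array([0.0, 1.0, 3.0, 2.0, 4.0, 1.5, 0.5])
print(de_boor(2.5, t, c, p))
```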
Abstract:
Automatic generation of classification rules has become an increasingly popular technique in commercial applications such as Big Data analytics, rule based expert systems and decision making systems. However, a principal problem that arises with most methods for the generation of classification rules is the overfitting of training data. When Big Data is dealt with, this may result in the generation of a large number of complex rules, which may not only increase computational cost but also lower the accuracy in predicting further unseen instances. This has led to the necessity of developing pruning methods for the simplification of rules. In addition, classification rules are used to make predictions after the completion of their generation. Where efficiency is concerned, it is desirable to find the first rule that fires as quickly as possible by searching through a rule set; thus a suitable structure is required to represent the rule set effectively. In this chapter, the authors introduce a unified framework for the construction of rule based classification systems consisting of three operations on Big Data: rule generation, rule simplification and rule representation. The authors also review some existing methods and techniques used for each of the three operations and highlight their limitations, and introduce some novel methods and techniques they have developed recently. These methods and techniques are discussed in comparison to existing ones with respect to the efficient processing of Big Data.
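The chapter's own representation is not shown in the abstract; as a minimal sketch of the "first rule that fires" idea, a decision list stores rules in priority order and prediction stops at the first matching rule (the rules and attributes below are invented):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # predicate over an instance
    label: str                          # class predicted when the rule fires

class DecisionList:
    """Ordered rule set: prediction returns the first rule that fires."""
    def __init__(self, rules: list[Rule], default: str):
        self.rules = rules
        self.default = default          # fallback when no rule fires

    def predict(self, instance: dict) -> str:
        for rule in self.rules:         # linear scan stops at the first match
            if rule.condition(instance):
                return rule.label
        return self.default

# Hypothetical rules over two attributes.
rules = [
    Rule(lambda x: x["age"] < 30 and x["income"] > 50_000, "approve"),
    Rule(lambda x: x["income"] <= 20_000, "reject"),
]
clf = DecisionList(rules, default="review")
print(clf.predict({"age": 25, "income": 60_000}))  # -> approve
```

Pruning in this setting removes or shortens rules that fire rarely or inaccurately, which both shrinks the scan and reduces overfitting.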
Abstract:
Purpose The research objective of this study is to understand how institutional changes to the EU regulatory landscape may affect corresponding institutionalized operational practices within financial organizations. Design/methodology/approach The study adopts an Investment Management System as its case and investigates different implementations of this system within eight financial organizations, predominantly focused on investment banking and asset management activities within capital markets. At the systems vendor site, senior systems consultants and client relationship managers were interviewed. Within the financial organizations, compliance, risk and systems experts were interviewed. Findings The study empirically tests modes of institutional change. Displacement and Layering were found to be the most prevalent modes. However, the study highlights how the outcomes of Displacement and Drift may be similar in effect, as both modes may cause compliance gaps. The research highlights how changes in regulations may create gaps in systems and processes which, in the short term, need to be plugged by manual processes. Practical implications Vendors' ability to manage institutional change caused by Drift, Displacement, Layering and Conversion, and to efficiently and quickly translate institutional variables into structured systems, has the power to ease the pain and cost of compliance as well as to reduce the risk of breaches by reducing the need for interim manual systems. Originality/value The study makes a contribution by applying recent theoretical concepts of institutional change to the topic of regulatory change, and uses this analysis to provide insight into the effects of this new environment.
Abstract:
This paper aims to assess the necessity of updating the intensity-duration-frequency (IDF) curves used in Portugal to design building storm-water drainage systems. A comparative analysis of the design was performed for the three predefined rainfall regions in Portugal using the IDF curves currently in use and those estimated for future decades. Data for recent and future climate conditions simulated by a global and regional climate model chain are used to estimate possible changes in rainfall extremes and their implications for the drainage systems. The methodology includes the disaggregation of precipitation to subhourly scales, the robust development of IDF curves, and the correction of model bias. The results indicate that projected changes are larger for the plains in southern Portugal (5–33%) than for mountainous regions (3–9%), and that these trends are consistent with projected changes in the long-term 95th percentile of daily precipitation throughout the 21st century. The authors conclude that there is a need to review the current precipitation regime classification and to design new drainage systems with larger dimensions to mitigate the projected changes in extreme precipitation.
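For context, IDF curves in Portuguese design practice typically take a power-law form whose parameters are tabulated per rainfall region and return period; "updating the curves" then amounts to re-estimating these parameters from the bias-corrected, disaggregated simulations. The form below is the commonly used one, not necessarily the exact one in this paper:

```latex
% Rainfall intensity i (mm/h) as a function of duration t (min),
% with regional parameters a and b (b < 0) per return period:
i(t) = a \, t^{\,b}
```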
Abstract:
In this work we construct reliable a posteriori estimates for some spatially semidiscrete discontinuous Galerkin schemes applied to nonlinear systems of hyperbolic conservation laws. We make use of appropriate reconstructions of the discrete solution together with the relative entropy stability framework, which leads to error control in the case of smooth solutions. The methodology we use is quite general and allows for a posteriori control of discontinuous Galerkin schemes with the standard flux choices that appear in the approximation of conservation laws. In addition to the analysis, we conduct some numerical benchmarking to test the robustness of the resulting estimator.
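For reference, the relative entropy framework invoked here is the standard Dafermos-DiPerna construction: for an entropy pair (η, q) with η convex, the relative entropy of a state u with respect to a smooth reference state v is

```latex
\eta(u \mid v) \;=\; \eta(u) - \eta(v) - \nabla\eta(v)\cdot(u - v),
```

which for convex η is comparable to |u − v|²; a Gronwall-type argument on its spatial integral yields the error control for smooth solutions mentioned above.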
Abstract:
Geotechnical systems, such as landfills, mine tailings storage facilities (TSFs), slopes, and levees, are required to perform safely throughout their service life, which can span from decades for levees to “in perpetuity” for TSFs. The conventional design practice by geotechnical engineers for these systems utilizes the as-built material properties to predict their performance throughout the required service life. The implicit assumption in this design methodology is that the soil properties are stable through time. This is counter to long-term field observations of these systems, particularly where ecological processes such as plant, animal, biological, and geochemical activity are present. Plant roots can densify soil and/or increase hydraulic conductivity, burrowing animals can increase seepage, biological activity can strengthen soil, geochemical processes can increase stiffness, etc. The engineering soil properties naturally change as a stable ecological system is gradually established following initial construction, and these changes alter system performance. This paper presents an integrated perspective and new approach to this issue, considering ecological, geotechnical, and mining demands and constraints. A series of data sets and case histories are utilized to examine these issues and to propose a more integrated design approach, and consideration is given to future opportunities to manage engineered landscapes as ecological systems. We conclude that soil scientists and restoration ecologists must be engaged in initial project design and geotechnical engineers must be active in long-term management during the facility’s service life. For near-surface geotechnical structures in particular, this requires an interdisciplinary perspective and the embracing of soil as a living ecological system rather than an inert construction material.
Abstract:
The components of many signaling pathways have been identified and there is now a need to conduct quantitative, data-rich temporal experiments for systems biology and modeling approaches to better understand pathway dynamics and regulation. Here we present a modified Western blotting method that allows the rapid and reproducible quantification and analysis of hundreds of data points per day on proteins and their phosphorylation state at individual sites. The approach is of particular use where samples show a high degree of sample-to-sample variability, such as primary cells from multiple donors. We present a case study on the analysis of >800 phosphorylation data points from three phosphorylation sites in three signaling proteins over multiple time points from platelets isolated from ten donors, demonstrating the technique's potential to determine kinetic and regulatory information from limited cell numbers and to investigate signaling variation within a population. We envisage the approach being of use in the analysis of many cellular processes, such as signaling pathway dynamics, to identify regulatory feedback loops, and in the investigation of potential drug/inhibitor responses using primary cells and tissues, generating information about how a cell's physiological state changes over time.
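As an illustration of the kind of downstream quantification such data feed (the abstract does not specify the analysis software), a minimal sketch: normalize each phospho-band intensity to its loading control, express it as fold change over the first time point, then summarize per time point across donors:

```python
import numpy as np

# Hypothetical densitometry for one phosphorylation site:
# intensities[donor, time], with a matched loading-control measurement per lane.
phospho = np.array([[1.2, 3.5, 2.1], [0.9, 2.8, 1.7]])   # 2 donors x 3 time points
total   = np.array([[1.0, 1.1, 0.9], [0.8, 1.0, 0.9]])

normalized = phospho / total                  # correct for loading differences
fold_change = normalized / normalized[:, :1]  # relative to the first time point

mean = fold_change.mean(axis=0)               # average over donors
sem = fold_change.std(axis=0, ddof=1) / np.sqrt(fold_change.shape[0])
print(mean, sem)
```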
Abstract:
The aim of this work was to study the effect of the hydrolysis degree (HD) and the concentration (C(PVA)) of two types of poly(vinyl alcohol) (PVA), and of the type (glycerol and sorbitol) and concentration (C(P)) of plasticizers, on some physical properties of biodegradable films based on blends of gelatin and PVA, using a response-surface methodology. The films were prepared from film-forming solutions (FFS) with 2 g of macromolecules (gelatin+PVA)/100 g of FFS. The responses analyzed were the mechanical properties, the solubility, the moisture content, the color difference and the opacity. The linear model was statistically significant and predictive for puncture force and deformation, elongation at break, solubility in water, moisture content and opacity. The C(PVA) strongly affected the elongation at break of the films, and the interaction of the HD and the C(P) also affected this property. Moreover, the puncture force was slightly affected by the C(PVA). Concerning the solubility in water, reducing the HD increased it, and this effect was greater at high C(PVA) values. In general, the most important effect observed on the physical properties of the films was that of the plasticizer type and concentration. The PVA hydrolysis degree and concentration had an important effect only on the elongation at break, puncture deformation and solubility in water.
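The fitted model is described only as "linear" with a significant HD x C(P) interaction; a minimal sketch of fitting such a first-order response-surface model by least squares (the coded factor settings and responses below are invented):

```python
import numpy as np

# Hypothetical coded factor levels (-1..+1): HD, C(PVA), C(P); response: elongation (%).
HD   = np.array([-1, -1,  1,  1, -1,  1,  0], dtype=float)
Cpva = np.array([-1,  1, -1,  1,  0,  0,  0], dtype=float)
Cp   = np.array([ 1, -1, -1,  1,  0,  0,  0], dtype=float)
elong = np.array([12.0, 35.0, 18.0, 60.0, 30.0, 40.0, 33.0])

# First-order model including the HD x C(P) interaction term.
X = np.column_stack([np.ones_like(HD), HD, Cpva, Cp, HD * Cp])
beta, *_ = np.linalg.lstsq(X, elong, rcond=None)
print(dict(zip(["b0", "HD", "C(PVA)", "C(P)", "HDxC(P)"], beta.round(2))))
```

In an actual response-surface study, the significance of each coefficient would then be tested (e.g. by ANOVA) before declaring the model predictive, as the abstract reports.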
Abstract:
Case-Based Reasoning is a methodology for problem solving based on past experiences. It tries to solve a new problem by retrieving and adapting previously known solutions of similar problems. However, retrieved solutions in general require adaptation in order to be applied to new contexts, and one of the major challenges in Case-Based Reasoning is the development of an efficient methodology for case adaptation. The most widely used form of adaptation employs hand-coded adaptation rules, which demands a significant knowledge acquisition and engineering effort. An alternative for overcoming the difficulties associated with the acquisition of adaptation knowledge has been the use of hybrid approaches and automatic learning algorithms. We investigate hybrid approaches for case adaptation employing Machine Learning algorithms; the investigated approaches automatically learn adaptation knowledge from a case base and apply it to adapt retrieved solutions. In order to verify the potential of the proposed approaches, we compare them experimentally with individual Machine Learning techniques. The results obtained indicate the potential of these approaches for acquiring case adaptation knowledge: the combination of the Instance-Based Learning and Inductive Learning paradigms and the use of a data set of adaptation patterns yield adaptations of the retrieved solutions with high predictive accuracy.
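A minimal sketch of the retrieve-then-adapt loop with a learned set of adaptation patterns, in the spirit of the hybrid approaches described (all names, data and the adaptation rule are illustrative, not the paper's system):

```python
import numpy as np

# Hypothetical case base: problem feature vectors with known numeric solutions.
cases = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 1.0]])
solutions = np.array([10.0, 12.0, 20.0, 16.0])

def retrieve(query: np.ndarray, k: int = 1) -> np.ndarray:
    """Instance-based retrieval: indices of the k nearest cases."""
    dists = np.linalg.norm(cases - query, axis=1)
    return np.argsort(dists)[:k]

# Learned adaptation patterns: (problem difference) -> (solution difference).
pattern_dx = np.array([[1.0, 0.0], [0.0, 1.0]])
pattern_dy = np.array([2.0, 3.0])

def adapt(query: np.ndarray, case_idx: int) -> float:
    """Adjust the retrieved solution using the closest adaptation pattern."""
    dx = query - cases[case_idx]
    nearest = np.argmin(np.linalg.norm(pattern_dx - dx, axis=1))
    scale = dx @ pattern_dx[nearest]          # project the difference onto the pattern
    return solutions[case_idx] + scale * pattern_dy[nearest]

query = np.array([2.5, 1.0])
idx = retrieve(query)[0]
print(adapt(query, idx))   # retrieved solution 12.0, adapted upward
```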
Abstract:
Component-based software engineering has recently emerged as a promising solution to the development of system-level software. Unfortunately, current approaches are limited to specific platforms and domains. This lack of generality is particularly problematic, as it prevents knowledge sharing and generally drives development costs up. In the past, we have developed a generic approach to component-based software engineering for system-level software called OpenCom. In this paper, we present OpenComL, an instantiation of OpenCom for Linux environments, and show how it can be profiled to meet the needs of a range of system-level software in Linux environments. For this, we demonstrate its application to constructing a programmable router platform and a middleware for parallel environments.
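OpenCom's actual API is not reproduced here; as a generic illustration of the component-model idea (components expose interfaces and declare receptacles, i.e. dependencies, which a kernel binds at run time), consider the following sketch with invented names:

```python
from typing import Protocol

class PacketFilter(Protocol):
    """An interface a component may expose (hypothetical example)."""
    def accept(self, packet: bytes) -> bool: ...

class DropShort:
    """A pluggable component implementing the PacketFilter interface."""
    def accept(self, packet: bytes) -> bool:
        return len(packet) >= 20          # drop runt packets

class Router:
    """A component with a receptacle: a dependency bound from outside."""
    def __init__(self) -> None:
        self.filter: PacketFilter | None = None   # unbound receptacle

    def forward(self, packet: bytes) -> bool:
        assert self.filter is not None, "receptacle not bound"
        return self.filter.accept(packet)

# The 'kernel' binds an interface to a receptacle at run time; re-profiling
# for a new domain means swapping which components get bound, not rewriting them.
router = Router()
router.filter = DropShort()
print(router.forward(b"x" * 64))   # -> True
```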