853 results for Process Management, Maturity Model, CMM, Delphi Study
Abstract:
Socio-economic and demographic changes among family forest owners and demands for versatile forestry decision aid motivated this study, which sought grounds for owner-driven forest planning. Finnish family forest owners’ forest-related decision making was analyzed in two interview-based qualitative studies, the main findings of which were surveyed quantitatively. Thereafter, a scheme for adaptively mixing methods in individually tailored decision support processes was constructed. The first study assessed owners’ decision-making strategies by examining varying levels of the sharing of decision-making power and the desire to learn. Five decision-making modes – trusting, learning, managing, pondering, and decisive – were identified and discussed in relation to compatible decision-aid approaches. The second study conceptualized smooth communication and assessed emotional, practical, and institutional boosters of and barriers to such smoothness in communicative decision support. The results emphasize the roles of trust, comprehension, and contextual services in owners’ communicative decision making. In the third study, a questionnaire tool to measure owners’ attitudes towards communicative planning was constructed using the trusting, learning, and decisive dimensions. Through a multivariate analysis of survey data, three owner groups were identified as fusions of the original decision-making modes: trusting learners (53%), decisive learners (27%), and decisive managers (20%). Differently weighted communicative services are recommended for these compound wishes. The findings of the studies above were synthesized in the form of adaptive decision analysis (ADA), which allows and encourages the decision-maker (owner) to make deliberate choices concerning the phases of a decision aid (planning) process. The ADA model relies on adaptability and feedback management, which foster smooth communication with the owner and (inter-)organizational learning of the planning institution(s).
The summarized results indicate that recognizing the communication-related amenity values of family forest owners may be crucial in developing planning and extension services. It is therefore recommended that owners, root-level planners, consultation professionals, and pragmatic researchers collaboratively continue to seek stable change.
Abstract:
Given the limited resources available for weed management, a strategic approach is required to maximize the return on management effort. The current study incorporates: (1) a model ensemble approach to identify areas of uncertainty and commonality regarding a species' invasive potential, (2) the current distribution of the invasive species, and (3) the connectivity of systems, to identify target regions and focus efforts for more effective management. Uncertainty in the prediction of suitable habitat for H. amplexicaulis (the study species) in Australia was addressed with an ensemble-forecasting approach comparing distributional scenarios from four models (CLIMATCH; CLIMEX; boosted regression trees [BRT]; maximum entropy [Maxent]). Models were built using subsets of occurrence and environmental data. Catchment risk was determined by incorporating habitat suitability, the current abundance and distribution of H. amplexicaulis, and catchment connectivity. Our results indicate geographic differences between the predictions of the different approaches. Despite these differences, a number of catchments in northern, central, and southern Australia were identified as at high risk of invasion or further spread by all models, suggesting they should be given priority for the management of H. amplexicaulis. The study also highlights the utility of ensemble approaches in identifying areas of uncertainty and commonality regarding a species' invasive potential.
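The ensemble logic the abstract describes can be sketched in a few lines: stack the suitability predictions from the individual models, read commonality from their mean and uncertainty from their spread, and flag the areas all models agree on. The model outputs and catchment values below are illustrative placeholders, not the study's data.

```python
import numpy as np

# Habitat-suitability scores (0..1) for five hypothetical catchments,
# one row per model in the ensemble (names taken from the abstract).
predictions = {
    "CLIMATCH": np.array([0.90, 0.80, 0.20, 0.70, 0.10]),
    "CLIMEX":   np.array([0.80, 0.90, 0.30, 0.60, 0.20]),
    "BRT":      np.array([0.95, 0.70, 0.25, 0.80, 0.15]),
    "Maxent":   np.array([0.85, 0.75, 0.10, 0.90, 0.05]),
}

stack = np.vstack(list(predictions.values()))
mean_suit = stack.mean(axis=0)  # commonality: average predicted suitability
spread = stack.std(axis=0)      # uncertainty: disagreement between models

# Catchments where every model exceeds a suitability threshold are
# candidate priorities for management effort.
priority = np.where((stack > 0.5).all(axis=0))[0]
print(priority)  # catchments 0, 1, and 3 exceed 0.5 in every model
```

A real analysis would weight models or use consensus metrics rather than a raw threshold, but the agreement-versus-disagreement split is the core of the ensemble idea.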
Abstract:
This paper addresses the problem of discovering business process models from event logs. Existing approaches to this problem strike various tradeoffs between accuracy and understandability of the discovered models. With respect to the second criterion, empirical studies have shown that block-structured process models are generally more understandable and less error-prone than unstructured ones. Accordingly, several automated process discovery methods generate block-structured models by construction. These approaches however intertwine the concern of producing accurate models with that of ensuring their structuredness, sometimes sacrificing the former to ensure the latter. In this paper we propose an alternative approach that separates these two concerns. Instead of directly discovering a structured process model, we first apply a well-known heuristic technique that discovers more accurate but sometimes unstructured (and even unsound) process models, and then transform the resulting model into a structured one. An experimental evaluation shows that our “discover and structure” approach outperforms traditional “discover structured” approaches with respect to a range of accuracy and complexity measures.
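The first step of the heuristic discovery techniques this abstract builds on is extracting directly-follows relations from an event log. The sketch below shows only that initial counting step on a made-up log; the activity names are illustrative, and the full discover-and-structure pipeline of the paper is far richer.

```python
from collections import Counter

# A toy event log: each trace is the ordered list of activities in one case.
log = [
    ["register", "check", "approve", "notify"],
    ["register", "check", "reject", "notify"],
    ["register", "approve", "notify"],
]

# Count how often activity a is directly followed by activity b across
# all traces; heuristic miners derive dependency measures from these counts.
directly_follows = Counter(
    (trace[i], trace[i + 1])
    for trace in log
    for i in range(len(trace) - 1)
)

print(directly_follows[("register", "check")])  # 2
```

From these counts a heuristic miner builds a dependency graph, which may be unstructured or unsound; the paper's contribution is transforming that result into a block-structured model afterwards rather than constraining discovery up front.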
Abstract:
Detecting Earnings Management Using Neural Networks. Trying to balance between relevant and reliable accounting data, generally accepted accounting principles (GAAP) allow, to some extent, the company management to use their judgment and to make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods have been suggested for detecting accrual-based earnings management, the majority of them based on linear regression. The problem with using linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed. However, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression, which can handle non-linear relationships, is neural networks. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables. The discretionary accruals in the highest and lowest quartiles for these six variables are then compared. Third, a data set containing simulated earnings management is used. Both expense and revenue manipulation ranging between -5% and 5% of lagged total assets is simulated. Furthermore, two neural network-based models and two linear regression-based models are used with a data set containing financial statement data from 110 failed companies.
Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
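The Jones (1991) baseline all seven models build on regresses scaled total accruals on the inverse of lagged assets, the change in revenues, and gross PPE, and treats the residual as the discretionary-accruals estimate. A minimal sketch on synthetic data follows; the coefficients and noise level are assumed for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Jones-model regressors, each already scaled by lagged total assets A_{t-1}.
inv_assets = rng.uniform(1e-4, 1e-2, n)   # 1 / A_{t-1}
delta_rev = rng.normal(0.05, 0.1, n)      # change in revenues / A_{t-1}
ppe = rng.uniform(0.2, 0.8, n)            # gross PPE / A_{t-1}

# Synthetic "true" accruals process with assumed coefficients plus noise.
noise = rng.normal(0, 0.02, n)
total_accruals = 0.1 * inv_assets + 0.3 * delta_rev - 0.15 * ppe + noise

# Ordinary least squares fit of the Jones model (no intercept, as in the
# original specification).
X = np.column_stack([inv_assets, delta_rev, ppe])
coef, *_ = np.linalg.lstsq(X, total_accruals, rcond=None)

# Discretionary accruals = the part of accruals the model cannot explain.
discretionary = total_accruals - X @ coef
print(coef)                          # fitted slope coefficients
print(float(discretionary.std()))    # residual spread, near the noise level
```

The neural-network variants in the study replace this linear map with a feed-forward network on the same inputs, which is what lets them capture the non-linear accrual-performance relationships the linear fit misses.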
Abstract:
A dynamic model of the COREX melter gasifier is developed to study the transient behavior of the furnace. The effects of pulse and step disturbances on the process performance have been studied. This study shows that the effect of a pulse disturbance decays asymptotically, while a step change brings the system to a new steady state after a delay of about 5 hours. The dynamic behavior of the melter gasifier with respect to a shutdown/blow-on condition and the effect of tapping are also studied. The results show that the time response of the melter gasifier is much shorter than that of a blast furnace.
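The qualitative pulse-versus-step behavior described above is characteristic of lag-dominated process dynamics. As an illustration only, a first-order lag with an assumed 5-hour time constant (a stand-in, not the paper's actual furnace model) reproduces it: the pulse response decays back to zero, while the step response settles at a new steady state.

```python
import numpy as np

tau = 5.0               # assumed dominant time constant, hours
dt = 0.01               # integration step, hours
t = np.arange(0, 40, dt)

def simulate(disturbance):
    """Integrate dx/dt = (u - x) / tau with forward Euler."""
    x = np.zeros_like(t)
    for i in range(1, len(t)):
        x[i] = x[i - 1] + dt * (disturbance[i - 1] - x[i - 1]) / tau
    return x

pulse = np.where(t < 1.0, 1.0, 0.0)  # one-hour pulse disturbance
step = np.ones_like(t)               # sustained step disturbance

x_pulse = simulate(pulse)
x_step = simulate(step)

# After 40 hours the pulse effect has essentially vanished, while the
# step response has reached its new steady state near 1.
print(round(float(x_pulse[-1]), 3))
print(round(float(x_step[-1]), 3))
```

Real furnace dynamics involve coupled mass and energy balances, but the single-lag sketch captures why the two disturbance types end so differently.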
Abstract:
The recently discovered scalar resonance at the Large Hadron Collider is now almost confirmed to be a Higgs boson, whose CP properties are yet to be established. At the International Linear Collider, with and without polarized beams, it may be possible to probe these properties at high precision. In this work, we study the possibility of probing departures from the pure CP-even case by using the decay distributions in the process e⁺e⁻ → tt̄Φ, with Φ mainly decaying into a bb̄ pair. We have compared the case of a minimal extension of the Standard Model (model I), with an additional pseudoscalar degree of freedom, with a more realistic case, namely the CP-violating two-Higgs-doublet model (model II), which permits a more general description of the couplings. We have considered the International Linear Collider with √s = 800 GeV and an integrated luminosity of 300 fb⁻¹. Our main findings are that, even in the case of small departures from the CP-even case, the decay distributions are sensitive to the presence of a CP-odd component in model II, while it is difficult to probe these departures in model I unless the pseudoscalar component is very large. Noting that the proposed degrees of beam polarization increase the statistics, the process demonstrates the effective role of beam polarization in studies beyond the Standard Model. Further, our study shows that an indefinite-CP Higgs would be a sensitive laboratory for physics beyond the Standard Model.
Abstract:
Damage-induced anisotropy of quasi-brittle materials is investigated in this study using a component assembling model. Damage-induced anisotropy is a significant characteristic of quasi-brittle materials, coupled with nonlinearity and strain softening, and formulating such complicated phenomena has remained a difficult problem. The present model is based on the component assembling concept, in which the constitutive equations of materials are formed by assembling the response functions of two kinds of components. These two kinds of components, orientational and volumetric, are abstracted based on pair-functional potentials and the Cauchy-Born rule. Macroscopic damage of quasi-brittle materials is reflected by stiffness changes of the orientational components, which represent grouped atomic bonds along discrete directions; simultaneously, anisotropic characteristics are captured by the naturally directional property of the orientational components. The initial damage surface in the axial-shear stress space is calculated and analyzed. Furthermore, the anisotropic quasi-brittle damage behaviors of concrete under uniaxial, proportional, and nonproportional combined loading are analyzed to elucidate the utility and limitations of the present damage model. The numerical results show good agreement with the experimental data and with the predictions of classical anisotropic damage models.
Abstract:
A mathematical model of rain infiltration in a rock-soil slope has been established and solved using the finite element method. The unsteady water infiltration process has been simulated to obtain the water content in both homogeneous and heterogeneous media. The simulation results show that the rock blocks in the rock-soil slope cause the wetting front to move faster. If the rain intensity is increased while other conditions stay the same, the saturated region forms more quickly. If the rain intensity is held constant, the generation of the saturated region can be accelerated by properly increasing the vertical filtration rate of the rock-soil slope. However, if the vertical filtration rate is far greater than the rain intensity, it is difficult to form a saturated region in the rock-soil slope. The numerical method was verified by comparing the calculated results with field test data.
Abstract:
This work proposes a knowledge-map model as an informational tool for competence management, applied to the Câmara Legislativa do Distrito Federal (CLDF) to support the legislative governance process. The concepts of managerial public administration; competence; competence in organizations; and several models, methods, and techniques of competence management are presented and discussed. It also presents the mapping of the competence areas existing at the CLDF and the modeling and classification of competences by area. The proposed model involved the construction of a data model; an institutional taxonomy; and an information architecture, including the design of the institutional metadata standard and of the repository for the taxonomy and metadata; as well as the definition of the organizational units responsible for managing the content and operating the system, with their duties and responsibilities. Finally, it recommends applying the model and extending the study to public institutions, particularly institutions of the municipal, state, and federal legislative branches.
Abstract:
Long-term living resource monitoring programs are commonly conducted globally to evaluate trends and impacts of environmental change and management actions. For example, the Woods Hole bottom trawl survey has been conducted since 1963, providing critical information on the biology and distribution of finfish and shellfish in the North Atlantic (Despres-Patango et al. 1988). Similarly, in the Chesapeake Bay, the Maryland Department of Natural Resources (MDNR) Summer Blue Crab Trawl survey has been conducted continuously since 1977, providing management-relevant information on the abundance of this important commercial and recreational species. A key component of monitoring program design is the standardization of methods over time to allow for a continuous, unbiased data set. However, complete standardization is not always possible where multiple vessels, captains, and crews are required to cover large geographic areas (Tyson et al. 2006). Of equal concern is the technological advancement of gear, which serves to increase capture efficiency or ease of use. Thus, to maintain consistency and facilitate interpretation of reported data in long-term datasets, it is imperative to understand and quantify the impacts of changes in gear and vessels on catch per unit of effort (CPUE). While vessel changes are inevitable due to ageing fleets and other factors, gear changes often reflect a decision to exploit technological advances. A prime example of this is the otter trawl, a common tool for fisheries monitoring and research worldwide. Historically, trawl nets were constructed of natural materials such as cotton and linen. However, modern net construction consists of synthetic materials such as polyamide, polyester, polyethylene, and polypropylene (Nielson et al. 1983). Over the past several decades, polyamide material, hereafter referred to as nylon, has been a standard material used in otter trawl construction.
These trawls are typically dipped into a latex coating for increased abrasion resistance, a process referred to as “green dipping.” More recently, polyethylene netting has become popular among living resource monitoring agencies. Polyethylene netting, commonly known as sapphire netting, consists of braided filaments that form a very durable material more resistant to abrasion than nylon. Additionally, sapphire netting allows for stronger knot strength during construction of the net, further increasing the net’s durability and longevity. Sapphire also absorbs less water, with a specific gravity near 0.91 that allows the material to float, as compared to nylon with a specific gravity of 1.14 (Nielson et al. 1983). This same property results in a lightweight net that is more efficient to deploy, retrieve, and fish, particularly when towing from small vessels. While there are many advantages to sapphire netting, no comparative efficiency data are available for these two trawl net types. Traditional nylon netting has been used consistently for decades by the MDNR to generate long-term living resource data sets of great value. However, there is much interest in switching to the advanced materials. In addition, recent collaborative efforts between MDNR and NOAA’s Cooperative Oxford Laboratory (NOAA-COL) require using different vessels for trawling in support of joint projects. In order to continue collaborative programs, or to change to more innovative netting materials, the influence of these changes must be demonstrated to be negligible or correction factors determined. Thus, the objective of this study was to examine the influence of trawl net type, vessel type, and their interaction on capture efficiency.
Abstract:
Purpose - The purpose of this paper is to describe two related fields - knowledge management (KM) and Capability Maturity Model Integration (CMMI) - and highlight their similarities. Design/methodology/approach - The KM framework used for this comparison is the one established and used at Israel Aircraft Industries, while the CMMI source of information is the original document produced by the CMMI product team at Carnegie Mellon University, as well as papers published on the subject. Findings - Knowledge management is a rather young discipline promising to maximize innovation and competitive advantage for organizations that practice knowledge capture, documentation, retrieval, reuse, creation, transfer, and sharing of their knowledge assets in a measurable way, integrated into their operational and business processes. CMMI deals with the paths an organization must follow in order to maintain well-mapped processes with well-defined stages, on the assumption that in mature organizations it is possible to measure and relate the quality of the process to the quality of the product. Though KM and CMMI take different approaches to achieving competitive advantage, they appear to be supportive of, as well as dependent on, each other. Originality/value - Practitioners as well as researchers in the field of knowledge management and in the implementation of the CMMI standard will find comfort in realizing how mutually supportive these two fields are. © Emerald Group Publishing Limited.
Abstract:
BACKGROUND: Neuronal migration, the process by which neurons migrate from their place of origin to their final position in the brain, is a central process for normal brain development and function. Advances in experimental techniques have revealed much about many of the molecular components involved in this process. Notwithstanding these advances, how the molecular machinery works together to govern the migration process has yet to be fully understood. Here we present a computational model of neuronal migration, in which four key molecular entities, Lis1, DCX, Reelin and GABA, form a molecular program that mediates the migration process. RESULTS: The model simulated the dynamic migration process, consistent with in vivo observations of morphological, cellular and population-level phenomena. Specifically, the model reproduced migration phases, cellular dynamics and population distributions that concur with experimental observations in normal neuronal development. We tested the model under reduced activity of Lis1 and DCX and found an aberrant development similar to observations in Lis1 and DCX expression-silencing experiments. Analysis of the model gave rise to unforeseen insights that could guide future experimental study. Specifically: (1) the model revealed the possibility that under conditions of reduced Lis1 expression, neurons experience an oscillatory neuron-glial association prior to the multipolar stage; and (2) we hypothesized that the morphology variations observed between rats and mice may be explained by a single difference in the way that Lis1 and DCX stimulate bipolar motility. From this we make the following predictions: (1) under reduced Lis1 and enhanced DCX expression, we predict reduced bipolar migration in rats, and (2) under enhanced DCX expression in mice, we predict normal or increased bipolar migration.
CONCLUSIONS: We present here a system-wide computational model of neuronal migration that integrates theory and data within a precise, testable framework. Our model accounts for a range of observable behaviors and affords a computational framework for studying aspects of neuronal migration as a complex process driven by a relatively simple molecular program. Analysis of the model generated new hypotheses and pointed to as-yet-unobserved phenomena that may guide future experimental studies. This paper thus reports a first step toward a comprehensive in silico model of neuronal migration.
Abstract:
Performance measurement and management (PMM) is a management and research paradox. On one hand, it provides management with many critical, useful, and needed functions; yet there is evidence that it can adversely affect performance. This paper attempts to resolve this paradox by focusing on the issue of "fit". That is, in today's dynamic and turbulent environment, changes in either the business environment or the business strategy can lead to the need for new or revised measures and metrics. Yet, if these measures and metrics are either not revised or incorrectly revised, we can encounter situations where what the firm wants to achieve (as communicated by its strategy) and what the firm measures and rewards are not synchronised with each other (i.e., there is a lack of "fit"). This situation can adversely affect the ability of the firm to compete. The issue of fit is explored using a three-phase Delphi approach. Initially intended to resolve this first paradox, the Delphi study identified another paradox - one in which the researchers found that in a dynamic environment firms do revise their strategies, yet often the PMM system is not changed. To resolve this second paradox, the paper proposes a new framework - one that shows that under certain conditions the observed metrics "lag" is not only explainable but also desirable. The findings suggest a need to recast the accepted relationship between strategy and the PMM system; the output includes the Performance Alignment Matrix, which has utility for managers. © 2013.
Abstract:
China Computer Federation (中国计算机学会)