853 results for Empirical Methods in NLP


Relevance: 100.00%

Abstract:

This paper treats some basic points of the automated creation of a Bulgarian WordNet, an analogue of the Princeton WordNet. The computer tools used, the results obtained, and their evaluation are discussed. A side effect of the proposed approach is the derivation of patterns for a Bulgarian syntactic analyzer.

Relevance: 100.00%

Abstract:

Providing effective IT support for business processes has become crucial for enterprises to stay competitive. In response to this need, numerous process support paradigms (e.g., workflow management, service flow management, case handling), process specification standards (e.g., WS-BPEL, BPML, BPMN), process tools (e.g., ARIS Toolset, Tibco Staffware, FLOWer), and supporting methods have emerged in recent years. Summarized under the term “Business Process Management” (BPM), these paradigms, standards, tools, and methods have become a success-critical instrument for improving process performance.

Relevance: 100.00%

Abstract:

Empirical research in business process management(BPM) is coming of age. In 2009, when the inaugural ER-BPM workshop was held, the field of BPM research was characterized by a strong emphasis on solution development, but also by an increasing demand for insights or evaluations of BPM technology based on dedicated empirical research strategies. The ER-BPM workshop series was created to provide an international forum for researchers to discuss and present such research.

Relevance: 100.00%

Abstract:

Informal learning networks play a key role in the skill and professional development of professionals working in micro-businesses within Australia’s digital content industry, since such businesses do not necessarily have a learning-and-development or human resources section to assist in mapping a learning pathway. Professionals in this environment typically take an informal approach to their skill and professional development, drawing on their social and business networks. The overall aim of this PhD research project is to study how these professionals manage their skill and professional development, and to explore the role informal learning networks play in this professional learning context. This paper first describes the theme of the research project and how it fits with previous research and other relevant studies. It then presents the study’s research focus and research questions, the relevant theories and perspectives, and the methods for empirical data collection. Data will be collected in three distinct phases using a mixed-methods research design: an online survey, interviews, and case studies. It should be noted that the findings presented in this paper are early results of the research project.

Relevance: 100.00%

Abstract:

Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, provided the initial state is known. However, such a modelling approach always contains approximations that largely depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal resolutions in the range of about 40 km to 10 km; recently, the aim has been to reach operational scales of 1–4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power.

This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution mesoscale models.

The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size; however, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs.

For the clear-sky longwave radiation parameterization, the schemes used in NWP models provide much better results than simple empirical schemes. For the shortwave part of the spectrum, on the other hand, the empirical schemes are more competitive at producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent to both longwave and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested: the empirical cloud corrections give rather accurate results for longwave radiation, whereas for shortwave radiation the benefit is only marginal.

Idealised high-resolution two-dimensional mesoscale model experiments suggest that the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is due to an inertial oscillation mechanism when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment with a 7.7 km grid size is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense: in nested systems, the quality of the large-scale host model is of major importance, especially if the inner mesoscale model domain is small.
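
To illustrate what a "simple empirical scheme" for clear-sky longwave radiation looks like, a minimal sketch of a Brunt-type parameterization follows. The abstract does not name the specific schemes tested, so the functional form and coefficient values below are illustrative assumptions rather than the thesis's calibration:

    import math

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

    def downwelling_lw_clear_sky(t_air_k, vapour_pressure_hpa, a=0.52, b=0.065):
        """Brunt-type empirical clear-sky downwelling longwave flux (W m^-2).

        Effective atmospheric emissivity is taken as eps = a + b*sqrt(e),
        with the water-vapour pressure e in hPa; a and b are empirical,
        site-dependent coefficients (Brunt's classical values are used
        here purely for illustration).
        """
        emissivity = a + b * math.sqrt(vapour_pressure_hpa)
        return emissivity * SIGMA * t_air_k ** 4

    # Example: 2 m air temperature 288 K, vapour pressure 10 hPa
    print(downwelling_lw_clear_sky(288.0, 10.0))  # ~283 W m^-2

Such schemes need only screen-level temperature and humidity, which is why they remain competitive benchmarks against the far more complex band models embedded in NWP systems.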

Relevance: 100.00%

Abstract:

The intent of this study is to provide a formal apparatus that facilitates the investigation of problems in the methodology of science. The introduction contains several examples of such problems and motivates the subsequent formalism.

A general definition of a formal language is presented, and this definition is used to characterize an individual’s view of the world around him. A notion of empirical observation is developed which is independent of language. The interplay of formal language and observation is taken as the central theme. The process of science is conceived as the finding of that formal language that best expresses the available experimental evidence.

To characterize the manner in which a formal language imposes structure on its universe of discourse, the fundamental concepts of elements and states of a formal language are introduced. Using these, the notion of a basis for a formal language is developed as a collection of minimal states distinguishable within the language. The relation of these concepts to those of model theory is discussed.

An a priori probability defined on sets of observations is postulated as a reflection of an individual’s ontology. This probability, in conjunction with a formal language and a basis for that language, induces a subjective probability describing an individual’s conceptual view of admissible configurations of the universe. As a function of this subjective probability, and consequently of language, a measure of the informativeness of empirical observations is introduced and is shown to be intuitively plausible – particularly in the case of scientific experimentation.
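
The abstract does not give the measure's explicit form. One standard information-theoretic candidate consistent with this description (offered only as a sketch, not as the study's own definition) is the surprisal of an observation under the language-induced subjective probability:

    \mathrm{Inf}_L(o) = -\log P_L(o)

where P_L is the subjective probability induced by the formal language L together with its basis and the a priori probability on observations. Observations that are improbable under L are then maximally informative, and the informativeness of a fixed observation changes as the language changes, which matches the behaviour discussed below.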

The developed formalism is then systematically applied to the general problems presented in the introduction. The relationship of scientific theories to empirical observations is discussed, and certain tacit, unstatable knowledge is shown to be necessary for fully comprehending the meaning of realistic theories. The idea that many common concepts can be specified only by drawing on knowledge obtained from an infinite number of observations is presented, and the problems of reductionism are examined in this context.

A definition of when one formal language can be considered to be more expressive than another is presented, and the change in the informativeness of an observation as language changes is investigated. In this regard it is shown that the information inherent in an observation may decrease for a more expressive language.

The general problem of induction and its relation to the scientific method are discussed. Two hypotheses concerning an individual’s selection of an optimal language for a particular domain of discourse are presented and specific examples from the introduction are examined.

Relevance: 100.00%

Abstract:

The present study proposes semi-empirical methods for determining the efflux velocity from a ship's propeller. Ryan [1] defined the efflux velocity as the maximum velocity taken from a time-averaged velocity distribution along the initial propeller plane. Laser Doppler Anemometry (LDA) and Computational Fluid Dynamics (CFD) were used to acquire the efflux velocity for two propellers with different geometrical characteristics, and the LDA and CFD results were compared in order to investigate the equation derived from axial momentum theory. The study confirmed the validity of the axial momentum theory and its prediction of a linear relationship between the efflux velocity and the product of the rotational speed, the propeller diameter, and the square root of the thrust coefficient. These two quantities are related by an efflux coefficient, whose value was found to decrease as the blade number increases.
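
In the notation conventionally used for this relationship (the symbols here are standard choices, not taken from the paper itself), the axial-momentum result reads

    V_0 = E_0 \, n \, D_p \, \sqrt{C_t}

where V_0 is the efflux velocity, n the propeller rotational speed, D_p the propeller diameter, C_t the thrust coefficient, and E_0 the efflux coefficient. A commonly cited axial-momentum value is E_0 ≈ 1.59; the study's finding is that E_0 falls below such a constant as blade number increases.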

Relevance: 100.00%

Abstract:

The cattle feed industry is a major segment of the animal feed industry. It is gradually evolving into an organized sector, and feed manufacturers are increasingly using modern, sophisticated methods that seek to incorporate global best practices. The industry has high growth potential in India, given that the country is the world’s leading producer of milk, with production expected to grow at a compounded annual growth rate of 4 per cent. Besides, the concept of branded cattle feed as a packaged commodity is fast gaining popularity in rural India. Demand for cattle feed can change positively because of factors such as (i) the shrinkage of open land for cattle grazing, urbanization, and the resultant shortage of conventionally used cattle feeds, and (ii) the introduction of high-yield cattle, which requires specialized feeds. Earlier studies by the present authors revealed the significant growth prospects of the branded cattle feed industry; the feed consumption pattern and the relatively high share of branded feeds; the consumption pattern by product type (such as pellet and mash); the composition of the cattle feed market and the relatively large shares of the Kerala Feeds Ltd. (KFL) and Kerala Solvent Extractions Ltd. (KSE) brands; and the major factors influencing purchasing decisions. As a continuation of those studies, this study takes a closer look at the significance of product types in buyer behaviour, the level of brand awareness and its implications for purchasing decisions, and brand-shifting behaviour and its determinants.

Relevance: 100.00%

Abstract:

Objectives: To compare the use of pair-wise meta-analysis methods with multiple treatment comparison (MTC) methods for evidence-based health-care evaluation, estimating the effectiveness and cost-effectiveness of alternative health-care interventions from the available evidence.

Methods: Pair-wise meta-analysis and more complex evidence syntheses incorporating an MTC component are applied to three examples: 1) the clinical effectiveness of interventions for preventing strokes in people with atrial fibrillation; 2) the clinical and cost-effectiveness of drug-eluting stents in percutaneous coronary intervention in patients with coronary artery disease; and 3) the clinical and cost-effectiveness of neuraminidase inhibitors in the treatment of influenza. We compare the two synthesis approaches with respect to the assumptions made, the empirical estimates produced, and the conclusions drawn.

Results: The difference between point estimates of effectiveness produced by the pair-wise and MTC approaches was generally unpredictable, sometimes agreeing closely and in other instances differing considerably. In all three examples, the MTC approach allowed the inclusion of randomized controlled trial evidence ignored in the pair-wise meta-analysis approach. This generally increased the precision of the effectiveness estimates from the MTC model.

Conclusions: The MTC approach to synthesis allows the evidence base on clinical effectiveness to be treated as a coherent whole, permits more data to be included, and sometimes relaxes the assumptions made in pair-wise approaches. However, MTC models are necessarily more complex than those developed for pair-wise meta-analysis and could therefore be seen as less transparent. It is thus important that model details and the assumptions made are carefully reported alongside the results.
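
To make the contrast concrete, the pair-wise approach in its simplest fixed-effect form pools study-level estimates by inverse-variance weighting. The sketch below is a generic illustration with hypothetical numbers, not the evaluation code behind the paper:

    import numpy as np

    def fixed_effect_pooled(effects, variances):
        """Inverse-variance-weighted fixed-effect pooled estimate.

        effects:   per-study effect estimates y_i (e.g., log odds ratios)
        variances: the corresponding sampling variances v_i
        """
        y = np.asarray(effects, dtype=float)
        w = 1.0 / np.asarray(variances, dtype=float)  # inverse-variance weights
        pooled = np.sum(w * y) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))                 # SE of the pooled estimate
        return pooled, se

    # Three hypothetical trials of one head-to-head comparison
    pooled, se = fixed_effect_pooled([-0.35, -0.20, -0.48], [0.04, 0.09, 0.06])
    print(f"pooled log OR = {pooled:.3f} (SE {se:.3f})")

An MTC generalizes this by modeling all trial arms in a connected network simultaneously, linking indirect comparisons through consistency relations such as d_BC = d_AC - d_AB; this is how it admits the trial evidence that a single pair-wise comparison ignores.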

Relevance: 100.00%

Abstract:

Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Owing to the rapid development of genotyping and sequencing technologies, we are now able to assess the causal effects of many genetic and environmental factors more accurately. Genome-wide association studies have localized many causal genetic variants predisposing to certain diseases, but these studies explain only a small portion of the variation in disease heritability. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capability and novel statistical developments, Bayesian methods have been widely applied in genetics/genomics research and have demonstrated superiority over some conventional approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their advantages.

This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three parts: (1) deriving a Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending two Bayesian statistical methods developed for gene-environment interaction studies to related problems such as adaptively borrowing historical data.

We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis), and gene-environment interactions in the same model. It is well known that, in many practical situations, a natural hierarchical structure exists between the main effects and interactions in the linear model. We propose a model that incorporates this hierarchical structure into the Bayesian mixture model, so that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious, and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which specify, respectively, that both or at least one of the main effects of interacting factors must be present for the interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach to identifying the predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with an 'independent' model that does not impose the hierarchical constraint, and observe their superior performance in most of the situations considered. The proposed models are applied to real data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models can incorporate useful prior information in the modeling process; moreover, the Bayesian mixture model outperforms the multivariate logistic model in parameter estimation and variable selection in most cases. Our proposed models add hierarchical constraints that further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions while successfully identifying the reported associations. This is practically appealing for investigating causal factors among a moderate number of candidate genetic and environmental factors along with a relatively large number of interactions.

The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimated effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg Equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed its advantages for detecting true main effects and interactions over the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power at detecting non-null effects, with higher marginal posterior probabilities.

Finally, we review two Bayesian statistical models (a Bayesian empirical shrinkage-type estimator and Bayesian model averaging) that were developed for gene-environment interaction studies. Inspired by these models, we develop two novel statistical methods that can handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the gene-environment interaction methods in how they balance statistical efficiency and bias within a unified model. Through extensive simulation studies, we compare the operating characteristics of the proposed models with existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
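
The strong and weak hierarchy constraints described above are commonly formalized with binary inclusion indicators; the following is a sketch in that standard notation (the dissertation's exact prior specification may differ):

    \text{strong:}\quad \gamma_{jk} = 1 \implies \gamma_j = 1 \ \text{and}\ \gamma_k = 1
    \text{weak:}\quad\ \ \gamma_{jk} = 1 \implies \gamma_j = 1 \ \text{or}\ \gamma_k = 1

where \gamma_j, \gamma_k \in \{0, 1\} indicate inclusion of the main effects of factors j and k (genetic or environmental) and \gamma_{jk} indicates inclusion of their interaction. Under the strong constraint an interaction may enter the model only when both parent main effects are present; under the weak constraint, when at least one is.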

Relevance: 100.00%

Abstract:

The evolution of cognitive neuroscience has been spurred by the development of increasingly sophisticated investigative techniques to study human cognition. In Methods in Mind, experts examine the wide variety of tools available to cognitive neuroscientists, paying particular attention to the ways in which different methods can be integrated to strengthen empirical findings and how innovative uses for established techniques can be developed. The book will be a uniquely valuable resource for the researcher seeking to expand his or her repertoire of investigative techniques. Each chapter explores a different approach. These include transcranial magnetic stimulation, cognitive neuropsychiatry, lesion studies in nonhuman primates, computational modeling, psychophysiology, single neurons and primate behavior, grid computing, eye movements, fMRI, electroencephalography, imaging genetics, magnetoencephalography, neuropharmacology, and neuroendocrinology. As mandated, authors focus on convergence and innovation in their fields; chapters highlight such cross-method innovations as the use of the fMRI signal to constrain magnetoencephalography, the use of electroencephalography (EEG) to guide rapid transcranial magnetic stimulation at a specific frequency, and the successful integration of neuroimaging and genetic analysis. Computational approaches depend on increased computing power, and one chapter describes the use of distributed or grid computing to analyze massive datasets in cyberspace. Each chapter author is a leading authority in the technique discussed.

Relevance: 100.00%

Abstract:

The purpose of this study was to investigate empirically the role of knowledge and innovation within Central and Eastern Europe’s changing economy. We applied qualitative research methods, focusing exclusively on professional services firms within the region. The connections between knowledge and innovation, and between knowledge and competitiveness, were analyzed with top managers and senior industry experts. Our findings revealed that knowledge can be a real value driver for professional services firms, and that these companies can contribute significantly to the development of modern economies by disseminating their internal best practices in knowledge management. We identified three factors that may influence the effectiveness of knowledge management: involvement in international knowledge networks, investment in human capital, and focus on critical resources. These factors proved essential for maximizing the potential of knowledge and leveraging it into increased business performance.