888 results for nonlocal theories and models


Relevance:

100.00%

Publisher:

Abstract:

The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge based) integrated systems. The thesis argues from the position that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the 'classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey was carried out of large UK companies, which confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not already using these tools. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully-developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful, since there is very little other data available from which to produce an estimate. A second survey, however, indicated that project managers see estimating as being essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned. Current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of "classic" program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data flow fan-in/out and post-release reported errors were taken for a set of 80 commercially-developed LPA Prolog programs. By redefining the metric counts for Prolog it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
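As a rough illustration of how a 'classic' structure metric might be re-counted for a KBS language, the sketch below approximates a McCabe-style cyclomatic complexity for a Prolog predicate by counting its clauses and in-body control constructs. The counting rules here are simplified assumptions for illustration, not the thesis's actual redefinitions.

```python
# A minimal, hypothetical re-count of McCabe-style cyclomatic complexity for Prolog.
# Assumptions (not the thesis's counting rules): each additional clause of a predicate
# and each ';' (disjunction), '->' (if-then-else) or '\+' (negation as failure) in a
# clause body contributes one extra decision point.

def prolog_cyclomatic_complexity(clauses):
    """clauses: the source text of every clause of one predicate."""
    complexity = 1 + (len(clauses) - 1)      # base path + one per alternative clause
    for clause in clauses:
        body = clause.split(':-', 1)[1] if ':-' in clause else ''
        complexity += body.count(';')        # disjunction
        complexity += body.count('->')       # if-then-else
        complexity += body.count('\\+')      # negation as failure
    return complexity

# Example: two clauses plus one disjunction and one if-then-else score 1 + 1 + 1 + 1 = 4.
clauses = [
    "abs_val(X, X) :- X >= 0.",
    "abs_val(X, Y) :- X < 0, (integer(X) -> Y is -X ; Y is 0 - X).",
]
print(prolog_cyclomatic_complexity(clauses))   # 4
```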

Relevance:

100.00%

Publisher:

Abstract:

As a discipline, supply chain management (SCM) has traditionally been concerned primarily with the procurement, processing, movement and sale of physical goods. However, an important class of products has emerged - digital products - which cannot be described as physical, as they do not obey commonly understood physical laws. They possess neither mass nor volume, and they require no energy in their manufacture or distribution. With the Internet, they can be distributed at speeds unimaginable in the physical world, and every copy produced is a 100% perfect duplicate of the original version. Furthermore, the ease with which digital products can be replicated has few analogues in the physical world. This paper assesses the effect of non-physicality on one such product – software – in relation to the practice of SCM. It explores the challenges that arise when managing the software supply chain and how practitioners are addressing these challenges. Using a two-pronged exploratory approach that combines a review of the literature on software management with direct interviews with software distribution practitioners, a number of key challenges associated with software supply chains are uncovered, along with responses to these challenges. This paper proposes a new model for software supply chains that takes into account the non-physicality of the product being delivered. Central to this model are the replacement of physical flows with flows of intellectual property, the growing importance of innovation over duplication, and the increased centrality of the customer in the entire process. Hybrid physical/digital supply chains are discussed, and a framework for practitioners concerned with software supply chains is presented.

Relevance:

100.00%

Publisher:

Abstract:

This paper critically evaluates the paradigm, theory, and methodology that dominate research on related party transactions (RPTs). The literature has debated whether RPTs are a facet of the conflict of interest between majority and minority shareholders, or whether they are normal, efficient transactions that help firms achieve better asset utilization. The literature has also taken a strong interest in the association between corporate governance and RPTs, particularly because, under agency theory, corporate governance is assumed to act as a monitoring tool that should curb the negative consequences of RPTs and ensure they are conducted to achieve better asset utilization.

Relevance:

100.00%

Publisher:

Abstract:

AMS Subj. Classification: 47J10, 47H30, 47H10

Relevance:

100.00%

Publisher:

Abstract:

This study analyses the consumption tendencies of the postmodern age and the distinctive development of postmodern marketing, primarily through the example of tourism. Drawing on the Hungarian and international literature as well as their own research and observations, the authors confront widely known and accepted principles and theories with practice, and draw attention to the domestic difficulties of adapting marketing activity. An extremely interesting study by Ariel Zoltán Mitev and Dóra Horváth, entitled "A posztmodern marketing rózsaszirmai" ("The rose petals of postmodern marketing"), appeared in issue 2008/9 of the journal Vezetéstudomány. That forward-looking, engaging and in every respect constructive and novel article strongly influenced the authors of the present study, largely because of these virtues, while in places inviting further elaboration. It certainly inspired the next step, the formulation of new contributions, which the authors attempt here. This article is also an organic continuation of the authors' earlier joint publications, above all their recent article in the journal Marketing & Menedzsment.

Relevance:

100.00%

Publisher:

Abstract:

This study investigated the treatment theories and procedures for postural control training used by Occupational Therapists (OTs) working with hemiplegic adults who have had a cerebrovascular accident (CVA) or traumatic brain injury (TBI). Data were collected through a national survey of 400 randomly selected physical disability OTs, with 127 usable surveys returned. Results showed that the most commonly used treatment theory was neurodevelopmental treatment (NDT), followed by the motor relearning program (MRP), proprioceptive neuromuscular facilitation (PNF), Brunnstrom's approach, and the approach of Rood. The most commonly used treatment posture was sitting, followed by standing, mat activity, equilibrium reaction training, and walking. The factors affecting the use of the various treatment theories and procedures were years certified, years of clinical experience, work situation and work status. Pearson correlation coefficient analyses found significant positive relationships between treatment theories and postures. There were also significant, high correlations between the usage of all pairs of treatment procedures.
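The kind of correlation analysis reported above can be illustrated with a short sketch; the variables and ratings below are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical example of a Pearson correlation between two survey items:
# each entry is one respondent's self-reported usage rating (1-5). The
# values are invented and do not come from the study described above.
ndt_usage     = np.array([5, 4, 4, 3, 5, 2, 4, 3])   # neurodevelopmental treatment
sitting_usage = np.array([5, 5, 4, 3, 4, 2, 4, 2])   # sitting as a treatment posture

r, p = pearsonr(ndt_usage, sitting_usage)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```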

Relevance:

100.00%

Publisher:

Abstract:

We consider SU(3)-equivariant dimensional reduction of Yang-Mills theory over certain cyclic orbifolds of the 5-sphere which are Sasaki-Einstein manifolds. We obtain new quiver gauge theories extending those induced via reduction over the leaf spaces of the characteristic foliation of the Sasaki-Einstein structure, which are projective planes. We describe the Higgs branches of these quiver gauge theories as moduli spaces of spherically symmetric instantons which are SU(3)-equivariant solutions to the Hermitian Yang-Mills equations on the associated Calabi-Yau cones, and further compare them to moduli spaces of translationally-invariant instantons on the cones. We provide an explicit unified construction of these moduli spaces as Kähler quotients and show that they have the same cyclic orbifold singularities as the cones over the lens 5-spaces. © 2015 The Authors. Published by Elsevier B.V.
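For reference, the Hermitian Yang-Mills equations mentioned above take the following standard form for a connection with curvature F on a Kähler manifold with Kähler form ω; this is the generic statement rather than the paper's specific setup, with the instanton case on a Calabi-Yau cone corresponding to vanishing constant λ.

```latex
% Hermitian Yang-Mills equations on a Kahler manifold (standard form, not
% specialised to the particular Calabi-Yau cones of this paper):
F^{2,0} \;=\; F^{0,2} \;=\; 0 ,
\qquad
\omega \lrcorner F \;=\; \lambda \,\mathbf{1} ,
\qquad \lambda = \text{const} \quad (\lambda = 0 \text{ for instantons}).
```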

Relevance:

100.00%

Publisher:

Abstract:

Visual recognition is a fundamental research topic in computer vision. This dissertation explores the datasets, features, learning methods, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collecting object image datasets from web pages using an analysis of the text around each image and of the image's appearance. This method exploits established online knowledge resources (Wikipedia pages for text; the Flickr and Caltech datasets for images), which provide rich text and object appearance information. This dissertation describes results on two datasets. The first is Berg's collection of 10 animal categories; on this dataset, we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method.

Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. Image tags are noisy. The method obtains the text features of an unannotated image from the tags of its k-nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) can rely only on its visual examples, whereas this text feature may not change, because the auxiliary dataset likely contains a similar picture; while the tags associated with images are noisy, they are more stable when appearance changes. The performance of this feature is tested on the PASCAL VOC 2006 and 2007 datasets. The feature performs well: it consistently improves the performance of visual object classifiers, and is particularly effective when the training dataset is small.

As more and more training data are collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). The proposed training method will be useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in a sequence, so memory cost is no longer a bottleneck for processing large-scale datasets. This dissertation applies the approach to train classifiers for Flickr groups, which provide many training examples per group. The resulting Flickr group prediction scores can be used to measure the similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features perform better on image matching, retrieval, and classification than conventional visual features.

Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation proposes an approach that uses comparative object similarity. The key insight is that, given a set of object categories which are similar and a set of categories which are dissimilar, a good object model should respond more strongly to examples from similar categories than to examples from dissimilar categories. This dissertation develops a regularized kernel machine algorithm that uses this category-dependent similarity regularization. Experiments on hundreds of categories show that our method can make significant improvements for categories with few or even no positive examples.
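As a sketch of the tag-based text feature described above: the nearest-neighbor aggregation follows the abstract's description, while the descriptor dimension, vocabulary, value of k and use of Euclidean distance are illustrative assumptions rather than the dissertation's exact design.

```python
import numpy as np

# Illustrative sketch of a tag-based text feature for an unannotated image:
# take the visual k-nearest neighbors in an auxiliary collection of tagged
# images and aggregate their (noisy) tags into a normalised histogram.
# The vocabulary, k and Euclidean distance are assumptions for illustration.

def knn_tag_feature(query_visual, aux_visual, aux_tags, vocab, k=5):
    """query_visual: (d,) visual descriptor of the unannotated image.
    aux_visual: (n, d) descriptors of the auxiliary tagged images.
    aux_tags:  list of n tag lists; vocab: list of tag strings."""
    dists = np.linalg.norm(aux_visual - query_visual, axis=1)
    neighbors = np.argsort(dists)[:k]
    index = {tag: i for i, tag in enumerate(vocab)}
    hist = np.zeros(len(vocab))
    for n_idx in neighbors:
        for tag in aux_tags[n_idx]:
            if tag in index:
                hist[index[tag]] += 1.0
    total = hist.sum()
    return hist / total if total > 0 else hist   # text feature for the query image

# Toy usage with random visual descriptors and a tiny tag vocabulary.
rng = np.random.default_rng(0)
vocab = ["cat", "dog", "grass", "indoor"]
aux_visual = rng.normal(size=(100, 32))
aux_tags = [list(rng.choice(vocab, size=2, replace=False)) for _ in range(100)]
query = rng.normal(size=32)
print(knn_tag_feature(query, aux_visual, aux_tags, vocab))
```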