955 results for Mean-value solution
Abstract:
Purpose – Traditionally, most studies focus on institutionalized management-driven actors to understand technology management innovation. The purpose of this paper is to argue that there is a need for research to study the nature and role of dissident non-institutionalized actors (i.e. outsourced web designers and rapid application software developers). The authors propose that through online social knowledge sharing, non-institutionalized actors’ solution-finding tensions enable technology management innovation. Design/methodology/approach – A synthesis of the literature and an analysis of the data (21 interviews) provided insights into three areas of solution-finding tensions enabling management innovation. The authors frame the analysis on peripherally deviant work and the ways in which dissident non-institutionalized actors deviate from their clients’ (understood as the firm) original contracted objectives. Findings – The findings provide insights into the productive role of solution-finding tensions in enabling opportunities for management service innovation. Furthermore, deviant practices that leverage non-institutionalized actors’ online social knowledge to fulfill customers’ requirements are not interpreted negatively, but as a positive willingness to proactively explore alternative paths. Research limitations/implications – The findings demonstrate the importance of dissident non-institutionalized actors in technology management innovation. However, this work is based on a single country (USA), and additional research is needed to validate and generalize the findings in other cultural and institutional settings. Originality/value – This paper provides new insights into the perceptions of dissident non-institutionalized actors in the practice of IT managerial decision making. The work departs from, but also extends, the previous literature, demonstrating that peripherally deviant work in solution-finding practice creates tensions, enabling management innovation between IT providers and users.
Abstract:
2010 Mathematics Subject Classification: 35A23, 35B51, 35J96, 35P30, 47J20, 52A40.
Abstract:
Our paper deals with the paradoxical phenomenon that, in the equilibrium solutions of the Neumann model with explicitly represented consumption, the prices of the necessities that determine the wage can in some cases be zero, so that the equilibrium value of the real wage is zero as well. This phenomenon always occurs in decomposable economies in which alternative equilibrium solutions with different growth and profit rates exist. The phenomenon can be discussed far more transparently in the simpler variant of the model built on Leontief technology, and we take advantage of this. We show that solutions whose growth factor is below the maximal one are economically meaningless and therefore of no interest. In doing so we show, on the one hand, that Neumann's excellent intuition served him well when he insisted on a unique solution to his model and, on the other hand, that no indecomposability assumption on the economy is needed for this. The topic is closely related to Ricardo's analysis of the determination of the general rate of profit, cast into modern form by Sraffa, and to the well-known wage-profit and accumulation-consumption trade-off frontiers of neoclassical growth theory, which indicates the theoretical and history-of-thought interest of the subject. / === / In the Marx-Neumann version of the Neumann model introduced by Morishima, the use of commodities is split between production and consumption, and wages are determined as the cost of necessary consumption. In such a version it may occur that the equilibrium prices of all goods necessary for consumption are zero, so that the equilibrium wage rate becomes zero too. In fact such a paradoxical case will always arise when the economy is decomposable and the equilibrium is not unique in terms of growth and interest rate. It can be shown that a zero equilibrium wage rate will appear in all equilibrium solutions where growth and interest rate are less than maximal. This is another proof of Neumann's genius and intuition, for he arrived at the uniqueness of equilibrium via an assumption that implied that the economy was indecomposable, a condition relaxed later by Kemeny, Morgenstern and Thompson. This situation also occurs in similar models based on Leontief technology, and such versions of the Marx-Neumann model make the roots of the problem more apparent. Analysis of them also yields an interesting corollary to Ricardo's corn rate of profit: the real cause of the awkwardness is bad specification of the model, in which luxury commodities are introduced without there being a final demand for them, so that their production becomes a waste of resources. Bad model specification shows up as a consumption coefficient incompatible with the given technology in the more general model with joint production and technological choice, for the paradoxical situation implies that the level of consumption could be raised and/or the intensity of labour diminished without lowering the equilibrium rate of growth and interest. This entails wasteful use of resources and indicates again that the equilibrium conditions are improperly specified. It is shown that the conditions for equilibrium can and should be redefined for the Marx-Neumann model without assuming an indecomposable economy, in a way that ensures the existence of an equilibrium unique in terms of the growth and interest rate coupled with a positive value for the wage rate, thus confirming Neumann's intuition.
The proposed solution relates closely to findings of Bromek in a paper correcting Morishima's generalization of wage/profit and consumption/investment frontiers.
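For orientation, a minimal sketch of the equilibrium conditions at issue, in one common textbook formulation of the Marx-Neumann model (the notation is ours, not the paper's): with output matrix $B$, input matrix $A$, labour-input row vector $L$ and consumption bundle $c$ per unit of labour, Morishima's augmented input matrix is $M = A + cL$, and an equilibrium is an intensity vector $x \ge 0$, a price vector $p \ge 0$ and growth and interest factors $\alpha, \beta$ such that

\[
Bx \ge \alpha Mx, \qquad pB \le \beta\,pM, \qquad p\,(B - \alpha M)\,x = 0, \qquad (pB - \beta\,pM)\,x = 0, \qquad pBx > 0,
\]

with $\alpha = \beta$ at equilibrium and the wage given by $w = pc$. The paradox described above is that, in a decomposable economy, equilibria with $\alpha$ below its maximal value may have $pc = 0$, i.e. a zero real wage.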
Abstract:
In this paper cost sharing problems are considered. We focus on problems given by rooted trees, which we call cost-tree problems, and on the induced transferable utility cooperative games, called irrigation games. A formal notion of irrigation games is introduced, and a characterization of the class of these games is provided. The well-known class of airport games (Littlechild and Thompson, 1977) is a subclass of irrigation games. The Shapley value (Shapley, 1953) is probably the most popular solution concept for transferable utility cooperative games. Dubey (1982) and Moulin and Shenker (1992) show, respectively, that Shapley's (1953) and Young's (1985) axiomatizations of the Shapley value are valid on the class of airport games. In this paper we show that Dubey's (1982) and Moulin and Shenker's (1992) results can be proved by applying Shapley's (1953) and Young's (1985) proofs; that is, those results are direct consequences of Shapley's (1953) and Young's (1985) results. Furthermore, we extend Dubey's (1982) and Moulin and Shenker's (1992) results to the class of irrigation games; that is, we provide two characterizations of the Shapley value for cost sharing problems given by rooted trees. We also note that for irrigation games the Shapley value is always stable, that is, it is always in the core (Gillies, 1959).
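For reference, the solution concepts this abstract relies on, in their standard form (not specific to this paper): for a transferable utility game $(N, v)$ the Shapley value of player $i$ is

\[
\phi_i(v) \;=\; \sum_{S \subseteq N\setminus\{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\bigl(v(S\cup\{i\}) - v(S)\bigr),
\]

and, reading an irrigation game as a cost game $c$, the stability claim at the end means that the Shapley value lies in the core: $\sum_{i\in N}\phi_i(c) = c(N)$ and $\sum_{i\in S}\phi_i(c) \le c(S)$ for every coalition $S \subseteq N$.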
Abstract:
We consider the problem of axiomatizing the Shapley value on the class of assignment games. We first show that several axiomatizations of the Shapley value on the class of all TU-games do not characterize this solution on the class of assignment games by providing alternative solutions that satisfy these axioms. However, when considering an assignment game as a communication graph game, where the game is simply the assignment game and the graph is a corresponding bipartite graph in which buyers are connected with sellers only, we show that Myerson's component efficiency and fairness axioms do characterize the Shapley value on the class of assignment games. Moreover, these two axioms have a natural interpretation for assignment games. Component efficiency yields submarket efficiency, stating that the sum of the payoffs of all players in a submarket equals the worth of that submarket, where a submarket is a set of buyers and sellers such that all buyers in this set have zero valuation for the goods offered by the sellers outside the set, and all buyers outside the set have zero valuations for the goods offered by sellers inside the set. Fairness of the graph game solution boils down to valuation fairness, stating that changing only the valuation of one particular buyer for the good offered by a particular seller changes the payoffs of this buyer and seller by the same amount.
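Stated formally (standard formulations of Myerson's axioms, not quoted from the paper): for a graph game solution $f$ on a game $v$ with link set $L$, component efficiency requires

\[
\sum_{i\in C} f_i(v,L) = v(C) \quad\text{for every component } C \text{ of the graph } (N,L),
\]

and fairness requires that deleting a link affects both of its endpoints equally,

\[
f_i(v,L) - f_i\bigl(v, L\setminus\{ij\}\bigr) \;=\; f_j(v,L) - f_j\bigl(v, L\setminus\{ij\}\bigr) \quad\text{for every link } ij \in L.
\]

In the bipartite buyer-seller graph described above, these become the submarket efficiency and valuation fairness properties discussed in the abstract.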
Abstract:
Elemental analysis can become an important piece of evidence to assist the solution of a case. The work presented in this dissertation aims to evaluate the evidential value of the elemental composition of three particular matrices: ink, paper and glass. In the first part of this study, the analytical performance of LIBS and LA-ICP-MS methods was evaluated for paper, writing inks and printing inks. A total of 350 ink specimens were examined, including black and blue gel inks, ballpoint inks, inkjets and toners originating from several manufacturing sources and/or batches. The paper collection set consisted of over 200 paper specimens originating from 20 different paper sources produced by 10 different plants. Micro-homogeneity studies show smaller variation of elemental compositions within a single source (i.e., sheet, pen or cartridge) than the observed variation between different sources (i.e., brands, types, batches). Significant and detectable differences in the elemental profile of the inks and paper were observed between samples originating from different sources (discrimination of 87–100% of samples, depending on the sample set under investigation and the method applied). These results support the use of elemental analysis, using LA-ICP-MS and LIBS, for the examination of documents and provide additional discrimination to the currently used techniques in document examination. In the second part of this study, a direct comparison between four analytical methods (µ-XRF, solution-ICP-MS, LA-ICP-MS and LIBS) was conducted for glass analyses using interlaboratory studies. The data provided by 21 participants were used to assess the performance of the analytical methods in associating glass samples from the same source and differentiating different sources, as well as the use of different match criteria (confidence interval (±6s, ±5s, ±4s, ±3s, ±2s), modified confidence interval, t-test (sequential univariate, p=0.05 and p=0.01), t-test with Bonferroni correction (for multivariate comparisons), range overlap, and Hotelling's T2 tests). Error rates (Type 1 and Type 2) are reported for the use of each of these match criteria and depend on the heterogeneity of the glass sources, the repeatability between analytical measurements, and the number of elements that were measured. The study provided recommendations for analytical performance-based parameters for µ-XRF and LA-ICP-MS as well as the best performing match criteria for both analytical techniques, which can now be applied by forensic glass examiners.
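As an illustration of the simplest family of match criteria listed above, a minimal Python sketch of a mean ± k·s interval criterion applied element by element (the data, element count and k = 4 are hypothetical; the dissertation's actual protocols and error-rate computations are not reproduced here):

import numpy as np

def match_interval(known, questioned, k=4.0):
    """Declare 'no exclusion' if, for every element, the mean of the questioned
    fragment falls inside mean(known) +/- k * std(known)."""
    known = np.asarray(known, dtype=float)          # replicate measurements, shape (n_reps, n_elements)
    questioned = np.asarray(questioned, dtype=float)
    mu, sd = known.mean(axis=0), known.std(axis=0, ddof=1)
    q_mu = questioned.mean(axis=0)
    return bool(np.all(np.abs(q_mu - mu) <= k * sd))

# Hypothetical element ratios for three elements measured in triplicate.
known_glass = [[1.02, 0.51, 3.90], [1.00, 0.50, 4.10], [0.98, 0.52, 4.00]]
questioned_glass = [[1.01, 0.50, 4.00], [0.99, 0.51, 4.05], [1.00, 0.52, 3.95]]
print(match_interval(known_glass, questioned_glass))  # True -> no exclusion under this criterion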
Abstract:
Many tracking algorithms have difficulties dealing with occlusions and background clutter, and consequently do not converge to an appropriate solution. Tracking based on the mean shift algorithm has shown robust performance in many circumstances but still fails, e.g., when encountering dramatic intensity or colour changes in a pre-defined neighbourhood. In this paper, we present a robust tracking algorithm that integrates the advantages of mean shift tracking with those of tracking local invariant features. These features are integrated into the mean shift formulation so that tracking is performed based both on mean shift and feature probability distributions, coupled with an expectation maximisation scheme. Experimental results show robust tracking performance on a series of complicated real image sequences. © 2010 IEEE.
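For context, the generic kernel-based mean shift update that such trackers iterate (the standard formulation; the paper's feature-augmented, EM-coupled variant is not reproduced here): with candidate-window pixel locations $x_i$, bandwidth $h$, kernel-profile derivative $g = -k'$ and histogram-comparison weights $w_i$ (e.g. derived from the Bhattacharyya coefficient between target and candidate colour histograms), the window centre moves as

\[
y_{t+1} \;=\; \frac{\sum_i x_i\, w_i\, g\!\left(\bigl\|\tfrac{y_t - x_i}{h}\bigr\|^{2}\right)}{\sum_i w_i\, g\!\left(\bigl\|\tfrac{y_t - x_i}{h}\bigr\|^{2}\right)},
\]

iterated until the shift $\|y_{t+1} - y_t\|$ falls below a tolerance.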
Abstract:
At the jamming transition, amorphous packings are known to display anomalous vibrational modes with a density of states (DOS) that remains constant at low frequency. The scaling of the DOS at higher packing fractions remains, however, unclear. One might expect to find a simple Debye scaling, but recent results from effective medium theory and the exact solution of mean-field models both predict an anomalous, non-Debye scaling. Being mean-field in nature, however, these solutions are only strictly valid in the limit of infinite spatial dimension, and it is unclear what value they have for finite-dimensional systems. Here, we study packings of soft spheres in dimensions 3 through 7 and find, away from jamming, a universal non-Debye scaling of the DOS that is consistent with the mean-field predictions. We also consider how the soft mode participation ratio evolves as dimension increases.
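In equation form, the two low-frequency behaviours contrasted above are usually written as (exponents quoted from the mean-field literature for context, not taken from this abstract):

\[
D(\omega) \propto \omega^{\,d-1} \quad\text{(Debye, dimension } d\text{)},
\qquad
D(\omega) \sim \frac{\omega^{2}}{\omega_*^{2}} \quad\text{for } \omega \ll \omega_* \quad\text{(mean-field, non-Debye)},
\]

with $D(\omega)$ crossing over to a flat plateau for $\omega \gtrsim \omega_*$, and the characteristic frequency $\omega_*$ vanishing as the jamming transition is approached.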
Abstract:
We report δ18O and minor element (Mg/Ca, Sr/Ca) data acquired by high-resolution, in situ secondary ion mass spectrometry (SIMS) from planktic foraminiferal shells and 100-500 µm sized diagenetic crystallites recovered from a deep-sea record (ODP Site 865) of the Paleocene-Eocene thermal maximum (PETM). The δ18O of crystallites (~1.2 per mil Pee Dee Belemnite (PDB)) is ~4.8 per mil higher than that of planktic foraminiferal calcite (-3.6 per mil PDB), while crystallite Mg/Ca and Sr/Ca ratios are slightly higher and substantially lower than in planktic foraminiferal calcite, respectively. The focused stratigraphic distribution of the crystallites signals an association with PETM conditions; hence, we attribute their formation to early diagenesis initially sourced by seafloor dissolution (burndown) followed by reprecipitation at higher carbonate saturation. The Mg/Ca ratios of the crystallites are an order of magnitude lower than those predicted by inorganic precipitation experiments, which may reflect a degree of inheritance from "donor" phases of biogenic calcite that underwent solution in the sediment column. In addition, SIMS δ18O and electron microprobe Mg/Ca analyses that were taken within a planktic foraminiferal shell yield parallel increases along traverses that coincide with muricae blades on the chamber wall. The parallel δ18O and Mg/Ca increases indicate a diagenetic origin for the blades, but their δ18O value (-0.5 per mil PDB) is lower than that of the crystallites, suggesting that these two phases of diagenetic carbonate formed at different times. Finally, we posit that elevated levels of early diagenesis acted in concert with sediment mixing and carbonate dissolution to attenuate the δ18O decrease signaling PETM warming in "whole-shell" records published for Site 865.
Abstract:
Queueing theory is the mathematical study of ‘queues’ or ‘waiting lines’, where an item from inventory is provided to the customer on completion of service. A typical queueing system consists of a queue and a server. Customers arrive in the system from outside and join the queue in a certain way. The server picks up customers and serves them according to a certain service discipline. Customers leave the system immediately after their service is completed. For queueing systems, queue length, waiting time and busy period are of primary interest to applications. The theory permits the derivation and calculation of several performance measures, including the average waiting time in the queue or the system, mean queue length, traffic intensity, the expected number waiting or receiving service, mean busy period, distribution of queue length, and the probability of encountering the system in certain states, such as empty, full, having an available server, or having to wait a certain time to be served.
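As a concrete special case of the performance measures just listed, the standard closed-form results for the single-server M/M/1 queue with Poisson arrivals at rate $\lambda$ and exponential service at rate $\mu$ (the abstract does not fix a particular model; this is offered only as an illustration, valid for $\rho = \lambda/\mu < 1$):

\[
L = \frac{\rho}{1-\rho}, \qquad
L_q = \frac{\rho^{2}}{1-\rho}, \qquad
W = \frac{1}{\mu-\lambda}, \qquad
W_q = \frac{\rho}{\mu-\lambda}, \qquad
P(\text{system empty}) = 1-\rho,
\]

where $L$ and $L_q$ are the mean numbers in the system and in the queue, and $W$ and $W_q$ the corresponding mean waiting times, linked by Little's law $L = \lambda W$ (and $L_q = \lambda W_q$).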
Abstract:
Wood is in growing demand as a construction material for large buildings. Its qualities as a renewable and aesthetically pleasing material make it attractive to architects. When compared with functionally equivalent products, wood reduces the consumption of non-renewable energy: its processing requires less energy than steel or concrete. Moreover, because of its biological origin, a wooden structure stores biogenic carbon for the lifetime of the building. Now permitted up to six storeys in Canada, large wood buildings pose design challenges. When sizing structures, the connector zones are often the critical points, since stresses are highest there. Structures can then appear massive and restrict architectural innovation. New strategies must therefore be developed to improve mechanical strength in connector zones. Recent work has focused on creating or improving connection types. This study focuses instead on reinforcing the wood used in the connection region. Impregnation was chosen as the reinforcement solution, since the literature shows that wood hardness can be increased with this technique. Applying this reinforcement strategy to black spruce (Picea mariana (Mill.) BSP) for a structural application is the novel element of this research. Although impregnation does not reach the core of the pieces, the low permeability of the species used favours the creation of a thin treated surface layer without requiring a large quantity of chemicals. The impregnation agent is composed of 1,6-hexanediol diacrylate, trimethylolpropane triacrylate and a polyester acrylate oligomer. A second formulation containing SiO2 nanoparticles was used to assess the effect of nanoparticles on the increase in the mechanical strength of the wood. In this project, a vacuum-pressure impregnation process thus served to create a new wood-based material allowing mechanically stronger connections. Embedment tests parallel to the grain with a dowel-type connector were carried out to determine the contribution of the treatment to wood used as a connection element. The scale effect was examined by performing the test with three different bolt diameters (9.525 mm, 12.700 mm and 15.875 mm). In addition, the test was carried out under loading perpendicular to the grain for the medium-diameter bolt (12.700 mm). Digital image correlation was used to analyse the stress distribution in the wood. The results showed higher embedment strength of the wood after treatment, and the effectiveness increases as the bolt diameter decreases. For the test with the 9.525 mm bolt, the product created had a characteristic parallel-to-grain embedment strength 79% higher. Wood stiffness increased by roughly 30%. After treatment, failure by splitting is less frequent and stresses are distributed more widely around the connection region.
The treatment had no significant effect on the mechanical strength of the connection when the bolt was loaded perpendicular to the grain. Likewise, the effect of the nanoparticles in solution was not significant. Despite very low penetration of the liquid into the wood, the densified surface layer created by the treatment is sufficient to produce a new material that is stronger in the connection zones. Reinforcing wood in the connector region should influence the design of large structures. With reinforced connection elements, it will be possible to lengthen beam spans, multiplying architectural possibilities. The reinforcement may also make it possible to reduce beam cross-sections and to use less wood in a building, lowering transport costs and costs related to assembly time. In addition, a stronger connector means that fewer connectors are needed in an assembly, so the supply costs of metal components and on-site installation time can be reduced. The advantages of a higher-performing wood-based material used in connections will help promote wood in large-scale construction and reduce the environmental impact of buildings.
Abstract:
Group supervision is used for support, education and/or monitoring. Despite the potential value of these elements for school staff, it is rarely practised. This mixed methods research, from a critical realist perspective, explored the use of Solution Circles to structure staff supervision groups in three schools. Five circles were run in each school, involving thirty-one participants, eighteen of whom contributed data. Thirteen staff trained as facilitators. The self-efficacy, resilience and anxiety levels of the staff taking part were not found to be significantly different as a result of the intervention. However, a small effect size was noted for self-efficacy, perhaps worthy of further investigation in the context of the small sample size. Thematic analysis of participant feedback (gathered during the last circle, which ran as a Focus Group) indicated the following mechanisms as affecting the value of Solution Circles for staff supervision groups: the structure of the sessions; aspects linked to the groups meeting a ‘need to talk’; elements which helped participants to ‘feel like a team’; and, school context factors. Semi-structured interview data from six facilitators indicated that the structure of the circles, individual characteristics of facilitators, the provision of support for facilitators, and elements of the wider school context, were all mechanisms which affected the facilitation of the programme. Further research might implement elements of these mechanisms and measure their impact.
Abstract:
In recent years, the luxury market has entered a period of very modest growth, which has been dubbed the ‘new normal’, where varying tourist flows, currency fluctuations, and shifted consumer tastes dictate the terms. The modern luxury consumer is a fickle mistress. Especially millennials – people born in the 1980s and 1990s – are the embodiment of this new form of demanding luxury consumer with particular tastes and values. Modern consumers, and specifically millennials, want experiences and free time, and are interested in a brand’s societal position and environmental impact. The purpose of this thesis is to investigate what the luxury value perceptions of millennials in higher education are in Europe, seeing as many of the most prominent luxury goods companies in the world originate from Europe. Perceived luxury value is herein examined from the individual’s perspective. As values and value perceptions are complex constructs, using qualitative research methods is justifiable. The data for this thesis were gathered by means of a group interview. The interview participants all study hospitality management in a private college, and each represents a different nationality. Cultural theories and research on luxury and luxury values provide the scientific foundation for this thesis, and a multidimensional luxury value model is used as a theoretical tool in sorting and analyzing the data. The results show that millennials in Europe value much more than simply modern and hard luxury. Functional, financial, individual, and social aspects are all present in perceived luxury value, but some more in a negative sense than others. Conspicuous, status-seeking consumption is mostly frowned upon, as is the consumption of luxury goods for the sake of satisfying social requisites and peer pressure. Most of the positive value perceptions are attributed to the functional dimension, as luxury products are seen to come with a promise of high quality and reliability, which justifies any price premiums. Ecological and ethical aspects of luxury are already a contemporary trend, but are perceived even more as an important characteristic of luxury in the future. Most importantly, having time is fundamental. Depending on who is asked, luxury can mean anything, just as much as it can mean nothing.
Abstract:
The challenge of detecting a change in the distribution of data is a sequential decision problem that is relevant to many engineering solutions, including quality control and machine and process monitoring. This dissertation develops techniques for exact solution of change-detection problems with discrete time and discrete observations. Change-detection problems are classified as Bayes or minimax based on the availability of information on the change-time distribution. A Bayes optimal solution uses prior information about the distribution of the change time to minimize the expected cost, whereas a minimax optimal solution minimizes the cost under the worst-case change-time distribution. Both types of problems are addressed. The most important result of the dissertation is the development of a polynomial-time algorithm for the solution of important classes of Markov Bayes change-detection problems. Existing techniques for epsilon-exact solution of partially observable Markov decision processes have complexity exponential in the number of observation symbols. A new algorithm, called constellation induction, exploits the concavity and Lipschitz continuity of the value function, and has complexity polynomial in the number of observation symbols. It is shown that change-detection problems with a geometric change-time distribution and identically- and independently-distributed observations before and after the change are solvable in polynomial time. Also, change-detection problems on hidden Markov models with a fixed number of recurrent states are solvable in polynomial time. A detailed implementation and analysis of the constellation-induction algorithm are provided. Exact solution methods are also established for several types of minimax change-detection problems. Finite-horizon problems with arbitrary observation distributions are modeled as extensive-form games and solved using linear programs. Infinite-horizon problems with linear penalty for detection delay and identically- and independently-distributed observations can be solved in polynomial time via epsilon-optimal parameterization of a cumulative-sum procedure. Finally, the properties of policies for change-detection problems are described and analyzed. Simple classes of formal languages are shown to be sufficient for epsilon-exact solution of change-detection problems, and methods for finding minimally sized policy representations are described.
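As a point of reference for the cumulative-sum procedure mentioned above, a minimal Python sketch of the classical CUSUM recursion for i.i.d. observations with known pre- and post-change densities (a textbook construction, not the dissertation's constellation-induction algorithm; the Gaussian densities and the threshold are illustrative assumptions):

import random

def cusum(observations, llr, threshold):
    """Classical CUSUM: g_t = max(0, g_{t-1} + log-likelihood ratio of x_t);
    raise an alarm the first time g_t exceeds the threshold."""
    g = 0.0
    for t, x in enumerate(observations):
        g = max(0.0, g + llr(x))
        if g > threshold:
            return t  # index at which the change is declared
    return None  # no alarm raised

# Illustrative example: the mean of a unit-variance Gaussian shifts from 0 to 1 at t = 50.
llr = lambda x: x - 0.5  # log f1(x)/f0(x) for N(1,1) versus N(0,1)
random.seed(0)
data = [random.gauss(0, 1) for _ in range(50)] + [random.gauss(1, 1) for _ in range(50)]
print(cusum(data, llr, threshold=5.0))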
Abstract:
Skepticism of promised value-added is forcing suppliers to provide tangible evidence of the value they can deliver for customers in industrial markets. Despite this, quantifying customer benefits is thought to be one of the most difficult parts of business-to-business selling. The objective of this research is to identify the desired and perceived customer benefits of KONE JumpLift™ and to improve the overall customer value quantification and selling process of the solution. The study was conducted as a qualitative case analysis including 7 interviews with key stakeholders from three different market areas. The market areas were chosen based on where the offering has been utilized, and the research was conducted through five telephone and two email interviews. The main desired and perceived benefits include many different values, for example economic, functional, symbolic and epistemic value, but they vary across the studied market areas. The most important result of the research was identifying the biggest challenges of selling the offering, which are communicating and proving the potential value to customers. In addition, the sales arguments have different relative importance in the studied market areas, which creates challenges for salespeople to sell the offering effectively. At the managerial level this implies a need to invest in a new sales tool and to train the salespeople.