966 results for alternative methods


Relevance:

30.00%

Publisher:

Abstract:

In today's logistics environment, there is a tremendous need for accurate cost information and cost allocation. Companies searching for the proper solution often come across activity-based costing (ABC) or one of its variations, which utilizes cost drivers to allocate the costs of activities to cost objects. To allocate costs accurately and reliably, the selection of appropriate cost drivers is essential for realizing the benefits of the costing system. The purpose of this study is to validate the transportation cost drivers of a Finnish wholesaler company and ultimately to select the best possible driver alternatives for the company. The use of cost driver combinations as an alternative is also studied. The study is conducted as part of the case company's applied ABC project, using statistical research as the main research method, supported by a theoretical, literature-based method. The main research tools featured in the study are simple and multiple regression analyses, which, together with a practicality analysis based on the literature and observations, form the basis for the advanced methods. The results suggest that the most appropriate cost driver alternatives are delivery drops and internal delivery weight. The use of cost driver combinations is not recommended, as they do not provide substantially better results while simultaneously increasing measurement costs, complexity and effort of use. The use of internal freight cost drivers is also questionable, as the results indicate a weakening trend in their cost allocation capabilities towards the end of the period. Therefore, more research on internal freight cost drivers should be conducted before taking them into use.
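As a rough sketch of the driver-validation step described above, one could regress period transportation costs on each candidate driver and on their combination and compare explanatory power; the file name, column names and data below are hypothetical and do not come from the thesis.

```python
# Hypothetical sketch: regressing period transportation cost on candidate
# cost drivers (delivery drops, internal delivery weight) and comparing
# explanatory power, in the spirit of the simple/multiple regressions above.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("transport_costs.csv")  # assumed columns: cost, drops, int_weight

# Simple regressions: one candidate driver at a time
for driver in ["drops", "int_weight"]:
    X = sm.add_constant(df[[driver]])
    fit = sm.OLS(df["cost"], X).fit()
    print(driver, "R^2 =", round(fit.rsquared, 3))

# Multiple regression for the driver-combination alternative
X = sm.add_constant(df[["drops", "int_weight"]])
print("combination R^2 =", round(sm.OLS(df["cost"], X).fit().rsquared, 3))
```

A higher R-squared alone would not settle the choice; as the abstract notes, measurement cost and practicality weigh against driver combinations even when their fit is marginally better.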

Relevance:

30.00%

Publisher:

Abstract:

The resistance of barnyardgrass (Echinochloa crus-galli) to imidazolinone herbicides is a worldwide problem in paddy fields. A rapid diagnosis is required for the selection of adequate prevention and control practices. The objectives of this study were to develop rapid bioassays to identify resistance to imidazolinone herbicides in barnyardgrass and to evaluate the efficacy of alternative herbicides for the post-emergence control of resistant biotypes. Three experiments were conducted to develop methods for the diagnosis of resistance to imazethapyr and imazapyr + imazapic in barnyardgrass at the seed, seedling and tiller stages, and a pot experiment was carried out to determine the efficacy of six herbicides applied post-emergence to 13 biotypes of barnyardgrass resistant to imidazolinones. The seed soaking bioassay was not able to differentiate the resistant and susceptible biotypes. The resistance of barnyardgrass to imidazolinones was effectively discriminated in the seedling and tiller bioassays seven days after incubation at concentrations of 0.001 and 0.0001 mM, respectively, for both imazethapyr and imazapyr + imazapic. The biotypes identified as resistant to imidazolinones showed different patterns of susceptibility to penoxsulam, bispyribac-sodium and pyrazosulfuron-ethyl, and were all controlled with profoxydim and cyhalofop-butyl. The seedling and tiller bioassays are effective in the diagnosis of barnyardgrass resistance to imidazolinone herbicides, providing an on-season opportunity to identify the need for alternative herbicides to be applied post-emergence for the control of the resistant biotypes.

Relevance:

30.00%

Publisher:

Abstract:

Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied to improve the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as optimal operating conditions are needed. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography, with the assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse. It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows the feasible range of operating parameters that lead to the desired product purities to be predicted. It can be applied for the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is used to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable to high performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects. The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach for the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach performs better the higher the column efficiency and the lower the purity constraints.
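For reference, the competitive Langmuir adsorption isotherm mentioned above has the standard two-component form shown below; the notation is the conventional one rather than the thesis's own.

```latex
q_i = \frac{N\, b_i c_i}{1 + b_1 c_1 + b_2 c_2}, \qquad i = 1, 2
```

Here q_i and c_i are the adsorbed-phase and fluid-phase concentrations of component i, N is the saturation capacity, and b_i is the adsorption equilibrium constant of component i.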

Relevance:

30.00%

Publisher:

Abstract:

Knowledge of the radiochemical purity of radiopharmaceuticals is mandatory, and it can be evaluated by several methods and techniques. Planar chromatography is the technique normally employed in nuclear medicine since it is simple, rapid and usually of low cost. There is no standard system for the chromatographic technique, but price, separation efficiency and short execution time must be considered. We have studied alternative systems using common chromatographic stationary phases and alcohol or alcohol:chloroform mixtures as the mobile phase, using the lipophilic radiopharmaceutical [99mTc(MIBI)6]+ as a model. Whatman 1 modified phase paper with absolute ethanol, Whatman 1 paper with methanol:chloroform (25:75), Whatman 3MM paper with ethanol:chloroform (25:75), and the more expensive ITLC-SG with 1-propanol:chloroform (10:90) were suitable systems for the direct determination of the radiochemical purity of [99mTc(MIBI)6]+, since impurities such as 99mTc-reduced-hydrolyzed (RH), 99mTcO4- and the [99mTc(cysteine)2] complex were completely separated from the radiopharmaceutical, which moved toward the front of the chromatographic systems while the impurities were retained at the origin. The time required for analysis was 4 to 15 min, which is appropriate for nuclear medicine routines.
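In a strip system of this kind, where the impurities stay at the origin and the complex moves with the front, the purity figure follows from simple arithmetic on the measured activities; a minimal sketch with invented counts:

```python
# Hypothetical counts from cutting the developed strip into origin and
# solvent-front portions and counting each in a dose calibrator or well counter.
origin_counts = 1250     # impurities retained at the origin
front_counts = 48750     # [99mTc(MIBI)6]+ migrating with the solvent front

rcp = 100.0 * front_counts / (origin_counts + front_counts)
print(f"Radiochemical purity: {rcp:.1f}%")   # -> 97.5%
```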

Relevance:

30.00%

Publisher:

Abstract:

SUMMARY: Organizational creativity – hegemonic and alternative discourses. Over the course of recent developments in the societal and business environment, the concept of creativity has been brought into new arenas. The rise of ‘creative industries’ and the idea of creativity as a form of capital have attracted the interest of business and management professionals as well as academics. As the notion of creativity has been adopted in the organization studies literature, the concept of organizational creativity has been introduced to refer to creativity that takes place in an organizational context. This doctoral thesis focuses on organizational creativity, and its purpose is to explore and problematize the hegemonic organizational creativity discourse and to provide alternative viewpoints for theorizing about creativity in organizations. Taking a discourse theory approach, this thesis first provides an outline of the currently predominant, i.e. hegemonic, discourse on organizational creativity, which is explored with regard to themes, perspectives, methods and paradigms. Second, this thesis consists of five studies that act as illustrations of certain alternative viewpoints. Through these exemplary studies, this thesis sheds light on the limitations and taken-for-granted aspects of the hegemonic discourse and discusses what these alternative viewpoints could offer for the understanding of, and theorizing about, organizational creativity. This study leans on the assumption that the development of organizational creativity knowledge and the related discourse is not inevitable or progressive but rather contingent. The organizational creativity discourse has developed in a certain direction, meaning that some themes, perspectives, and methods, as well as assumptions, values, and objectives, have gained a hegemonic position over others, and are therefore often taken for granted and considered valid and relevant. The hegemonization of certain aspects, however, contributes to the marginalization of others. The thesis concludes that the hegemonic discourse on organizational creativity is based on extensive coverage of certain themes and perspectives, such as those focusing on individual cognitive processes, motivation, or organizational climate and their relation to creativity, to name a few. The limited focus on some themes and the confinement to certain prevalent perspectives, however, result in the marginalization of other themes and perspectives. The negative, often unintended, consequences, implications, and side effects of creativity, the factors that might hinder or prevent creativity, and a deeper inquiry into the ontology and epistemology of creativity have attracted relatively marginal interest. The material embeddedness of organizational creativity, in other words the physical organizational environment as well as the human body and its non-cognitive resources, has largely been overlooked in the hegemonic discourse, although there are studies in this area that give reason to believe that they might prove relevant for the understanding of creativity. The hegemonic discourse is based on an individual-centered understanding of creativity which overattributes creativity to the individual and his/her cognitive capabilities, while simultaneously neglecting how, for instance, the physical environment, artifacts, social dynamics and interactions condition organizational creativity. Due to historical reasons, quantitative as well as qualitative yet functionally-oriented studies have predominated in the organizational creativity discourse, although studies falling into the interpretationist paradigm have gradually become more popular. The two radical paradigms, as well as methodological and analytical approaches typical of radical research, can be considered to hold a marginal position in the field of organizational creativity. The hegemonic organizational creativity discourse has provided extensive findings related to many aspects of organizational creativity, although the conceptualizations and understandings of organizational creativity in the hegemonic discourse are also in many respects limited and one-sided. The hegemonic discourse is based on the assumption that creativity is desirable, good, necessary, or even obligatory, and should be encouraged and nourished. The conceptualizations of creativity favor the kind of creativity which is useful, valuable and can be harnessed for productivity. The current conceptualization is limited to the type of creativity that is acceptable and fits the managerial ideology, and washes out any risky, seemingly useless, or negative aspects of creativity. It also limits the possible meanings and representations that ‘creativity’ has in the respective discourse, excluding many meanings of creativity encountered in other discourses. The excessive focus on creativity that is good, positive, productive and fits the managerial agenda, while ignoring other forms and aspects of creativity, contributes to the dilution of the notion. Practices aimed at encouraging this kind of creativity may actually entail a risk of fostering moderate alterations rather than more radical novelty, as well as management and organizational practices which limit creative endeavors rather than increase their likelihood. The thesis concludes that, although not often given the space and attention they deserve, there are alternative conceptualizations and understandings of organizational creativity which embrace a broader notion of creativity. The inability to accommodate the ‘other’ understandings and viewpoints within the organizational creativity discourse runs the risk of misrepresenting the complex and many-sided phenomenon of creativity in an organizational context.
Keywords: Organizational creativity, creativity, organization studies, discourse theory, hegemony

Relevance:

30.00%

Publisher:

Abstract:

Pairs trading is an algorithmic trading strategy based on the historical co-movement of two separate assets, with trades executed on the basis of the degree of relative mispricing. The purpose of this study is to explore a new, alternative copula-based method for pairs trading. The objective is to find out whether the copula method generates more trading opportunities and higher profits than the more traditional distance and cointegration methods applied extensively in previous empirical studies. The methods are compared by selecting the top five pairs from the stocks of large and medium-sized companies in the Finnish stock market. The research period covers the years 2006-2015. All the methods prove to be profitable, and the Finnish stock market is found suitable for pairs trading. However, the copula method does not generate more trading opportunities or higher profits than the other methods. It seems that the limitations of the more traditional methods are not too restrictive for this particular sample data.
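A minimal sketch of the copula idea (not necessarily the exact model used in the thesis): returns are mapped to uniforms by their empirical ranks, a Gaussian copula is fitted, and the conditional probability of one asset given the other serves as the relative-mispricing signal. The placeholder data, thresholds and seed below are assumptions.

```python
# Sketch of a Gaussian-copula mispricing signal for one pair (hypothetical data).
import numpy as np
from scipy import stats

def to_uniform(x):
    """Empirical probability integral transform (ranks scaled into (0, 1))."""
    return stats.rankdata(x) / (len(x) + 1)

# r1, r2 stand in for the daily log returns of the two stocks in the pair.
rng = np.random.default_rng(0)
r1 = rng.normal(size=500)
r2 = 0.8 * r1 + 0.6 * rng.normal(size=500)    # co-moving placeholder series

u1, u2 = to_uniform(r1), to_uniform(r2)
z1, z2 = stats.norm.ppf(u1), stats.norm.ppf(u2)
rho = np.corrcoef(z1, z2)[0, 1]                # Gaussian-copula correlation

# Conditional probability P(U1 <= u1 | U2 = u2) under the Gaussian copula.
h = stats.norm.cdf((z1 - rho * z2) / np.sqrt(1 - rho ** 2))

# Relative mispricing: asset 1 looks "too low" vs asset 2 when h is small.
long_1_short_2 = h < 0.05
short_1_long_2 = h > 0.95
print(long_1_short_2.sum(), short_1_long_2.sum())
```

The distance and cointegration benchmarks would replace the conditional-probability signal with, respectively, a normalized price-spread threshold and a z-score on the cointegration residual.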

Relevance:

30.00%

Publisher:

Abstract:

The future of paying in the age of digitalization is a topic that includes varied visions. This master’s thesis explores images of the future of paying in the Single Euro Payments Area (SEPA) up to 2020 and 2025 through the views of experts specialized in paying. The study was commissioned by a credit management company in order to obtain more detailed information about the future of paying. Specifically, this thesis investigates what could be the most used payment methods in the future, what items could work as a medium of exchange in 2020, and how they will evolve towards the year 2025. Changing consumer behavior, trends connected to payment methods, and the security and privacy issues of new cashless payment methods were also part of this study. In the empirical part of the study, the experts’ ideas about probable and preferable future images of paying were investigated through a two-round Disaggregative Delphi method. The questionnaire included numeric statements and open questions. Three alternative future images were created with the help of cluster analysis: “Unsurprising Future”, “Technology Driven Future” and “The Age of the Customer”. The plausible images had similarities and differences, which were compared with the previous studies in the literature review. The study’s findings were based on the similarities of the future images and on the answers to the open questions received through the questionnaire. The main conclusion of the study was that the development of technology will both unify and diversify SEPA; the trend in 2020 seems to be towards more cashless payment methods, but their usage depends on the countries’ financial possibilities and customer preferences. Mobile payments, cards and cash will be the main payment methods, but the banks will face competitors from outside the financial sector. Wearable payment methods and NFC technology are seen as widely growing trends, while subcutaneous payment devices will likely keep their niche position until 2025. In the meantime, security and privacy issues are expected to increase because of identity thefts and various frauds. Simultaneously, privacy will lose its meaning to younger consumers who are used to sharing their transaction and personal data with third parties in order to get access to attractive services. Easier access to consumers’ transaction data will probably open the door for hackers and cause new risks in paying processes. There exist many roads to the future, and this study was not an attempt to give any complete answers about it, even if some plausible assumptions about the future’s course were provided.

Relevance:

30.00%

Publisher:

Abstract:

This paper employs the one-sector Real Business Cycle model as a testing ground for four different procedures to estimate Dynamic Stochastic General Equilibrium (DSGE) models. The procedures are: 1) Maximum Likelihood, with and without measurement errors and incorporating Bayesian priors, 2) Generalized Method of Moments, 3) Simulated Method of Moments, and 4) Indirect Inference. Monte Carlo analysis indicates that all procedures deliver reasonably good estimates under the null hypothesis. However, there are substantial differences in statistical and computational efficiency in the small samples currently available to estimate DSGE models. GMM and SMM appear to be more robust to misspecification than the alternative procedures. The implications of the stochastic singularity of DSGE models for each estimation method are fully discussed.
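To illustrate the Simulated Method of Moments logic referred to above (not the paper's actual RBC implementation), a parameter can be estimated by matching moments of model-simulated series to data moments; the toy AR(1) model, moment choices and sample sizes below are assumptions.

```python
# Toy SMM: estimate the persistence of an AR(1) process by matching the
# variance and first-order autocovariance of simulated series to the data.
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_ar1(rho, T, seed=0):
    rng = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.normal()
    return y

def moments(y):
    return np.array([np.var(y), np.cov(y[:-1], y[1:])[0, 1]])

data = simulate_ar1(0.9, 500, seed=42)          # stand-in for observed data
m_data = moments(data)

def smm_objective(rho):
    # Fixed seeds = common random numbers across candidate parameter values.
    sims = [moments(simulate_ar1(rho, 500, seed=s)) for s in range(20)]
    diff = np.mean(sims, axis=0) - m_data
    return diff @ diff                           # identity weighting matrix

res = minimize_scalar(smm_objective, bounds=(0.0, 0.99), method="bounded")
print("SMM estimate of rho:", round(res.x, 3))
```

GMM would replace the simulated moments with analytical moment conditions, and Indirect Inference would replace the moments with the parameters of an auxiliary model fitted to both data and simulations.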

Relevance:

30.00%

Publisher:

Abstract:

Context. Case-control studies are very frequently used by epidemiologists to evaluate the impact of certain exposures on a particular disease. These exposures may be represented by several time-dependent variables, and new methods are needed to estimate their effects accurately. Indeed, logistic regression, which is the conventional method for analyzing case-control data, does not directly account for changes in covariate values over time. By contrast, survival analysis methods such as the Cox proportional hazards model can directly incorporate time-dependent covariates representing individual exposure histories. However, this requires careful handling of the risk sets because of the over-sampling of cases, relative to controls, in case-control studies. As shown in a previous simulation study, the optimal definition of risk sets for the analysis of case-control data remains to be elucidated, and to be studied in the case of time-dependent variables. Objective: The general objective is to propose and study new versions of the Cox model for estimating the impact of time-varying exposures in case-control studies, and to apply them to real case-control data on lung cancer and smoking. Methods. I identified new, potentially optimal, risk set definitions (the Weighted Cox model and the Simple weighted Cox model), in which different weights were assigned to cases and controls in order to reflect the proportions of cases and non-cases in the source population. The properties of the exposure effect estimators were studied by simulation. Different aspects of exposure were generated (intensity, duration, cumulative exposure). The generated case-control data were then analyzed with different versions of the Cox model, including the existing and new risk set definitions, as well as with conventional logistic regression, for comparison. The different regression models were then applied to real case-control data on lung cancer. The estimates of the effects of the different smoking variables obtained with the different methods were compared with each other and with the simulation results. Results. The simulation results show that the estimates from the proposed new weighted Cox models, especially those from the Weighted Cox model, are much less biased than the estimates from existing Cox models that simply include or exclude the future cases from each risk set. Moreover, the estimates from the Weighted Cox model were slightly, but systematically, less biased than those from logistic regression. The application to the real data shows larger differences between the estimates from logistic regression and from the weighted Cox models for some time-dependent smoking variables. Conclusions. The results suggest that the proposed new weighted Cox model could be an interesting alternative to the logistic regression model for estimating the effects of time-dependent exposures in case-control studies.
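As a rough sketch of the general idea of weighting cases and controls in a Cox fit, the lifelines library accepts subject-level weights; the weighting scheme, column names and data below are placeholders and are not the Weighted Cox model defined in the thesis.

```python
# Sketch: Cox proportional hazards fit with subject-level weights, as one might
# weight cases and controls to mirror their proportions in the source population.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("case_control.csv")   # assumed columns: time, event, exposure, is_case

# Placeholder weights: down-weight the over-sampled cases relative to controls.
case_fraction_in_population = 0.01     # assumed value, for illustration only
sample_case_fraction = df["is_case"].mean()
df["w"] = df["is_case"].map({
    1: case_fraction_in_population / sample_case_fraction,
    0: (1 - case_fraction_in_population) / (1 - sample_case_fraction),
})

cph = CoxPHFitter()
cph.fit(df[["time", "event", "exposure", "w"]],
        duration_col="time", event_col="event",
        weights_col="w", robust=True)           # robust SEs with non-integer weights
cph.print_summary()
```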

Relevance:

30.00%

Publisher:

Abstract:

Acute lymphoblastic leukemia (ALL) is a complex genetic disease. Although this hematological malignancy is the most frequent pediatric cancer, its causes remain unknown. Previous studies have shown that the risk of childhood ALL may be influenced by genes acting in xenobiotic metabolism, in the maintenance of genomic integrity and in the response to oxidative stress, as well as by environmental factors. During my doctoral studies, I attempted to further dissect the genetic basis of childhood ALL by postulating that susceptibility to this disease is modulated, at least in part, by genetic variants acting in two fundamental biological pathways: the G1/S cell cycle checkpoint and the repair of DNA double-strand breaks. Using a unique approach combining the analysis of a case-control cohort with a cohort of child-parent trios, I carried out a candidate gene/pathway association study. I thus evaluated the role of variants from the promoter sequences of 12 cell cycle genes and 7 DNA repair pathway genes in susceptibility to ALL. Such polymorphisms in the promoter region (pSNPs) could disrupt the binding of transcription factors and lead to differences in gene expression levels that may influence disease risk. By combining different analytical methods, I evaluated the role of different genetic mechanisms in the development of childhood ALL. I first studied associations with independent genes/variants, and functional assays were performed to evaluate the impact of the pSNPs on transcription factor binding and allele-specific promoter activity. These analyses led to four publications. These susceptibility genes are unlikely to act alone; I therefore used an integrative approach to explore the possibility that several variants within the same or related biological pathways could modulate disease risk; this work has been submitted for publication. In addition, the early development of ALL, possibly even in utero, suggests that the parents, and more particularly the mother, could play an important role in the development of this disease in the child. In a simulation study, I evaluated the performance of existing analytical methods for detecting fetal-maternal effects under a hybrid trio/case-control design. I also investigated the impact of genetic effects acting through the mother on susceptibility to ALL. This study, recently published, was the first to show that the risk of childhood leukemia can be modulated by the mother's genotype. In conclusion, my doctoral studies identified new susceptibility genes for pediatric ALL and highlighted the role of the cell cycle and the DNA repair pathway in leukemogenesis. Ultimately, this work will lead to a better understanding of the genetic basis of ALL, and to the development of clinical tools that will improve the detection, diagnosis and treatment of childhood leukemia.

Relevance:

30.00%

Publisher:

Abstract:

Modern computer systems are plagued with stability and security problems: applications lose data, web servers are hacked, and systems crash under heavy load. Many of these problems or anomalies arise from rare program behavior caused by attacks or errors. A substantial percentage of web-based attacks are due to buffer overflows. Many methods have been devised to detect and prevent anomalous situations that arise from buffer overflows. The current state of the art in anomaly detection systems is relatively primitive and depends mainly on static code checking to take care of buffer overflow attacks. For protection, Stack Guards and Heap Guards are also used in a wide variety of forms. This dissertation proposes an anomaly detection system based on the frequencies of system calls in the system call trace. System call traces represented as frequency sequences are profiled using sequence sets. A sequence set is identified by the starting sequence and the frequencies of specific system calls. The deviation of the current input sequence from the corresponding normal profile in the frequency pattern of system calls is computed and expressed as an anomaly score. A simple Bayesian model is used for accurate detection. Experimental results are reported which show that the frequency of system calls, represented using sequence sets, captures the normal behavior of programs under normal conditions of usage. This captured behavior allows the system to detect anomalies with a low rate of false positives. Data are presented which show that a Bayesian network on frequency variations responds effectively to induced buffer overflows. It can also help administrators to detect deviations in program flow introduced due to errors.
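The frequency-profile idea can be sketched roughly as follows; the trace format, smoothing constant and example traces are assumptions, and the dissertation's sequence-set and Bayesian machinery is richer than this.

```python
# Sketch: score a system-call trace against a normal frequency profile.
from collections import Counter
import math

def frequency_profile(traces):
    """Relative frequency of each system call over a set of normal traces."""
    counts = Counter(call for trace in traces for call in trace)
    total = sum(counts.values())
    return {call: n / total for call, n in counts.items()}

def anomaly_score(trace, profile, eps=1e-6):
    """Average negative log-likelihood of the trace under the normal profile;
    higher scores mean larger deviation from normal behaviour."""
    return -sum(math.log(profile.get(call, eps)) for call in trace) / len(trace)

normal = [["open", "read", "read", "write", "close"]] * 50   # placeholder traces
profile = frequency_profile(normal)

print(anomaly_score(["open", "read", "write", "close"], profile))    # low score
print(anomaly_score(["execve", "mprotect", "execve"], profile))      # high score
```

A detection threshold on the score, or a Bayesian classifier taking the per-call frequency deviations as evidence, would then decide whether the running program is behaving anomalously.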

Relevance:

30.00%

Publisher:

Abstract:

In the tropics, a large number of smallholder farms contribute significantly to food security by raising pigs and poultry for domestic consumption and for sale on local markets. The high cost and, sometimes, the lack of availability of commercial protein supplements are among the main limitations to efficient animal production by smallholders. Locally-grown forages and grain legumes offer ecological benefits such as nitrogen fixation, soil improvement, and erosion control, which contribute to improved cropping efficiency. Besides these agronomic assets, they can be used as animal feeds in mixed farming systems. In this paper we review options for including locally-grown forages and grain legumes as alternative protein sources in the diets of pigs and poultry in order to reduce farmers’ dependence on externally-purchased protein concentrates. The potential nutritive value of a wide range of forages and grain legumes is presented and discussed. The influence of dietary fibre and plant secondary metabolite contents, and their antinutritive consequences on feed intake, digestive processes and animal performance, are considered according to the varying composition of these compounds in the different plant species and cultivars covered in this review. Finally, methods to overcome the antinutritive attributes of the plant secondary metabolites using heat, chemical or biological treatments are reviewed with regard to their efficiency and their suitability in low-input farming systems.

Relevance:

30.00%

Publisher:

Abstract:

In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to completely eliminate this bias, but large samples are needed. Partial Least Squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on the data of Bisbe and Otley (in press), which examine the interaction effect of innovation and style of use of budgets on performance. Sizeable differences emerge between OLS and disattenuated regression.
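For intuition, the core of disattenuated regression is the classical correction of an observed correlation for scale unreliability; a minimal sketch, assuming the reliabilities of the summated scales are known:

```python
# Sketch: disattenuating a correlation between two summated rating scales
# using their reliability estimates (e.g. Cronbach's alpha).
import math

r_xy = 0.35                  # observed correlation between the scales (assumed)
rel_x, rel_y = 0.80, 0.70    # reliability estimates of each scale (assumed)

r_true = r_xy / math.sqrt(rel_x * rel_y)   # classic correction for attenuation
print(round(r_true, 3))                    # -> approximately 0.468
```

The corrected correlations (and, for interactions, those involving the product term) then replace the observed ones when the regression coefficients are computed, which is what makes the approach usable with the small samples typical of this literature.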

Relevance:

30.00%

Publisher:

Abstract:

We compare correspondence analysis to the logratio approach based on compositional data. We also compare correspondence analysis and an alternative approach using the Hellinger distance for representing categorical data in a contingency table. We propose a coefficient which globally measures the similarity between these approaches. This coefficient can be decomposed into several components, one for each principal dimension, indicating the contribution of the dimensions to the difference between the two representations. These three methods of representation can produce quite similar results. One illustrative example is given.
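As a rough illustration of the two geometries being compared, the chi-square distance (underlying correspondence analysis) and the Hellinger distance between two row profiles of a contingency table can be computed as below; the small table is invented.

```python
# Sketch: chi-square distance vs Hellinger distance between two row profiles.
import numpy as np

N = np.array([[30, 10, 60],
              [20, 25, 55]], dtype=float)      # invented 2 x 3 contingency table

profiles = N / N.sum(axis=1, keepdims=True)    # row profiles
col_masses = N.sum(axis=0) / N.sum()           # marginal (average) column profile

p, q = profiles
chi2_dist = np.sqrt(np.sum((p - q) ** 2 / col_masses))      # CA metric
hellinger = np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) # up to a 1/sqrt(2) factor

print(round(chi2_dist, 3), round(hellinger, 3))
```

The chi-square metric weights each category by the inverse of its marginal mass, whereas the Hellinger metric does not depend on the margins, which is the essential difference the proposed similarity coefficient quantifies dimension by dimension.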

Relevance:

30.00%

Publisher:

Abstract:

The statistical analysis of literary style is the part of stylometry that compares measurable characteristics in a text that are rarely controlled by the author with those in other texts. When the goal is to settle authorship questions, these characteristics should relate to the author’s style and not to the genre, epoch or editor, and they should be such that their variation between authors is larger than the variation within comparable texts from the same author. For an overview of the literature on stylometry and some of the techniques involved, see for example Mosteller and Wallace (1964, 82), Herdan (1964), Morton (1978), Holmes (1985), Oakes (1998) or Lebart, Salem and Berry (1998). Tirant lo Blanc, a book of chivalry, is the main work in Catalan literature and was hailed as “the best book of its kind in the world” by Cervantes in Don Quixote. Considered by writers like Vargas Llosa or Damaso Alonso to be the first modern novel in Europe, it has been translated several times into Spanish, Italian and French, with modern English translations by Rosenthal (1996) and La Fontaine (1993). The main body of this book was written between 1460 and 1465, but it was not printed until 1490. There is an intense and long-lasting debate around its authorship, sprouting from its first edition, where the introduction states that the whole book is the work of Martorell (1413?-1468), while at the end it is stated that the last one fourth of the book is by Galba (?-1490), written after the death of Martorell. Some of the authors who support the theory of single authorship are Riquer (1990), Chiner (1993) and Badia (1993), while some of those supporting double authorship are Riquer (1947), Coromines (1956) and Ferrando (1995). For an overview of this debate, see Riquer (1990). Neither of the two candidate authors left any text comparable to the one under study, and therefore discriminant analysis cannot be used to help classify chapters by author. By using sample texts encompassing about ten percent of the book, and looking at word length and at the use of 44 conjunctions, prepositions and articles, Ginebra and Cabos (1998) detect heterogeneities that might indicate the existence of two authors. By analyzing the diversity of the vocabulary, Riba and Ginebra (2000) estimate the stylistic boundary to be near chapter 383. Following the lead of the extensive literature, this paper looks into word length, the use of the most frequent words, and the use of vowels in each chapter of the book. Given that the features selected are categorical, this leads to three contingency tables of ordered rows and therefore to three sequences of multinomial observations. Section 2 explores these sequences graphically, observing a clear shift in their distribution. Section 3 describes the problem of estimating a sudden change-point in those sequences. In the following sections we propose various ways to estimate change-points in multinomial sequences: the method in Section 4 involves fitting models for polytomous data; the one in Section 5 fits gamma models onto the sequence of chi-square distances between each row profile and the average profile; the one in Section 6 fits models onto the sequence of values taken by the first component of the correspondence analysis, as well as onto sequences of other summary measures such as the average word length. In Section 7 we fit models onto the marginal binomial sequences to identify the features that distinguish the chapters before and after that boundary. Most methods rely heavily on the use of generalized linear models.
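A minimal sketch of the change-point idea, assuming a single binomial feature per chapter (for instance, the proportion of long words): the boundary is estimated by maximizing the likelihood of a two-segment binomial model. The chapter counts below are invented, and the paper's actual models (polytomous, gamma and correspondence-analysis based) are richer.

```python
# Sketch: maximum-likelihood change-point for a sequence of binomial counts,
# one (successes, trials) pair per chapter.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)
n = np.full(100, 400)                                  # words examined per chapter (invented)
p_true = np.where(np.arange(100) < 70, 0.30, 0.38)     # stylistic shift after "chapter" 70
k = rng.binomial(n, p_true)

def split_loglik(tau):
    """Log-likelihood with one rate before chapter tau and another from tau onwards."""
    ll = 0.0
    for seg in (slice(0, tau), slice(tau, len(k))):
        p_hat = k[seg].sum() / n[seg].sum()            # segment-wise MLE of the rate
        ll += binom.logpmf(k[seg], n[seg], p_hat).sum()
    return ll

taus = range(2, len(k) - 1)
tau_hat = max(taus, key=split_loglik)
print("estimated change-point at chapter", tau_hat)
```

Replacing the binomial likelihood with a multinomial one per contingency-table row, or with a gamma model on the chi-square distances, gives the flavour of the alternatives compared in the paper.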