962 results for Medicine Research Statistical methods
Sales tax enforcement: An empirical analysis of compliance enforcement methodologies and pathologies
Abstract:
Most research on tax evasion has focused on the income tax; sales tax evasion has been largely ignored and dismissed as immaterial. This paper explores the differences between income tax and sales tax evasion and demonstrates that sales tax enforcement deserves, and requires, different tools to achieve compliance. Specifically, the major enforcement problem with the sales tax is not evasion: it is theft perpetrated by companies that act as collection agents for the state. Companies engage in a principal-agent relationship with the state, and many retain funds collected as an agent of the state for private use. As such, the act of sales tax theft bears more resemblance to embezzlement than to income tax evasion. It has long been assumed that the sales tax is nearly evasion free, and state revenue departments report voluntary compliance in a manner that perpetuates this myth. Current sales tax compliance enforcement methodologies are similar in form to income tax compliance enforcement methodologies and are based largely on trust. The primary focus is on delinquent filers, with a very small percentage of businesses subject to audit. As a result, a very large group of noncompliant businesses file on time and fly below the radar while stealing millions of taxpayer dollars. The author used a variety of statistical methods with actual field data from the operations of the Southern Region Criminal Investigations Unit of the Florida Department of Revenue to evaluate current and proposed sales tax compliance enforcement methodologies in a quasi-experimental, time-series research design and to set forth a typology of sales tax evaders. The study shows that current estimates of voluntary compliance in sales tax systems are seriously and significantly overstated and that current enforcement methodologies are inadequate to identify the majority of violators and enforce compliance. Sales tax evasion is modeled using the theory of planned behavior and Cressey's fraud triangle, and it is demonstrated that proactive enforcement activities, characterized by substantial contact with non-delinquent taxpayers, result in a superior ability to identify noncompliance and provide a structure through which noncompliant businesses can be rehabilitated.
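For illustration, a minimal sketch of a segmented (interrupted) time-series regression, one common way to operationalize the quasi-experimental, time-series design the abstract mentions. The abstract does not specify the exact models used; all data and variable names below are simulated placeholders, not the study's field data.

```python
# Hypothetical interrupted time-series sketch: level and slope change in
# compliance-detection counts after a proactive-enforcement intervention.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 48                                    # monthly observations (simulated)
t = np.arange(n)
post = (t >= 24).astype(float)            # intervention begins at month 24
# Simulated outcome: baseline trend plus a level and slope shift afterward
y = 10 + 0.1 * t + 6 * post + 0.3 * post * (t - 24) + rng.normal(0, 1.5, n)

X = sm.add_constant(np.column_stack([t, post, post * (t - 24)]))
model = sm.OLS(y, X).fit()
print(model.params)   # [baseline, pre-trend, level change, slope change]
```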
Abstract:
In the wake of the "9-11" terrorist attacks, the U.S. Government has turned to information technology (IT) to address a lack of information sharing among law enforcement agencies. This research determined if and how information-sharing technology helps law enforcement by examining the differences in perception of the value of IT between law enforcement officers who have access to automated regional information sharing and those who do not. It also examined the effect of potential intervening variables such as user characteristics, training, and experience on the officers' evaluation of IT. The sample was limited to 588 officers from two sheriff's offices; one (the study group) uses information sharing technology, the other (the comparison group) does not. Triangulated methodologies included surveys, interviews, direct observation, and a review of agency records. Data analysis involved the following statistical methods: descriptive statistics, chi-square tests, factor analysis, principal component analysis, Cronbach's alpha, Mann-Whitney tests, analysis of variance (ANOVA), and Scheffé post hoc analysis. Results indicated a significant difference between groups: the study group perceived information sharing technology as a greater factor in solving crime and in increasing officer productivity, and was more satisfied with the data available to it. As to the number of arrests made, information sharing technology did not make a difference. Analysis of the potential intervening variables revealed several remarkable results. The presence of a strong performance management imperative (in the comparison sheriff's office) appeared to be a factor in case clearances and arrests, technology notwithstanding. As to the influence of user characteristics, level of education did not influence a user's satisfaction with technology, but user-satisfaction scores differed significantly with years of experience as a law enforcement officer and with the amount of computer training, suggesting a significant but weak relationship. This study therefore finds that information sharing technology assists law enforcement officers in doing their jobs. It also suggests that other variables such as computer training, experience, and management climate should be accounted for when assessing the impact of information technology.
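For illustration, a hedged sketch of one of the listed methods, a Mann-Whitney U test comparing perceived IT value between the two groups; the scores below are simulated stand-ins for the survey's Likert-style responses, not the actual data.

```python
# Simulated two-group comparison in the spirit of the study's design.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
study_group = rng.integers(3, 6, size=300)       # agency with information sharing
comparison_group = rng.integers(1, 5, size=288)  # agency without it

stat, p = mannwhitneyu(study_group, comparison_group, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4f}")
```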
Abstract:
This research aimed to analyse the effect of different territorial divisions on the random fluctuation of socio-economic indicators related to social determinants of health. It is an ecological study combining statistical methods for individual-level and aggregate data analysis, using five databases derived from the 2010 Brazilian demographic census: overall results of the sample by weighting area. The data were grouped into the following levels: households; weighting areas; cities; Immediate Urban Associated Regions; and Intermediate Urban Associated Regions. A theoretical model related to social determinants of health was used, with "household with death" as the dependent variable and the following independent variables: Black race; income; non-attendance at childcare or school; illiteracy; and low schooling. The data were analysed with individual-level Poisson regression, multilevel Poisson regression and multiple linear regression, in light of the theoretical framework of the area. A greater proportion of households with deaths was identified among those with at least one resident who was Black, lower-income, illiterate, not attending (or never having attended) school or day-care, or less educated. The adjusted model showed that the highest adjusted prevalence ratio was related to income, with a risk value of 1.33 for households with at least one resident whose average personal income was below R$ 655.00 (Brazilian currency). The multilevel analysis demonstrated a context effect when the variables were subjected to the effects of areas, insofar as the random effects were significant for all models, with higher prevalence rates in the areas with smaller dimensions: weighting areas, with a coefficient of 0.035, and cities, with a coefficient of 0.024. The ecological analyses showed that the variables income and low schooling had explanatory potential for the outcome in all models, with income having the greater power to determine household deaths, especially in the models for Immediate Urban Associated Regions (standardized coefficient of -0.616) and Intermediate Urban Associated Regions (standardized coefficient of -0.618). It was concluded that there was a context effect on the random fluctuation of the socio-economic indicators related to social determinants of health. This effect was explained by the characteristics of the territorial divisions and of the individuals who live or work there. Context effects were better identified in the areas with smaller dimensions, which are more favourable for explaining phenomena related to social determinants of health, especially in studies of societies marked by social inequalities. Composition effects were better identified in the Regions of Urban Articulation, shaped through mechanisms similar to the phenomenon under study.
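For illustration, a minimal sketch of the individual-level Poisson regression step described above; the data frame and column names are assumptions for illustration, not the census microdata. The multilevel version would add random intercepts for weighting areas and cities.

```python
# Simulated household-level Poisson regression; exponentiated coefficients
# are interpretable as prevalence/rate ratios (cf. the reported 1.33 for income).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "death": rng.poisson(0.05, n),            # households with a death (simulated)
    "black_resident": rng.integers(0, 2, n),  # at least one Black resident
    "low_income": rng.integers(0, 2, n),      # mean personal income below R$ 655.00
    "illiterate": rng.integers(0, 2, n),      # at least one illiterate resident
})
fit = smf.glm("death ~ black_resident + low_income + illiterate",
              data=df, family=sm.families.Poisson()).fit()
print(np.exp(fit.params))                     # prevalence ratios
```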
Abstract:
This study subdivides the Weddell Sea, Antarctica, into seafloor regions using multivariate statistical methods. These regions serve as categories for comparing, contrasting and quantifying biogeochemical processes and biodiversity between ocean regions, both geographically and for regions undergoing development within the scope of global change. The division obtained is characterized by the dominating components and interpreted in terms of the prevailing environmental conditions. The analysis uses 28 environmental variables for the sea surface, 25 variables for the seabed and 9 variables for the analysis between surface and bottom variables. The data were taken during the years 1983-2013, and some data were interpolated. The statistical errors of several interpolation methods (e.g. IDW, indicator, ordinary and co-kriging) with varying settings were compared to identify the most reasonable method. The multivariate mathematical procedures used are regionalized classification via k-means cluster analysis, canonical-correlation analysis and multidimensional scaling. Canonical-correlation analysis identifies the influencing factors in the different parts of the study area. Several methods for identifying the optimum number of clusters were tested. For the seabed, 8 and 12 clusters were identified as reasonable numbers for clustering the Weddell Sea; for the sea surface, 8 and 13; and for the top/bottom analysis, 8 and 3, respectively. Additionally, the results for 20 clusters are presented for the three alternatives, offering the first small-scale environmental regionalization of the Weddell Sea. The 12-cluster results in particular identify marine-influenced regions that can be clearly separated from those determined by the geological catchment area and those dominated by river discharge.
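For illustration, a minimal sketch of the regionalized classification step: k-means on standardized environmental variables, with silhouette scores as one possible criterion for comparing candidate cluster counts (the study settles on 8 and 12 for the seabed). The random matrix below stands in for the real 25-variable seabed data set.

```python
# Simulated k-means regionalization with silhouette-based comparison of k.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)
X = StandardScaler().fit_transform(rng.normal(size=(1000, 25)))

for k in (8, 12, 20):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
```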
Abstract:
This dissertation investigates, based on Post-Keynesian theory and its concept of a monetary economy of production, the exchange rate behavior of the Brazilian real in the presence of the Brazilian Central Bank's interventions by means of so-called swap transactions over 2002-2015. Initially, the work analyzes the essential properties of an open monetary economy of production and, thereafter, presents the basic propositions of the Post-Keynesian view of exchange rate determination, highlighting the properties of foreign exchange markets and the peculiarities of Brazil's position in the international monetary and financial system. The research thereby accounts for the various segments of the Brazilian foreign exchange market. To accomplish its purpose, we first review the Post-Keynesian literature on the topic. Then, we undertake empirical examinations of exchange rate determination using two statistical methods. On the one hand, to measure the volatility of the exchange rate, we estimate autoregressive conditional heteroskedasticity (ARCH) and generalized autoregressive conditional heteroskedasticity (GARCH) models. On the other hand, to measure the variance of the exchange rate in relation to real and financial variables and the swaps, we estimate a vector autoregression (VAR) model. Both experiments are performed for the nominal and real effective exchange rates. The results show that the swaps respond to exchange rate movements, trying to offset its volatility. This reveals that the exchange rate is, at least to a certain degree, sensitive to the swap transactions conducted by the Central Bank. In addition, the real effective exchange rate responds more to the swap auctions than the nominal rate does.
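For illustration, a hedged sketch of the two estimations the abstract names, run on simulated series: a GARCH(1,1) for exchange-rate volatility (via the arch package) and a VAR relating the exchange rate to a stand-in swaps series (via statsmodels). Series names and the lag order are illustrative assumptions.

```python
# Simulated GARCH(1,1) and VAR estimations in the spirit of the dissertation.
import numpy as np
import pandas as pd
from arch import arch_model
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
returns = pd.Series(rng.normal(0, 1, 1500))   # daily BRL returns (simulated)
garch = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
print(garch.params)

data = pd.DataFrame({"fx": returns.cumsum(),              # exchange-rate level
                     "swaps": rng.normal(0, 1, 1500).cumsum()})  # swap stock
var = VAR(data).fit(maxlags=4, ic="aic")
print(var.summary())
```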
Abstract:
We would like to thank all NHS consultant colleagues at Aberdeen Royal Infirmary for their help with the prompt recruitment of these patients (Dr M Metcalfe, Dr AD Stewart, Dr A Hannah, Dr A Noman, Dr P Broadhurst, Dr D Hogg, Dr D Garg) and Dr Gordon Prescott for help and advice with the statistical methods.
Abstract:
We examined facilitators of and barriers to adoption of genomic services for colorectal care, one of the first genomic medicine applications, within the Veterans Health Administration to shed light on areas for practice change. We conducted semi-structured interviews with 58 clinicians to understand use of the following genomic services for colorectal care: family health history documentation, molecular and genetic testing, and genetic counseling. Data collection and analysis were informed by two conceptual frameworks, the Greenhalgh Diffusion of Innovation and the Andersen Behavioral Model, to allow for concurrent examination of both access and innovation factors. Specialists were more likely than primary care clinicians to obtain family history to investigate hereditary colorectal cancer (CRC), but with limited detail; clinicians suggested templates to facilitate retrieval and documentation of family history according to guidelines. Clinicians identified the advantage of molecular tumor analysis prior to genetic testing, but tumor testing was infrequently used due to perceived low disease burden. Support from genetic counselors was regarded as facilitative for considering a hereditary basis of a CRC diagnosis, but there was variability in awareness of and access to this expertise. Our data suggest the need for tools and policies to establish and disseminate well-defined processes for accessing services and adhering to guidelines.
Abstract:
The dissertation consists of three chapters related to the low-price guarantee marketing strategy and energy efficiency analysis. A low-price guarantee is a marketing strategy in which firms promise to charge consumers the lowest price among their competitors. Chapter 1 addresses the research question "Does a Low-Price Guarantee Induce Lower Prices?" by looking into the retail gasoline industry in Quebec, where a major branded firm started a low-price guarantee in 1996. Chapter 2 conducts a consumer welfare analysis of low-price guarantees to derive policy implications and offers a new explanation of firms' incentives to adopt a low-price guarantee. Chapter 3 develops energy performance indicators (EPIs) to measure the energy efficiency of manufacturing plants in the pulp, paper and paperboard industry.
Chapter 1 revisits the traditional view that a low-price guarantee results in higher prices by facilitating collusion. Using accurate market definitions and station-level data from the retail gasoline industry in Quebec, I conduct a descriptive analysis based on stations and price zones to compare price and sales movements before and after the guarantee was adopted. I find that, contrary to the traditional view, the stores that offered the guarantee significantly decreased their prices and increased their sales. I also build a difference-in-difference model and quantify the decrease in the posted price of the stores that offered the guarantee at 0.7 cents per liter. While this change is significant, I do not find the response in competitors' prices to be significant. The sales of the stores that offered the guarantee increased significantly while the competitors' sales decreased significantly; however, the significance vanishes if I use station-clustered standard errors. Comparing my observations with the predictions of different theories of modeling low-price guarantees, I conclude that the empirical evidence supports the view that the low-price guarantee is a simple commitment device that induces lower prices.
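For illustration, a hedged sketch of a difference-in-difference specification with station-clustered standard errors, in the spirit of the model described above; the panel below is simulated, not the Quebec station-level data.

```python
# Simulated station-level DiD with clustered standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
stations, periods = 200, 24
df = pd.DataFrame({
    "station": np.repeat(np.arange(stations), periods),
    "treated": np.repeat(rng.integers(0, 2, stations), periods),
    "post": np.tile((np.arange(periods) >= 12).astype(int), stations),
})
df["price"] = (100 - 0.7 * df.treated * df.post       # true effect: -0.7 c/L
               + rng.normal(0, 1, len(df)))

did = smf.ols("price ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["station"]})
print(did.params["treated:post"])   # recovers roughly the -0.7 cents/liter drop
```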
Chapter 2 conducts a consumer welfare analysis of low-price guarantees to address antitrust concerns and potential regulation by the government, and explains firms' potential incentives to adopt a low-price guarantee. Using station-level data from the retail gasoline industry in Quebec, I estimate consumers' demand for gasoline with a structural model of spatial competition that incorporates the low-price guarantee as a commitment device, which allows firms to pre-commit to charging the lowest price among their competitors. The counterfactual analysis under a Bertrand competition setting shows that the stores that offered the guarantee attracted many more consumers and decreased their posted price by 0.6 cents per liter. Although the matching stores suffered a decrease in profits from gasoline sales, they are incentivized to adopt the low-price guarantee to attract more consumers to visit the store, likely increasing profits at attached convenience stores. Firms have strong incentives to adopt a low-price guarantee on the product their consumers are most price-sensitive about, while earning a profit from the products not covered by the guarantee. I estimate that consumers earn about 0.3% more surplus when the low-price guarantee is in place, which suggests that the authorities need not be concerned about, or regulate, low-price guarantees. In Appendix B, I also propose an empirical model to examine how low-price guarantees would change consumer search behavior and whether consumer search plays an important role in estimating consumer surplus accurately.
Chapter 3, joint with Gale Boyd, describes work with the pulp, paper, and paperboard (PP&PB) industry to provide a plant-level indicator of energy efficiency for facilities that produce various types of paper products in the United States. Organizations that implement strategic energy management programs undertake a set of activities that, if carried out properly, have the potential to deliver sustained energy savings. Energy performance benchmarking is a key activity of strategic energy management and one way to enable companies to set energy efficiency targets for manufacturing facilities. The opportunity to assess plant energy performance through a comparison with similar plants in its industry is a highly desirable and strategic method of benchmarking for industrial energy managers. However, access to energy performance data for conducting industry benchmarking is usually unavailable to most industrial energy managers. The U.S. Environmental Protection Agency (EPA), through its ENERGY STAR program, seeks to overcome this barrier through the development of manufacturing sector-based plant energy performance indicators (EPIs) that encourage U.S. industries to use energy more efficiently. In the development of the energy performance indicator tools, consideration is given to the role that performance-based indicators play in motivating change; the steps necessary for indicator development, from interacting with an industry and securing adequate data to the actual application and use of an indicator when complete; and how indicators are employed in EPA's efforts to encourage industries to voluntarily improve their use of energy. The chapter describes the data and statistical methods used to construct the EPI for plants within selected segments of the pulp, paper, and paperboard industry: specifically, pulp mills and integrated paper & paperboard mills. The individual equations are presented, as are instructions for using those equations as implemented in an associated Microsoft Excel-based spreadsheet tool.
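For illustration, a simplified sketch of how a plant-level energy performance indicator can be scored: regress energy use on output drivers, then rank a plant's residual against the industry distribution. The published EPIs rest on more elaborate statistical models than this; the variables and data below are simulated placeholders.

```python
# Simplified EPI-style scoring: residual-based percentile rank of a plant.
import numpy as np
import statsmodels.api as sm
from scipy.stats import percentileofscore

rng = np.random.default_rng(6)
tons_pulp = rng.uniform(50, 500, 300)                # annual output (simulated)
energy = 2.0 * tons_pulp + rng.normal(0, 40, 300)    # site energy use (simulated)

X = sm.add_constant(tons_pulp)
resid = sm.OLS(energy, X).fit().resid
# Lower-than-expected energy use (negative residual) earns a higher score.
plant_score = 100 - percentileofscore(resid, resid[0])
print(round(plant_score, 1))
```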
Abstract:
This work includes the study of a large ceramic assemblage from the Copper Age and Bronze Age archaeological site of Castillejo del Bonete. The sample was recovered from different areas of the settlement during the 2012 excavation campaign. The aim of the research was to better understand the form-function relationship of the vessels and their manufacturing process, as well as the production model and the possible symbolic manifestations present in the repertoire analysed. The methodology used to meet these objectives was based on data collection, considering a series of morphological and technological variables, and on processing the data with simple statistical techniques.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The current anode quality control strategy is inadequate for detecting faulty anodes before they are set in the electrolysis cells. Previous work modelled the anode manufacturing process in order to predict anode properties directly after baking, using multivariate statistical methods. The anode coring strategy used at the partner plant means that this model can only be used to predict the properties of anodes baked at the hottest and coldest positions of the baking furnace. The present work proposes a strategy for taking into account the thermal history of anodes baked at any position, making it possible to predict their properties. It is shown that by combining binary variables defining the pit and the baking position with the routine data measured on the baking furnace, the temperature profiles of anodes baked at different positions can be predicted. These data were also included in the model for predicting anode properties. The predictions were validated by additional coring, and the model's performance is conclusive for apparent and real density, compressive strength, air reactivity and Lc, regardless of the baking position.
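For illustration, a hedged sketch of the multivariate prediction step: a PLS regression from baking-furnace process variables (plus binary pit and baking-position indicators) to baked-anode properties. PLS is assumed here as a representative multivariate method; the dimensions and data are illustrative, not the partner plant's.

```python
# Simulated PLS regression from furnace variables to anode properties.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
X = np.hstack([rng.normal(size=(120, 30)),        # routine furnace measurements
               rng.integers(0, 2, (120, 8))])     # pit / baking-position dummies
Y = rng.normal(size=(120, 5))                     # e.g. density, strength, Lc, reactivity

pls = PLSRegression(n_components=4).fit(X, Y)
print(pls.score(X, Y))                            # R^2 of the latent-variable fit
```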
Abstract:
Safe medication administration requires nurses to have a sound foundation of medication competence, and the task of nursing education is to enable the development of this competence. International studies have shown, however, that the extent, content and implementation of medication education vary. Previous studies have also reported deficiencies in the medication competence of both nurses and nursing students. To develop education and medication competence, a versatile evaluation of medication education and of nursing students' medication competence, and an examination of the factors explaining that competence, are needed. The purpose of this study was to i) evaluate medication education in Finnish nursing education, ii) evaluate nursing students' medication competence, and iii) identify the factors associated with a nursing student's medication competence. The study was carried out in three phases. In the first phase, two integrative literature reviews were used to define the nurse's medication competence under study and the previously identified factors associated with a nursing student's medication competence. In the second phase, a national survey on medication education was conducted among the heads of nursing degree programmes (n=22) and nurse educators (n=136). In the third phase, the medication competence of nursing students at the beginning (n=328) and at the end (n=338) of their studies was evaluated and the factors associated with that competence were identified. The data were analysed mainly with statistical methods. Based on the results, the extent of medication education varied between polytechnics, but the content of the education was versatile. More attention should be paid to the theoretical basis of medication care, to self-medication, and to content areas related to patient education in medication. Students' medication competence was evaluated regularly in all polytechnics. In the study, nursing students' medication competence was assessed with a knowledge test, medication calculation tasks, and the resolution of short patient cases. The factors associated with medication competence were examined from three perspectives: 1) individual factors, 2) factors related to the clinical learning environment, and 3) factors related to the polytechnic. In the knowledge test assessing theoretical medication competence, students answered on average 72 percent of the questions completely correctly; 74% of the medication calculations were completely correct, and in the patient cases 57% of students chose the best possible course of action. Based on the results, individual factors explained nursing students' competence the most. The factors associated with medication competence differed between the beginning and the end of studies. At the beginning of the studies, a student's previous academic success was associated with medication competence, whereas at the end of the studies it was associated with the student's capacity for self-directed learning and with study motivation. In conclusion, the results of the study are in line with previous research. The extent of medication education varies at the curriculum level, but precise evaluation is difficult because teaching content is integrated across the curriculum. Nursing students' medication competence was slightly better than in previous studies, but deficiencies remain.
Developing medication education and competence requires national and international research and development collaboration. The results of this study support the research and development of medication education and medication competence.
Abstract:
The main aim of this study was to evaluate the impact of the urban pollution plume from the city of Manaus, through emissions from mobile and stationary sources, on atmospheric pollutant concentrations in the Amazon region, using the Weather Research and Forecasting with Chemistry (WRF-Chem) model. The air pollutants analyzed were CO, NOx, SO2, O3, PM2.5, PM10 and VOCs. The model simulations were configured with a grid spacing of 3 km and 190 x 136 horizontal grid points, centered on the city of Manaus, for the period of 17-18 March 2014. The anthropogenic emission inventories were gathered from mobile sources, for which the emissions of light- and heavy-duty vehicle classes were estimated. In addition, the stationary sources considered the thermal power plants, by the type of energy source used in the region, as well as the emissions from the refinery located in Manaus. Various scenarios were defined with numerical experiments that considered only emissions from biogenic, mobile and stationary sources, and fuel replacement at the thermal power plants, along with a future scenario consisting of twice the anthropogenic emissions. A qualitative assessment of the simulation with the base scenario, which represents the conditions of the region in its current state, was also carried out; several statistical methods were used to compare the results for air pollutants and meteorological fields with observed ground-based data at various points in the study grid. The qualitative analysis showed that the model represents the analyzed variables satisfactorily from the point of view of the adopted parameters. Regarding the simulations defined from the base scenarios, the numerical experiments indicate relevant results: the stationary-sources scenario, in which the thermal power plants are predominant, resulted in the highest concentrations for all air pollutants evaluated except carbon monoxide, when compared to the vehicle emissions scenario; the replacement of the energy matrix of the current thermal power plants with natural gas showed significant reductions in the pollutants analyzed, for instance a 63% reduction of NOx in the contribution to the average concentration in the study grid; and a significant increase in the concentrations of chemical species was observed in the futuristic scenario, reaching up to an 81% increase in peak SO2 concentrations in the study area. The spatial distributions of the scenarios showed that the air pollution plume from Manaus travels predominantly west and southwest, where it can reach hundreds of kilometers into areas dominated by original land cover.
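For illustration, a minimal sketch of the kind of statistical model-observation comparison described, computing standard evaluation metrics (mean bias, RMSE, Pearson r); the arrays are simulated stand-ins for WRF-Chem output and ground-station data.

```python
# Simulated model-evaluation metrics for one pollutant at one station.
import numpy as np

rng = np.random.default_rng(8)
obs = rng.gamma(2.0, 10.0, 48)           # hourly observations (simulated)
sim = obs + rng.normal(2.0, 5.0, 48)     # model output with a small positive bias

mean_bias = np.mean(sim - obs)
rmse = np.sqrt(np.mean((sim - obs) ** 2))
r = np.corrcoef(sim, obs)[0, 1]
print(f"MB={mean_bias:.2f}  RMSE={rmse:.2f}  r={r:.2f}")
```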
Abstract:
The thesis is an investigation of the principle of least effort (Zipf 1949 [1972]). The principle is simple (all effort should be least) and universal (it governs the totality of human behavior). Since the principle is also functional, the thesis adopts a functional theory of language as its theoretical framework, i.e. Natural Linguistics. The explanatory system of Natural Linguistics posits that higher principles govern preferences, which, in turn, manifest themselves as concrete, specific processes in a given language. Therefore, the thesis' aim is to investigate the principle of least effort on the basis of external evidence from English. The investigation falls into the three following strands: the investigation of the principle itself, the investigation of its application in articulatory effort and the investigation of its application in phonological processes. The structure of the thesis reflects the division of its broad aims. The first part of the thesis presents its theoretical background (Chapter One and Chapter Two), the second part deals with the application of least effort in articulatory effort (Chapter Three and Chapter Four), whereas the third part discusses the principle of least effort in phonological processes (Chapter Five and Chapter Six). Chapter One serves as an introduction, examining various aspects of the principle of least effort such as its history, literature, operation and motivation. It overviews various names which denote least effort, explains the origins of the principle and reviews the literature devoted to the principle of least effort in chronological order. The chapter also discusses the nature and operation of the principle, providing numerous examples of the principle at work. It emphasizes the universal character of the principle in both the linguistic field (low-level phonetic processes and language universals) and non-linguistic ones (physics, biology, psychology and the cognitive sciences), showing that the principle governs human behavior and choices. Chapter Two provides the theoretical background of the thesis in terms of its theoretical framework and discusses the terms used in the thesis' title, i.e. hierarchy and preference. It justifies the selection of Natural Linguistics as the thesis' theoretical framework by outlining its major assumptions and demonstrating its explanatory power. As far as the concepts of hierarchy and preference are concerned, the chapter provides their definitions and reviews their various understandings via decision theories and linguistic preference-based theories. Since the thesis investigates the principle of least effort in language and speech, Chapter Three considers the articulatory aspect of effort. It reviews the notion of easy and difficult sounds and discusses the concept of articulatory effort, reviewing its literature and various understandings in chronological order. The chapter also presents the concept of articulatory gestures within the framework of Articulatory Phonology. The thesis' aim is to investigate the principle of least effort on the basis of external evidence; Chapters Four and Six therefore provide such evidence: three experiments and text-message studies (Chapter Four) and phonological processes in English (Chapter Six). Chapter Four contains evidence for the principle of least effort in articulation on the basis of the experiments. It describes the experiments in terms of their predictions and methodology.
In particular, it discusses the adopted measure of effort established by means of the effort parameters as well as their status. The statistical methods of the experiments are also clarified. The chapter reports on the results of the experiments, presenting them in a graphical way and discusses their relation to the tested predictions. Chapter Four establishes a hierarchy of speakers’ preferences with reference to articulatory effort (Figures 30, 31). The thesis investigates the principle of least effort in phonological processes, thus Chapter Five is devoted to the discussion of phonological processes in Natural Phonology. The chapter explains the general nature and motivation of processes as well as the development of processes in child language. It also discusses the organization of processes in terms of their typology as well as the order in which processes apply. The chapter characterizes the semantic properties of processes and overviews Luschützky’s (1997) contribution to NP with respect to processes in terms of their typology and incorporation of articulatory gestures in the concept of a process. Chapter Six investigates phonological processes. In particular, it identifies the issues of lenition/fortition definition and process typology by presenting the current approaches to process definitions and their typology. Since the chapter concludes that no coherent definition of lenition/fortition exists, it develops alternative lenition/fortition definitions. The chapter also revises the typology of phonological processes under effort management, which is an extended version of the principle of least effort. Chapter Seven concludes the thesis with a list of the concepts discussed in the thesis, enumerates the proposals made by the thesis in discussing the concepts and presents some questions for future research which have emerged in the course of investigation. The chapter also specifies the extent to which the investigation of the principle of least effort is a meaningful contribution to phonology.
Abstract:
Currently, decision analysis in production processes involves a level of detail in which the problem is subdivided so that it can be analyzed from different and conflicting points of view. Multi-criteria analysis has been an important tool that supports assertive decisions related to the production process, and this process of analysis has been incorporated into various areas of production engineering by applying multi-criteria methods to the problems of the productive sector. This research presents a statistical study on the use of multi-criteria methods in the areas of Production Engineering, in which 935 papers were filtered from 20,663 publications in scientific journals, considering publication quality based on the impact factor published by the JCR between 2010 and 2015. In this work, descriptive statistics are used to summarize information and to analyze the volume of method applications. Relevant results were found with respect to the number of advanced methods being applied and the areas of Production Engineering in which they are applied. This information may support researchers when preparing a multi-criteria application, making it possible to check on which issues, and how often, other authors have used multi-criteria methods.