52 results for 350107 Other Accounting
Abstract:
The foundation for the formation of the knowledge and conception of gender identity among the transgendered. The purpose of this study is to increase the understanding of the experiential formation of the knowledge and conception of one's gender and the foundation of that experience. The study is based on a qualitative method and a phenomenological approach. The research material consists of Herculine Barbin's Herculine Barbin, Christine Jorgensen's Christine Jorgensen: A Personal Autobiography, Kate Bornstein's Gender Outlaw and Deirdre McCloskey's Crossing: A Memoir. The theoretical frame of reference for the study is Michel Henry's phenomenology of the body. The most important relations, regarding the formation of the knowledge and conception of gender identity, at which the sensing of the body is directed are the human being's own subjective, organic and objective bodily form, other people, and representatives of institutions. The concept of resistance reveals that gender division and the stereotypes and accountability related to it have a dual character in culture. As a resistance they contain the potential for triggering reflections about one's own gender. As an instrument they may function as a means of exercising power and, as such, of monitoring gender normality. According to the research material, the sources of the knowledge and conception of gender identity among the transgendered are literature, medical articles and books, the internet, clerical and medical professionals, friends and relatives, and the peer group, that is, other transgendered people. The transgendered are not only users of gender knowledge; many of them are also active producers and contributors of gender knowledge, especially of knowledge about transgenderness. The problem is that this knowledge is unevenly distributed in society. The users of gender knowledge are mainly the transgendered, researchers of different disciplines specialized in gender issues, and medical and healthcare professionals specialized in gender adjustments. Therefore not everyone has sufficient knowledge to support their own or someone else's life as a gendered being in society, or the ability to achieve gender autonomy. The quality of this knowledge is also rather narrow from the point of view of gender multiplicity. The feeling of strangeness and the resulting experience of estrangement have, like stereotypes, a dual character in culture. They may be the reason for people's social disadvantage or exclusion, but the experiences may just as well be a resource for people's gender maturity and culture. As a cultural resource in gender issues this would mean innovativeness in creating, upholding and changing cultural gender divisions, stereotypes and accounting customs. A transgendered person may then become a liminal figure who aspires to change the limits related to resistances in society. Transgenderness is not only a medical issue but, first and foremost, an issue bearing upon the human situation as a whole, or, in other words, related to the art of life. The subject of gender adjustment treatments is not only gender itself but the art of life as a gendered being. Transgenderness would then require multidisciplinary co-operation.
Abstract:
Agriculture is an economic activity that relies heavily on the availability of natural resources. Through its role in food production, agriculture is a major factor affecting public welfare and health, and its indirect contribution to gross domestic product and employment is significant. Agriculture also contributes to numerous ecosystem services through the management of rural areas. However, the environmental impact of agriculture is considerable and reaches far beyond the agroecosystems. The questions related to farming for food production are thus manifold and of great public concern. Improving the environmental performance of agriculture and the sustainability of food production, i.e. "sustainabilizing" food production, calls for the application of a wide range of expert knowledge. This study falls within the field of agro-ecology, with interfaces to food systems and sustainability research, and exploits methods typical of industrial ecology. Research in these fields extends from multidisciplinary to interdisciplinary and transdisciplinary, a holistic approach being the key tenet. The methods of industrial ecology have been applied extensively to explore the interaction between human economic activity and resource use. Specifically, the material flow approach (MFA) has established its position through the application of systematic environmental and economic accounting statistics. However, very few studies have applied MFA specifically to agriculture; in this thesis the MFA approach was used in such a context in Finland. The focus of this study is the ecological sustainability of primary production. The aim was to explore the possibilities of assessing the ecological sustainability of agriculture using two different approaches. In the first approach, MFA methods from industrial ecology were applied to agriculture, whereas the second is based on food consumption scenarios. The two approaches were used in order to capture some of the impacts on the environment of dietary changes and of changes in production mode. The methods were applied at levels ranging from the national to the sector and local levels. Through the supply-demand approach, the viewpoint shifted from that of food production to that of food consumption. The main data sources were official statistics complemented with published research results and expert appraisals. The MFA approach was used to define the system boundaries, to quantify the material flows and to construct eco-efficiency indicators for agriculture. The results were further elaborated into an input-output model that was used to analyse the food flux in Finland and to determine its relationship to the economy-wide physical and monetary flows. The methods based on food consumption scenarios were applied at the regional and local levels for assessing the feasibility and environmental impacts of relocalising food production. The approach was also used for the quantification and source allocation of greenhouse gas (GHG) emissions of primary production. The GHG assessment thus provided a means of cross-checking the results obtained with the two different approaches. MFA data, as such or expressed as eco-efficiency indicators, are useful in describing overall development. However, the data are not sufficiently detailed for identifying the hot spots of environmental sustainability.
Eco-efficiency indicators should not be used bluntly in environmental assessment: the carrying capacity of nature, the potential exhaustion of non-renewable natural resources and the possible rebound effect also need to be accounted for when striving towards improved eco-efficiency. The input-output model is suitable for nationwide economic analyses and it shows the distribution of monetary and material flows among the various sectors. Environmental impact can be captured only at a very general level, in terms of total material requirement, gaseous emissions, energy consumption and agricultural land use. Improving the environmental performance of food production requires more detailed and more local information. The approach based on food consumption scenarios can be applied at regional or local scales. Based on various diet options, the method assesses the feasibility of re-localising food production and the environmental impacts of such re-localisation in terms of nutrient balances, gaseous emissions, agricultural energy consumption, agricultural land use and diversity of crop cultivation. The approach is applicable anywhere, but the calculation parameters need to be adjusted to comply with the specific circumstances. The food consumption scenario approach thus pays attention to the variability of production circumstances, and may provide environmental information that is locally relevant. The approaches based on the input-output model and on food consumption scenarios represent small steps towards more holistic systemic thinking. However, neither one alone nor the two together provide sufficient information for sustainabilizing food production. The environmental performance of food production should be assessed together with the other criteria of sustainable food provisioning. This requires evaluation and integration of research results from many different disciplines in the context of a specified geographic area. A foodshed area that comprises both the rural hinterlands of food production and the population centres of food consumption is suggested to represent a suitable areal extent for such research. Finding a balance between the various aspects of sustainability is a matter of optimal trade-offs. The balance cannot be universally determined; the assessment methods and the actual measures depend on what the bottlenecks of sustainability are in the area concerned, and these have to be agreed upon among the actors in the area.
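The abstract's references to a Leontief-type input-output model and to material flow accounting can be made concrete with a minimal numerical sketch. The Python fragment below is purely illustrative: the three sectors, technical coefficients, final demand and material intensities are hypothetical and are not drawn from the thesis or from Finnish statistics.

```python
import numpy as np

# Hypothetical 3-sector economy: agriculture, food industry, rest of the economy.
# A[i, j] = input from sector i needed per unit of output of sector j (illustrative only).
A = np.array([[0.10, 0.20, 0.01],
              [0.05, 0.10, 0.03],
              [0.15, 0.25, 0.20]])
y = np.array([2.0, 10.0, 60.0])   # final demand by sector (e.g. billion EUR)
m = np.array([5.0, 1.2, 0.3])     # direct material use per unit of output (e.g. Mt per billion EUR)

L = np.linalg.inv(np.eye(3) - A)  # Leontief inverse
x = L @ y                         # gross output required to satisfy final demand
embodied = m @ L                  # total (direct + indirect) material intensity of final demand
material_by_sector = embodied * y # material flow attributed to each sector's final demand

print("gross output:", x)
print("material requirement attributed to final demand:", material_by_sector)
print("economy-wide total material requirement:", material_by_sector.sum())
```

In a calculation of this kind the economy-wide material requirement can be allocated to the final demand for food, which is, in spirit, the kind of allocation the abstract describes for the Finnish food flux at a much finer sectoral resolution.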
Abstract:
The study examines the personnel training and research activities carried out by the Organization and Methods Division of the Ministry of Finance and how they became part and parcel of the state administration in 1943-1971. The study combines institutional and ideological historical research on the recent history of adult education, using a constructionist approach. The material salient to the study comes from the files of the Organization and Methods Division in the National Archives, parliamentary documents, committee reports, and magazines. The concentrated training and research activities arranged by the Organization and Methods Division became part and parcel of the state administration in the midst of controversial challenges and opportunities. They served to solve social problems that beset the state administration, contextual challenges besetting rationalization measures, and organizational challenges. The activities were also affected by a dependence on decision-makers, administrative units, and civil servants' organizations, by different views on rationalization and the holistic nature of reforms, as well as by the formal theories that served as resources. The Division chose long-term projects which extended onto the turf of the political decision-makers and the administrative units, and which were intended to reform the structures of the state administration and to rationalize the practices of the administrative units. The crucial questions emerged as opposite pairs (a constitutional state vs. the ideology of an administratively governed state, a system of national boards vs. a system of government through ministries, efficiency of work vs. pleasantness of work, centralized vs. decentralized rationalization activities) which were not solvable problems but impossible questions with no ultimate answers. The aim and intent of the rationalization of the state administration (the reform of the central, provincial, and local governments) was to facilitate integrated management and to increase the amount of work performed by approaching management procedures scientifically and by clarifying administrative instances and their responsibilities in regard to each other. The means resorted to were organizational studies and committee work. In the rationalization of office work and finance control, the idea was to effect savings in administrative costs and to pare down those costs, as well as to rationalize and enhance those functions by developing the institution of work study practitioners in order to coordinate employer and employee relationships and benefits (the training of work study practitioners, work study, and a two-tier work study practitioner organization). A major part of the training meant teaching and implementing leadership skills in practice, which, in turn, meant that the learning environment was the genuine work community and efforts to change it. In office rationalization, the solution for regulating the relations between the employer and the employees was the co-existence of technical and biological rationalization, human resource administration, and the accounting and planning systems at the turn of the 1960s and 1970s. The former were based on the schools of scientific management and human relations, the latter on systems thinking, which was a combination of the former two.
In the rationalization of the state administration, efforts were made to find solutions in administrative science for stabilizing management ideologies and for arranging the relationships of administrative systems - among other things, in the Hoover Committee and Simon's decision-making theory and, in the 1960s, in systems thinking. Despite the development-related vocabulary, the practical work was advanced rationalization. It was said that the practical activities of both the state administration and the administrative units depended on professional managers who saw to production results and human relations. The pedagogic experts hired to develop training came up with a training system based on the training-technological model, in which training was made a function of its own. The State Training Center was established, and the training office of the Organization and Methods Division became the leader and coordinator of personnel training.
Abstract:
Detecting Earnings Management Using Neural Networks. Trying to balance between relevant and reliable accounting data, generally accepted accounting principles (GAAP) allow, to some extent, company management to use their judgment and to make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods have been suggested for detecting accrual-based earnings management. A majority of these methods are based on linear regression. The problem with using linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed. However, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression that can handle non-linear relationships is neural networks. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables. The discretionary accruals in the highest and lowest quartiles for these six variables are then compared. Third, a data set containing simulated earnings management is used. Both expense and revenue manipulation ranging between -5% and 5% of lagged total assets is simulated. Furthermore, two neural network-based models and two linear regression-based models are used with a data set containing financial statement data from 110 failed companies. Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
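To make the mechanics concrete, the sketch below fits both a linear Jones-type regression and a small feed-forward neural network to synthetic data and treats the residuals as discretionary accruals. It is only an illustration of the approach: the data-generating process, variable scaling, network size and the use of scikit-learn are assumptions made for this example, not the thesis's actual models or data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500

# Synthetic Jones-model regressors, all scaled by lagged total assets (illustrative only).
inv_assets = rng.uniform(1e-4, 1e-2, n)   # 1 / lagged total assets
d_rev      = rng.normal(0.05, 0.15, n)    # change in revenues
ppe        = rng.uniform(0.10, 0.80, n)   # gross property, plant and equipment
roa        = rng.normal(0.04, 0.10, n)    # return on assets (the augmented model's extra variable)

# Assume a mildly non-linear "true" accrual process plus noise, to mimic the
# non-linearity in company performance mentioned in the abstract.
total_accruals = 0.3 * d_rev - 0.1 * ppe + 0.5 * roa**2 + rng.normal(0.0, 0.03, n)

X = np.column_stack([inv_assets, d_rev, ppe, roa])

# Linear Jones-type model: discretionary accruals are the regression residuals.
lin = LinearRegression().fit(X, total_accruals)
da_linear = total_accruals - lin.predict(X)

# Feed-forward neural network trained by back-propagation as the non-linear alternative.
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, total_accruals)
da_nn = total_accruals - nn.predict(X)

print("std of discretionary accruals, linear model:", round(da_linear.std(), 4))
print("std of discretionary accruals, neural net:  ", round(da_nn.std(), 4))
```

On data generated this way the network can absorb the non-linear performance term that the linear specification cannot, which is the intuition behind comparing the two model families.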
Abstract:
As companies become more efficient with respect to their internal processes, they begin to shift their focus beyond their corporate boundaries. Thus, recent years have witnessed an increased interest by practitioners and researchers in interorganizational collaboration, which promises better firm performance through more effective supply chain management. It is no coincidence that this interest comes in parallel with recent advancements in Information and Communication Technologies, which offer many new collaboration possibilities for companies. However, collaboration, or any other type of supply chain integration effort, relies heavily on information sharing. Hence, this study focuses on information sharing, in particular on the factors that determine it and on its value. The empirical evidence from Finnish and Swedish companies suggests that uncertainty (both demand and environmental) and dependency in terms of switching costs and asset-specific investments are significant determinants of information sharing. The results also indicate that information sharing improves company performance regarding resource usage, output, and flexibility. However, companies share information more intensively at the operational than at the strategic level. The use of supply chain practices and technologies is substantial but varies across the two countries. This study sheds light on a common trend in supply chains today. Whereas the results confirm the value of information sharing, the contingent factors help to explain why the intensity of information shared across companies differs. In the future, competitive pressures and uncertainty are likely to intensify. Therefore, companies may want to continue with their integration efforts by focusing on the determinants discussed in this study. At the same time, however, the possibility of opportunistic behavior by the exchange partner cannot be disregarded.
Abstract:
The book examines new communication technologies and the various marketing methods that have emerged as a consequence of technological development. Marketing via the new communication channels has given rise to certain inconveniences for recipients. The inconveniences caused to recipients by marketing via the new technologies can be divided into three categories: 1) the message causes costs for the recipient, 2) the message obstructs the recipient, and 3) the message is perceived as intrusive and as an invasion of the recipient's privacy. In recent years there has been hectic legislative activity in many parts of the world. Individual states, national authorities and international organisations alike have drawn up rules on marketing via the new technologies. One of the core questions in the regulation is whether to adopt an "opt-in" or an "opt-out" system. An opt-in solution means that the marketer must obtain the recipient's consent in advance, whereas an opt-out solution means that sending marketing is permitted unless the recipient has objected to it. Under current legislation, marketing via the following three technologies falls under opt-in: automated calling systems, fax and e-mail. The main purpose of the thesis is to examine whether the current opt-in list is sufficiently comprehensive in view of the inconveniences caused by marketing via the new technologies, or whether it should be extended. The review of marketing via new technologies covers, among others, the following technologies: the Internet (web pages), pop-up windows, banner advertising, e-mail, mobile phones, fax, blogs, RSS, Instant Messaging and Internet telephony. Alongside the new technologies, certain "traditional" marketing methods have also been examined in order to determine whether they, too, should fall under an opt-in solution. The traditional marketing methods included in the thesis are door-to-door selling, telemarketing, TV and radio advertising, and addressed and unaddressed direct mail. Another central part of the thesis is the question of how the sanction system for infringements of the opt-in provisions should be designed. How should individual recipients be compensated for the costs the marketing has caused them? Is it time to introduce punitive damages in Finland? Is it high time that Finland also gets class actions? Can the disputes be resolved virtually through online courts or ODR?
Abstract:
Recently, the focus of real estate investment has expanded from the building-specific level to the aggregate portfolio level. The portfolio perspective requires investment analysis for real estate that is comparable with that of other asset classes, such as stocks and bonds. Thus, despite its distinctive features, such as heterogeneity, high unit value, illiquidity and the use of valuations to measure performance, real estate should not be considered in isolation. This means that techniques which are widely used for other asset classes can also be applied to real estate. An important part of investment strategies which support decisions on multi-asset portfolios is identifying the fundamentals of movements in property rents and returns, and predicting them on the basis of these fundamentals. The main objective of this thesis is to find the key drivers and the best methods for modelling and forecasting property rents and returns in markets which have experienced structural changes. The Finnish property market, a small European market with structural changes and limited property data, is used as a case study. The findings in the thesis show that it is possible to use modern econometric tools for modelling and forecasting property markets. The thesis consists of an introductory part and four essays. Essays 1 and 3 model Helsinki office rents and returns, and assess the suitability of alternative techniques for forecasting these series. Simple time series techniques are able to account for structural changes in the way markets operate, and thus provide the best forecasting tool. Theory-based econometric models, in particular error correction models, which are constrained by long-run information, are better for explaining past movements in rents and returns than for predicting their future movements. Essay 2 proceeds by examining the key drivers of rent movements for several property types in a number of Finnish property markets. The essay shows that commercial rents in local markets can be modelled using national macroeconomic variables and a panel approach. Finally, Essay 4 investigates whether forecasting models can be improved by accounting for asymmetric responses of office returns to the business cycle. The essay finds that the forecast performance of time series models can be improved by introducing asymmetries, and that the improvement is sufficient to justify the extra computational time and effort associated with the application of these techniques.
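For readers unfamiliar with the model class, a generic single-equation error correction specification of the kind referred to in Essays 1 and 3 can be written as below; the notation is illustrative and is not taken from the thesis.

```latex
\Delta r_t = \alpha \left( r_{t-1} - \beta_0 - \beta_1 x_{t-1} \right)
           + \sum_{i=1}^{p} \gamma_i \, \Delta r_{t-i}
           + \sum_{j=0}^{q} \delta_j \, \Delta x_{t-j}
           + \varepsilon_t
```

Here r_t is the (log) rent or return series, x_t a fundamental such as output or employment, the term in parentheses is the deviation from the long-run equilibrium, and the adjustment coefficient alpha (expected to be negative) measures how quickly that deviation is corrected, while the lagged differences capture the short-run dynamics.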
Abstract:
A growing body of empirical research examines the structure and effectiveness of corporate governance systems around the world. An important insight from this literature is that corporate governance mechanisms address the excessive use of managerial discretionary powers to extract private benefits by expropriating shareholder value. One possible way of expropriation is to reduce the quality of disclosed earnings by manipulating the financial statements. This lower quality of earnings should then be reflected in the firm's stock price according to the value relevance theorem. Hence, instead of testing the direct effect of corporate governance on the firm's market value, it is important to understand the causes of the lower quality of accounting earnings. This thesis contributes to the literature by increasing knowledge about the extent of earnings management, measured as the extent of discretionary accruals in total disclosed earnings, and its determinants across transitional European countries. The thesis comprises three essays of empirical analysis, of which the first two utilize data on Russian listed firms, whereas the third essay uses data from 10 European economies. More specifically, the first essay adds to existing research connecting earnings management to corporate governance. It tests the impact of the Russian corporate governance reforms of 2002 on the quality of disclosed earnings in all publicly listed firms. The essay provides empirical evidence that the desired impact of the reforms is not fully realised in Russia without proper enforcement. Instead, firm-level factors such as long-term capital investments and compliance with International Financial Reporting Standards (IFRS) determine the quality of the earnings. The results presented in the essay support the notion proposed by Leuz et al. (2003) that reforms aimed at bringing transparency do not produce the desired results in economies where investor protection is low and legal enforcement is weak. The second essay focuses on the relationship between internal control mechanisms, such as the types and levels of ownership, and the quality of disclosed earnings in Russia. The empirical analysis shows that controlling shareholders in Russia use their powers to manipulate reported performance in order to obtain private benefits of control. Comparatively, firms owned by the State have a significantly better quality of disclosed earnings than firms controlled by other owners such as oligarchs and foreign corporations. Interestingly, the market performance of firms controlled by either the State or oligarchs is better than that of widely held firms. The third essay provides useful evidence that both ownership structures and economic characteristics are important factors in determining the quality of disclosed earnings in three groups of European countries. The evidence suggests that ownership structure is a more important determinant in developed and transparent countries, while economic characteristics matter more in developing and transitional countries.
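As a reference for how a discretionary accrual measure of this kind is typically extracted, a standard Jones-type decomposition can be written as below; the notation is generic, and the essays' exact specifications may differ.

```latex
\frac{TA_{it}}{A_{it-1}}
  = \alpha_1 \frac{1}{A_{it-1}}
  + \alpha_2 \frac{\Delta REV_{it}}{A_{it-1}}
  + \alpha_3 \frac{PPE_{it}}{A_{it-1}}
  + \varepsilon_{it},
\qquad
DA_{it} = \hat{\varepsilon}_{it}
```

Here TA denotes total accruals, A lagged total assets, ΔREV the change in revenues and PPE gross property, plant and equipment; the fitted residual DA is interpreted as the discretionary, i.e. potentially managed, component of earnings.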
Abstract:
The study investigates whether there is an association between different combinations of emphasis on generic strategies (product differentiation and cost efficiency) and the perceived usefulness of management accounting techniques. Previous research has found that cost leadership is associated with traditional accounting techniques and product differentiation with a variety of modern management accounting approaches. The present study focuses on the possible existence of a strategy that mixes these generic strategies. The empirical results suggest that (a) there is no difference in attitudes towards the usefulness of traditional management accounting techniques between companies that adhere to a single strategy and those that adhere to a mixed strategy; (b) there is no difference in attitudes towards modern and traditional techniques between companies that adhere to a single strategy, whether this is product differentiation or cost efficiency; and (c) companies that favour a mixed strategy seem to have a more positive attitude towards modern techniques than companies adhering to a single strategy.
Abstract:
This book deals with liability for interference with contract, i.e. improper interference in commercial contractual relationships, under Finnish law. Interference with contract can be divided into at least four typical cases. The basic case, which forms the core of the problem of interference with contract, is called inducement of breach of contract. The question is whether liability arises when a person who is a stranger to a valid contract induces one of the contracting parties to commit a breach of contract. The second case, inducement of a lawful termination of the contract, means that the outsider induces the contracting party to give notice or otherwise end the contract in a manner consistent with the contract. In the third case, interference in pre-contractual relationships, the prospective contracting party is induced not to conclude the contract. In the fourth case, exploitation of a breach of contract already committed, the outsider exploits, for his own benefit, a breach of contract already committed by the contracting party. The question is analysed within the framework of the regulation of unfair competition (Section 1 of the Act on Unfair Business Practices, lagen om otillbörligt förfarande i näringsverksamhet) in light of the conflict between freedom of competition and respect for contracts. The arguments for and against liability are drawn, among other things, from the law-and-economics theory of efficient breach of contract, from fundamental rights and from foreign law. The study concludes that the outsider should be liable for inducement of breach of contract. This applies both to compensation under the Tort Liability Act (skadeståndslagen) and to injunctions (both injunction judgments and interim injunctions). The question of the invalidity of the subsequent contract is also discussed. In the three other typical cases, liability should not arise.
Abstract:
In this article, I propose to analyze narrative theory from an epistemological standpoint. To do so, I will draw upon both Genettian narratology and what I would call, following Shigeyuki Kuroda, “non-communicational” theories of fictional narrative. In spite of their very unequal popularity, I consider these theories as objective, or, in other words, as debatable and ripe for rational analyses; one can choose between them. The article is made up of three parts. The first part concerns the object of narrative theory, or the narrative as a constructed object, both in narratology (where narrative is likened to a narrative discourse) and in non-communicational narrative theories (where fictional narrative and discourse are mutually exclusive categories). The second part takes up the question of how the claims of these theories do or do not lend themselves to falsification. In particular, Gérard Genette’s claim that “every narrative is, explicitly or not, ‘in the first person’”, will be considered, through the lens of Ann Banfield’s theory of free indirect style. In the third part the reductionism of narrative theory will be dealt with. This leads to a reflection on the role of narrative theory in the analysis of fictional narratives.
Abstract:
The first-line medication for mild to moderate Alzheimer's disease (AD) is based on cholinesterase inhibitors, which prolong the effect of the neurotransmitter acetylcholine in cholinergic nerve synapses and thereby relieve the symptoms of the disease. Implications of cholinesterases' involvement in disease-modifying processes have increased interest in this research area. The drug discovery and development process is long and expensive: it takes on average 13.5 years and costs approximately 0.9 billion US dollars. Drug attrition in the clinical phases is common for several reasons, e.g., poor bioavailability of compounds leading to low efficacy, or toxic effects. Thus, improvements in the early drug discovery process are needed to create highly potent, non-toxic compounds with predicted drug-like properties. Nature has been a good source for the discovery of new medicines, accounting for around half of the new drugs approved for the market during the last three decades. These compounds are direct isolates from nature, their synthetic derivatives, or mimics of natural compounds. Synthetic chemistry is an alternative way to produce compounds for drug discovery purposes. Both sources have pros and cons. The screening of new bioactive compounds in vitro is based on assaying compound libraries against targets. The assay set-up has to be adapted and validated for each screen to produce high-quality data. Depending on the size of the library, miniaturization and automation are often required to reduce solvent and compound amounts and to speed up the process. In this contribution, natural extract, natural pure compound and synthetic compound libraries were assessed as sources of new bioactive compounds. The libraries were screened primarily for acetylcholinesterase inhibitory effect and secondarily for butyrylcholinesterase inhibitory effect. To be able to screen the libraries, two assays were evaluated as screening tools and adapted to be compatible with the special features of each library. The assays were validated to produce high-quality data. Cholinesterase inhibitors with various potencies and selectivities were found in the natural product and synthetic compound libraries, which indicates that the two sources complement each other. It is acknowledged that natural compounds differ structurally from compounds in synthetic compound libraries, which further supports the view that the sources are complementary, especially if a high diversity of structures is the criterion for selecting compounds for a library.