905 results for Spelling Harmonization
Abstract:
As time goes by, networks of permanent GNSS (Global Navigation Satellite System) stations are becoming an increasingly valuable support for satellite surveying techniques. They are at once an effective materialization of the reference system and a useful aid to topographic surveying and deformation-monitoring applications. Alongside the now classical static post-processing applications, real-time measurements are increasingly used and requested by professional users. In all cases the determination of precise coordinates for the permanent stations is very important, so much so that it was decided to carry it out with different processing environments. Bernese and Gamit (which share the differenced approach) and Gipsy (which uses the undifferenced approach) were compared. The use of three software packages made it essential to identify a common processing strategy capable of guaranteeing that the ancillary data and the physical parameters adopted would not become a source of divergence between the solutions obtained. The analysis of networks of national extent, or of local networks over long time spans, involves processing thousands if not tens of thousands of files; moreover, sometimes because of trivial errors, or in order to carry out scientific tests, it is often necessary to repeat the processing. Considerable resources were therefore invested in developing automated procedures aimed, on the one hand, at preparing the data archives and, on the other, at analysing the results and comparing them whenever more than one solution is available. These procedures were developed by processing the most significant datasets made available to DISTART (Dipartimento di Ingegneria delle Strutture, dei Trasporti, delle Acque, del Rilevamento del Territorio - Università di Bologna). It was thus possible, at the same time, to compute the positions of the permanent stations of several important local and national networks and to compare some of the most important scientific codes that perform this task. As regards the comparison between the different software packages, it was found that:
• the solutions obtained from Bernese and Gamit (the two differenced packages) are always in perfect agreement;
• the Gipsy solutions (undifferenced method) are almost always slightly more scattered than those of the other packages and sometimes show appreciable numerical differences from the other solutions, especially in the East coordinate; the differences are, however, limited to a few millimetres, and the lines describing the trends are in any case practically parallel to those of the other two codes;
• the aforementioned East bias between Gipsy and the differenced solutions is more evident for certain Antenna/Radome combinations and appears to be related to the way the different packages use absolute calibrations.
It must also be considered that Gipsy is considerably faster than the differenced codes and, above all, that with the undifferenced procedure the file of each station for each day is processed independently of the others, giving clearly greater flexibility of management: if an instrumental error is found at a single station, or if a station is added to or removed from the network, it is not necessary to recompute the entire network.
Along with the other networks it was possible to analyse the Rete Dinamica Nazionale (RDN), not only over the 28 days that produced its first definition, but also over four additional 28-day intervals, spaced six months apart and therefore covering a total time span of two years. It was thus possible to verify that the RDN can be used to align any Italian regional network to ITRF05 (International Terrestrial Reference Frame), despite the still limited time span. On the one hand, the ITRF velocities of the RDN stations were estimated (purely indicative and not official); on the other, a test alignment of a regional network to ITRF through the RDN was carried out, and it was verified that there are no appreciable differences with respect to alignment to ITRF through a suitable number of IGS/EUREF stations (International GNSS Service / European REference Frame, the Sub-Commission for Europe of the International Association of Geodesy).
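The comparison described above (practically parallel trends, a millimetre-level East bias between differenced and undifferenced solutions) boils down to fitting linear trends to daily coordinate series and differencing them. The following Python sketch illustrates the idea for a single station and a single component; the two-column file layout and the file names are assumptions made for illustration, not the formats actually produced by Bernese, Gamit or Gipsy, nor the DISTART procedures themselves.

```python
# Minimal sketch: compare two daily East-coordinate time series for one
# permanent station, e.g. a differenced (Bernese/Gamit) and an undifferenced
# (Gipsy) solution, fit linear trends and estimate the mean offset.
# Assumed input format: two columns (decimal year, East coordinate in mm).
import numpy as np

def load_series(path):
    """Load a time series as (epochs in decimal years, East coordinate in mm)."""
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1]

def fit_trend(t, east):
    """Least-squares linear trend: returns (velocity in mm/yr, intercept in mm)."""
    slope, intercept = np.polyfit(t, east, 1)
    return slope, intercept

if __name__ == "__main__":
    # Hypothetical file names for one station processed with two packages.
    t_diff, e_diff = load_series("station_bernese_east.txt")
    t_und, e_und = load_series("station_gipsy_east.txt")

    v_diff, _ = fit_trend(t_diff, e_diff)
    v_und, _ = fit_trend(t_und, e_und)

    # Interpolate the undifferenced series onto the differenced epochs and
    # look at the mean offset and scatter of the differences.
    e_und_on_diff = np.interp(t_diff, t_und, e_und)
    bias = np.mean(e_diff - e_und_on_diff)
    scatter = np.std(e_diff - e_und_on_diff)

    print(f"East velocity (differenced):   {v_diff:.2f} mm/yr")
    print(f"East velocity (undifferenced): {v_und:.2f} mm/yr")
    print(f"Mean East offset: {bias:.2f} mm, scatter: {scatter:.2f} mm")
```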
Abstract:
The main objective of the dissertation is to illustrate how the social and educational aspects (in close interaction with other multifunctional aspects of organic agriculture) developed on different multifunctional organic farms in Italy and the Netherlands, as well as the established agricultural policy frameworks in these countries, can be compared with the situation in Croatian organic agriculture and can contribute to the further development of the organic sector in the Republic of Croatia. Through its different chapters, the dissertation describes the performance of the organic agriculture sectors in Italy, the Netherlands and Croatia within their national agricultural policy frameworks; it analyzes the role of national institutions and policy in Croatia in connection with Croatia's status as a candidate country for entry into the EU and the harmonization of legislation with the CAP; and it examines the role of national authorities, universities, research centres, as well as private initiatives, NGOs and cooperatives, in organic agriculture in the Netherlands, Italy and Croatia. Its main part describes how social and educational aspects interact with other multifunctional aspects of organic agriculture and analyzes the benefits and contribution of multifunctional activities performed on organic farms to education, healthy nourishment, environmental protection and health care. It also assesses the strengths and weaknesses of organic agriculture in all the countries studied. The dissertation concludes with development opportunities for multifunctional organic agriculture in Croatia, offering perspectives and recommendations for different approaches based on experience learned from successful EU models, accompanied by some personal ideas and proposals.
Abstract:
The research addresses, in a unified manner and from a European perspective, the multifaceted phenomena of economic and juridical double taxation, taking the taxation of cross-border dividends as its initial paradigm. Having defined the legal status of double taxation, its incompatibility with the European legal order is argued, and the Community instruments for achieving the European objective of its elimination are examined. In the absence of positive harmonization, the substantive result is achieved through negative integration. It is shown that the restraint of the Court of Justice in the face of tax policy choices is only a façade, and the openings in the case law that allow it to be overcome are highlighted. These, in brief, are the fundamental steps. The starting point is the evolution of the fundamental freedoms into rights of constitutional rank, which transforms their economic content and legal scope, conferring constitutional status on the values of neutrality and non-restriction. The shift from the prohibition of discrimination to the prohibition of restrictions is then highlighted, noting the failure of the attempt to frame the prohibition of double taxation as an autonomous principle of the European legal order. At the same time, however, it becomes appropriate to re-examine the distinction between economic and juridical double taxation and to set out a single theoretical framework for double taxation as a paradigmatic case of restriction of the freedoms. Consequently, the case-law framework of the grounds of justification is rationalized. This makes it straightforward to legitimize Community choices on the allocation of taxing powers among Member States and the attribution of responsibility for eliminating the effects of double taxation. In conclusion, therefore, a European formulation of the balanced allocation of taxing powers in favour of the State of source emerges and, alongside it, a Community conception of the ability-to-pay principle, with disruptive implications yet to be verified. On the methodological level, the analysis focuses critically on the work of the Court of Justice, revealing the strengths and weaknesses of its action, which has laid the foundations for the European response to the problem of double taxation.
Abstract:
What relationship exists between a literary work and an interpretation of it? What makes the former support the latter? How can we distinguish a valid interpretation from one that is not? How can the same work have different, and at times mutually incompatible, interpretations? Taking as a starting point Nelson Goodman's proposal to qualify the literary work as allographic, and thus to define the identity of the work on the basis of its spelling, seeking an answer to these questions implies a reflection both on language, as a symbolic instrument, and on the modes of reference proper to literary works. In particular, faced with the dissolution of the world into the multiplicity of versions that language can offer of it, a particular conception of metaphor, understood as the projection of one realm of language onto another realm of the same, qualifies as a good model for understanding the relationship that links literary works and their interpretations. In this way the work itself not only becomes meaningful but, through this signification, also manages to become productive, modifying, broadening and restructuring the version of the world from which the interpreter-reader sets out. Each reading of a literary work can in fact be conceived as a path through which what is said in the work is projected onto the worldview of the interpreter and of those who may share that point of view. In this way interpretations place the works to which they refer in a position to make a significant contribution both to the understanding and to the constitution of our version of the world. And if this can happen in different ways, with interpretations changing according to who produces them and the circumstances in which they arise, the work avoids dissolution by virtue of the spelling that identifies it.
Abstract:
Tax harmonization is an important challenge that the European Union faces for the full realization of the internal market. The Community institutions, however, do not have the legislative competence to intervene directly in the tax systems of the Member States. By analysing the current legislative framework and examining the de iure condendo prospects of Union tax law, this work seeks to understand how the system may evolve, both from the point of view of substantive and of procedural tax law. The rules developed at Community level governing administrative cooperation in tax matters, with particular reference to the directives on the exchange of information and on assistance in recovery (Directive 2011/16/EU and Directive 2010/24/EU), allow the administrations of the Member States to gain access to each other's legal systems and to learn how they work. The implementation of these rules gives each legal system the opportunity to import the best practices implemented by the other States. The objective will be to improve each country's tax administrative procedure, on the one hand, and to make exchanges of information and cooperation in recovery more immediate, on the other. Tax harmonization within the Union would thus be pursued not through intervention at the European level, but through a "bottom-up" coordination of the tax systems, achieved through the cooperation of administrations operating on a substrate of shared rules. The greater openness of the tax administrations of the Member States and the greater spontaneity of exchanges of information have a deterrent effect on tax evasion and avoidance practices designed to take advantage of the differences between the tax systems of the various countries. In the long run this will likely lead the Member States to level their tax systems, since they will no longer have an interest in using taxation as a lever to generate competition between legal systems.
Abstract:
The main objective of the research is to reconstruct the state of the art in the field of eHealth and the Electronic Health Record, with particular attention to the issues of personal data protection and interoperability. To this end, binding and non-binding documents of the European Union were examined, as well as selected European and national projects (such as "Smart Open Services for European Patients" (EU); "Elektronische Gesundheitsakte" (Austria); "MedCom" (Denmark); "Infrastruttura tecnologica del Fascicolo Sanitario Elettronico", "OpenInFSE: Realizzazione di un'infrastruttura operativa a supporto dell'interoperabilità delle soluzioni territoriali di fascicolo sanitario elettronico nel contesto del sistema pubblico di connettività", "Evoluzione e interoperabilità tecnologica del Fascicolo Sanitario Elettronico", "IPSE - Sperimentazione di un sistema per l'interoperabilità europea e nazionale delle soluzioni di Fascicolo Sanitario Elettronico: componenti Patient Summary e ePrescription" (Italy)). The legal and technical analyses show the urgent need to define models that encourage the use of health data and implement effective strategies for the secondary use of digital health data, such as Open Data and Linked Open Data. Legal and technological harmonization is seen as a strategic element for reducing both the conflicts over personal data protection existing among the Member States and the lack of interoperability between the European information systems on Electronic Health Records. To this end, three guidelines were identified: (1) harmonization of legislation, (2) harmonization of rules, (3) harmonization of information system design. The principles of Privacy by Design ("proactive" and "win-win"), as well as Semantic Web standards, are considered key to achieving this change.
Abstract:
The recent financial crisis triggered an increasing demand for financial regulation to counteract the potential negative economic effects of the ever more complex operations and instruments available on financial markets. As a result, insider trading regulation counts amongst the relatively recent but particularly active regulatory battles in Europe and overseas. Claims for more transparency and equitable securities markets proliferate, ranging from concerns about investor protection to global market stability. The internationalization of the world's securities markets has challenged traditional notions of regulation and enforcement. Considering that insider trading is currently forbidden all over Europe, this study follows a law and economics approach in identifying how this prohibition should be enforced. More precisely, the study investigates, first, whether criminal law is necessary under all circumstances to enforce the insider trading prohibition and, second, whether it should be introduced at EU level. This study provides evidence of the law and economics logic underlying the legal mechanisms that guide sanctioning and public enforcement of the insider trading prohibition, by identifying the optimal forms, natures and types of sanctions that effectively induce deterrence of insider trading. The analysis further aims to reveal the economic rationale behind the potential need for harmonization of criminal enforcement of insider trading laws within the European environment, by carrying out a comparative analysis of the current legislation of eight selected Member States. This work also assesses the European Union's most recent initiative through a critical analysis of the proposal for a Directive on criminal sanctions for Market Abuse. Based on the conclusions drawn from this close analysis, the study takes on the challenge of analyzing whether or not the actual European public enforcement of the laws prohibiting insider trading is coherent with the theoretical law and economics recommendations, and how these enforcement practices could be improved.
Abstract:
It has been suggested that there are several distinct phenotypes of childhood asthma or childhood wheezing. Here, we review the research relating to these phenotypes, with a focus on the methods used to define and validate them. Childhood wheezing disorders manifest themselves in a range of observable (phenotypic) features such as lung function, bronchial responsiveness, atopy and a highly variable time course (prognosis). The underlying causes are not sufficiently understood to define disease entities based on aetiology. Nevertheless, there is a need for a classification that would (i) facilitate research into aetiology and pathophysiology, (ii) allow targeted treatment and preventive measures and (iii) improve the prediction of long-term outcome. Classical attempts to define phenotypes have been one-dimensional, relying on few or single features such as triggers (exclusive viral wheeze vs. multiple trigger wheeze) or time course (early transient wheeze, persistent and late onset wheeze). These definitions are simple but essentially subjective. Recently, a multi-dimensional approach has been adopted. This approach is based on a wide range of features and relies on multivariate methods such as cluster or latent class analysis. Phenotypes identified in this manner are more complex but arguably more objective. Although phenotypes have an undisputed standing in current research on childhood asthma and wheezing, there is confusion about the meaning of the term 'phenotype' causing much circular debate. If phenotypes are meant to represent 'real' underlying disease entities rather than superficial features, there is a need for validation and harmonization of definitions. The multi-dimensional approach allows validation by replication across different populations and may contribute to a more reliable classification of childhood wheezing disorders and to improved precision of research relying on phenotype recognition, particularly in genetics. Ultimately, the underlying pathophysiology and aetiology will need to be understood to properly characterize the diseases causing recurrent wheeze in children.
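The multi-dimensional approach mentioned above can be sketched as a standard clustering workflow: standardize a feature matrix (for example lung function, bronchial responsiveness, atopy and wheeze frequency) and group children into data-driven phenotypes. The Python example below uses k-means on synthetic data purely to illustrate the idea; the feature names and values are assumptions, and the studies discussed typically rely on latent class analysis of real cohort data rather than k-means.

```python
# Illustrative sketch of multi-dimensional (data-driven) phenotype
# identification: cluster children on several standardized features.
# Synthetic data and feature choices are for illustration only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Columns: lung function (FEV1 % predicted), bronchial responsiveness score,
# atopy (0/1), wheeze episodes per year -- all synthetic values.
features = np.column_stack([
    rng.normal(95, 10, 300),    # lung function
    rng.normal(0, 1, 300),      # bronchial responsiveness
    rng.integers(0, 2, 300),    # atopy
    rng.poisson(3, 300),        # wheeze episodes per year
])

X = StandardScaler().fit_transform(features)

# Three clusters as a stand-in for e.g. transient, persistent and late-onset
# wheeze; in practice the number of classes is chosen by model-fit criteria
# and validated by replication across populations.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for k in range(3):
    print(f"Phenotype {k}: n={np.sum(labels == k)}, "
          f"mean lung function={features[labels == k, 0].mean():.1f}")
```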
Abstract:
Many pregnancy and birth cohort studies investigate the health effects of early-life environmental contaminant exposure. An overview of existing studies and their data is needed to improve collaboration, harmonization, and future project planning.
Abstract:
With the publication of the quality guideline ICH Q9 "Quality Risk Management" by the International Conference on Harmonization, risk management has already become a standard requirement during the life cycle of a pharmaceutical product. Failure mode and effect analysis (FMEA) is a powerful risk analysis tool that has been used for decades in mechanical and electrical industries. However, the adaptation of the FMEA methodology to biopharmaceutical processes brings about some difficulties. The proposal presented here is intended to serve as a brief but nevertheless comprehensive and detailed guideline on how to conduct a biopharmaceutical process FMEA. It includes a detailed 1-to-10-scale FMEA rating table for occurrence, severity, and detectability of failures that has been especially designed for typical biopharmaceutical processes. The application for such a biopharmaceutical process FMEA is widespread. It can be useful whenever a biopharmaceutical manufacturing process is developed or scaled-up, or when it is transferred to a different manufacturing site. It may also be conducted during substantial optimization of an existing process or the development of a second-generation process. According to their resulting risk ratings, process parameters can be ranked for importance and important variables for process development, characterization, or validation can be identified. LAY ABSTRACT: Health authorities around the world ask pharmaceutical companies to manage risk during development and manufacturing of pharmaceuticals. The so-called failure mode and effect analysis (FMEA) is an established risk analysis tool that has been used for decades in mechanical and electrical industries. However, the adaptation of the FMEA methodology to pharmaceutical processes that use modern biotechnology (biopharmaceutical processes) brings about some difficulties, because those biopharmaceutical processes differ from processes in mechanical and electrical industries. The proposal presented here explains how a biopharmaceutical process FMEA can be conducted. It includes a detailed 1-to-10-scale FMEA rating table for occurrence, severity, and detectability of failures that has been especially designed for typical biopharmaceutical processes. With the help of this guideline, different details of the manufacturing process can be ranked according to their potential risks, and this can help pharmaceutical companies to identify aspects with high potential risks and to react accordingly to improve the safety of medicines.
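The guideline proposes 1-to-10 rating scales for occurrence, severity and detectability; in classical FMEA these ratings are combined into a risk priority number (RPN = occurrence × severity × detectability) that is used to rank failure modes. The Python sketch below illustrates such a ranking; the process parameters, failure descriptions and rating values are invented for illustration and are not taken from the cited proposal.

```python
# Minimal FMEA-style ranking sketch.  Each failure mode gets 1-10 ratings for
# occurrence (O), severity (S) and detectability (D); the risk priority number
# RPN = O * S * D is used to rank process parameters by importance.
from dataclasses import dataclass

@dataclass
class FailureMode:
    process_parameter: str
    failure: str
    occurrence: int     # 1 (rare) .. 10 (frequent)
    severity: int       # 1 (negligible) .. 10 (critical)
    detectability: int  # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        return self.occurrence * self.severity * self.detectability

# Hypothetical biopharmaceutical process examples.
failure_modes = [
    FailureMode("Bioreactor pH", "pH excursion during fed-batch", 4, 7, 3),
    FailureMode("Column load density", "Overloading of capture column", 3, 6, 2),
    FailureMode("Buffer conductivity", "Wrong buffer preparation", 2, 8, 5),
]

# Rank by RPN: high values flag parameters that are important for process
# development, characterization or validation.
for fm in sorted(failure_modes, key=lambda f: f.rpn, reverse=True):
    print(f"RPN {fm.rpn:4d}  {fm.process_parameter}: {fm.failure}")
```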
Abstract:
Prediction of clinical outcome in cancer is usually achieved by histopathological evaluation of tissue samples obtained during surgical resection of the primary tumor. Traditional tumor staging (AJCC/UICC-TNM classification) summarizes data on tumor burden (T), presence of cancer cells in draining and regional lymph nodes (N) and evidence for metastases (M). However, it is now recognized that clinical outcome can vary significantly among patients within the same stage. The current classification provides limited prognostic information and does not predict response to therapy. Recent literature has highlighted the importance of the host immune system in controlling tumor progression. Thus, the evidence supports the inclusion of immunological biomarkers as a tool for the prediction of prognosis and response to therapy. Accumulating data, collected from large cohorts of human cancers, have demonstrated the impact of immune classification, which has a prognostic value that may add to the significance of the AJCC/UICC TNM classification. It is therefore imperative to begin to incorporate the 'Immunoscore' into traditional classification, thus providing an essential prognostic and potentially predictive tool. Introduction of this parameter as a biomarker to classify cancers, as part of routine diagnostic and prognostic assessment of tumors, will facilitate clinical decision-making, including rational stratification of patient treatment. Equally, the inherent complexity of quantitative immunohistochemistry, in conjunction with protocol variation across laboratories, analysis of different immune cell types, inconsistent region selection criteria, and variable ways to quantify immune infiltration, all underline the urgent requirement to reach assay harmonization. In an effort to promote the Immunoscore in routine clinical settings, an international task force was initiated. This review represents a follow-up of the announcement of this initiative, and of the J Transl Med editorial from January 2012. Immunophenotyping of tumors may provide crucial novel prognostic information. The results of this international validation may result in the implementation of the Immunoscore as a new component for the classification of cancer, designated TNM-I (TNM-Immune).
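At its core, an Immunoscore-type classification is a simple aggregation: densities of two immune cell populations measured in two tumor regions are each dichotomized against a cutoff, and the number of "high" regions yields a score from 0 to 4. The Python sketch below illustrates that aggregation; the markers, cutoffs and patient values are placeholders for illustration, not the harmonized assay parameters the task force is working towards.

```python
# Simplified sketch of an Immunoscore-like calculation: densities of two
# immune cell populations (e.g. CD3+ and CD8+) are measured in the tumor
# core (CT) and invasive margin (IM), each density is classified as high or
# low against a cohort-derived cutoff, and the number of "high" regions
# (0-4) gives the score.  All cutoffs and values below are placeholders.
CUTOFFS = {  # cells per mm^2, hypothetical
    ("CD3", "CT"): 500, ("CD3", "IM"): 700,
    ("CD8", "CT"): 200, ("CD8", "IM"): 300,
}

def immunoscore(densities: dict[tuple[str, str], float]) -> int:
    """Count how many of the four marker/region densities reach their cutoff."""
    return sum(1 for key, cutoff in CUTOFFS.items() if densities[key] >= cutoff)

patient = {
    ("CD3", "CT"): 620.0, ("CD3", "IM"): 540.0,
    ("CD8", "CT"): 260.0, ("CD8", "IM"): 410.0,
}
print(f"Immunoscore I{immunoscore(patient)}")  # three regions are "high", so I3
```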
Abstract:
Globalisation in coronary stent research calls for harmonization of clinical endpoint definitions and event adjudication. Little has been published about the various processes used for event adjudication or their impact on outcome reporting.
Abstract:
From the moment of their birth, a person's life is determined by their sex. Ms Goroshko wants to know why this difference is so striking, why society is so concerned to sustain it, and how it is able to persist even when certain national or behavioural stereotypes are erased between people. She is convinced of the existence of not only social, but biological differences between men and women, and set herself the task, in a manuscript totalling 126 pages, written in Ukrainian and including extensive illustrations, of analysing these distinctions as they are manifested in language. She points out that, even before 1900, certain stylistic differences between the ways that men and women speak had been noted. Since then it has become possible, for instance in the case of Japanese, to point to examples of male and female sub-languages. In general, one can single out the following characteristics. Males tend to write with less fluency, to refer to events in a verb-phrase, to be time-oriented, to involve themselves more in their references to events, to locate events in their personal sphere of activity, and to refer less to others. Therefore, concludes Ms Goroshko, the male is shown to be more active, more ego-involved in what he does, and less concerned about others. Women, in contrast, were more fluent, referred to events in a noun-phrase, were less time-oriented, tended to be less involved in their event-references, located events within their interactive community and referred more to others. They spent much more time discussing personal and domestic subjects, relationship problems, family, health and reproductive matters, weight, food and clothing, men, and other women. As regards discourse strategies, Ms Goroshko notes the following. Men more often begin a conversation, they make more utterances, these utterances are longer, they make more assertions, speak less carefully, generally determine the topic of conversation, speak more impersonally, use more vulgar expressions, and use fewer diminutives and more imperatives. Women's speech strategies, apart from being the opposite of those enumerated above, also contain more euphemisms, polite forms, apologies, laughter and crying. All of the above leads Ms Goroshko to conclude that the differences between male and female speech forms are more striking than the similarities. Furthermore, she is convinced that the biological divergence between the sexes is what generates the verbal divergence, and that social factors can only intensify or diminish the differentiation in verbal behaviour established by the sex of a person. Bearing all this in mind, Ms Goroshko set out to construct a grammar of male and female styles of speaking within Russian. One of her most important research tools was a certain type of free association test. She took a list comprising twelve stimuli (to love, to have, to speak, to fuck, a man, a woman, a child, the sky, a prayer, green, beautiful) and gave it to a group of participants specially selected, according to preliminary psychological testing, for the high levels of masculinity or femininity they displayed.
Preliminary responses revealed that the female reactions were more diverse than the male ones, that there were more sentences and word combinations in the female reactions, that men gave more negative responses to the stimuli and sometimes did not want to react at all, that women reacted more to adjectives and men to nouns, and that, surprisingly, women coloured their reactions to the words a man, to love and a child more negatively (Ms Goroshko is inclined to attribute this to the present economic situation in Russia). Another test performed by Ms Goroshko was the so-called "defective text" developed by A.A. Brudny. All participants were given packets of complete sentences, which had been taken from a text and then mixed at random. The task was to reconstruct the original text. There were three types of test, the first descriptive, the second narrative, and the third logical. Ms Goroshko created computer programmes to analyse the results. She found that none of the reconstructed texts coincided with the original (they differed both from the original text and among themselves), and that there were many more disparities in the male texts than in the female ones. In the descriptive and logical texts the differences manifested themselves more clearly in the male texts, and in the narrative texts in the female texts. The widest dispersal of values was observed at the outset, while the female text ending was practically coincident with the original (in contrast to the male ending). The greatest differences in text reconstruction for both males and females were registered in the middle of the texts. Women, Ms Goroshko claims, were more sensitive to the semantic structure of the texts, since they assembled the narrative text much more accurately than the other two, while the men assembled the logical text more accurately. Texts written by women were assembled more accurately by women, and texts by men by men. On the basis of computer analysis, Ms Goroshko found that female speech was substantially more emotional. This emotionality was expressed by various means: hyperbole, metaphor, comparisons, epithets, enumeration, and with the aid of interjections, rhetorical questions and exclamations. The level of literacy was higher for female speech, and there were fewer mistakes in grammar and spelling in female texts. The last stage of Ms Goroshko's research concerned the social stereotypes and beliefs about men and women in Russian society today. A large number of respondents were asked questions such as "What merits must a woman possess?", "What are male vices and virtues?", etc. After statistical analysis, an image of the modern man and woman, as it exists in the minds of modern Russian men and women, emerged. Ms Goroshko believes that her findings are significant not only within the field of linguistics. She has already successfully worked on anonymous texts and been able to decide on the sex of the author, and consequently believes that in the future her research may even be of benefit to forensic science.
Abstract:
From the moment of their birth, a person's life is determined by their sex. Goroshko wanted to find out why this difference is so striking, why society is so determined to sustain it, and how it can persist even when certain national or behavioural stereotypes are erased. She believes there are both social and biological differences between men and women, and set out to analyse these distinctions as they are manifested in language. Certain general characteristics can be identified. Males tend to write with less fluency, to refer to events in a verb phrase, to be time-oriented, to involve themselves more in their references to events, to locate events in their personal sphere of activity, and to refer less to others. Goroshko therefore concludes that the male is more active, more ego-involved in what he does and less concerned about others. Women were more fluent, referred to events in a noun-phrase, were less time-oriented, tended to be less involved in their event references, located events within their interactive community, and referred more to others. They spent much more time discussing personal and domestic subjects, relationship problems, family, health and reproductive matters, weight, food and clothing, men, and other women. Computer analysis showed that female speech was substantially more emotional, using hyperbole, metaphor, comparisons, epithets, ways of enumeration, interjections, rhetorical questions and exclamations. The level of literacy was higher in female speech, and women made fewer grammatical and spelling mistakes in written texts. Goroshko believes that her findings have relevance beyond the linguistic field. When working on anonymous texts she has been able to decide on the sex of the author and so believes that her research may even be of benefit to forensic science.
Abstract:
From the beginning of the standardisation of language in Bosnia and Herzegovina, i.e. from the acceptance of Karadzic's phonetic spelling in the mid-19th century, to the present day, when three different language standards are in force (Bosniac (Muslim), Croatian and Serbian), language in Bosnia and Herzegovina has been a subject of political conflict. Documents on language policy from this period show the degree to which domestic and foreign political factors influenced the standard language issue, beginning with the very name given to the standard itself. The material analysed (proclamations by political, cultural and other organisations as well as the corresponding constitutional and statutory provisions on language use) shows the differing treatment of the standard language in Bosnia and Herzegovina in different historical periods. During the period of Turkish rule (until 1878) there was no real political interest in the issue. Under Austro-Hungarian rule (1878-1918) there was an attempt to use the language as a means of forming a united Bosnian nation, but this was later abandoned. During the first Yugoslavia (1918-1941) a uniform solution was imposed on Bosnia and Herzegovina, as throughout the Serbo-Croatian language area, while under the Independent State of Croatia (1941-1945) the official language of Bosnia and Herzegovina was Croatian. The period from 1945 to 1991 had two phases: the first, a standard language unity of Serbs, Croats, Muslims and Montenegrins (until 1965), and the second, a gradual but stormy separation of the national languages, which has been largely completed since 1991. The introductory study includes a detailed analysis of all the terms used, with special reference to the present state of affairs, and accompanies the collection of documents which represents the main outcome of the research.