Abstract:
The primary goal of a phase I trial is to find the maximally tolerated dose (MTD) of a treatment. The MTD is usually defined in terms of a tolerable probability, q*, of toxicity. Our objective is to find the highest dose with toxicity risk that does not exceed q*, a criterion that is often desired in designing phase I trials. This criterion differs from that of finding the dose with toxicity risk closest to q*, which is used in methods such as the continual reassessment method. We use the theory of decision processes to find optimal sequential designs that maximize the expected number of patients within the trial allocated to the highest dose with toxicity not exceeding q*, among the doses under consideration. The proposed method is very general in the sense that criteria other than the one considered here can be optimized and that optimal dose assignment can be defined in terms of patients within or outside the trial. It includes the continual reassessment method as an important special case. A numerical study indicates that the strategy compares favourably with other phase I designs.
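The selection criterion described above — take the highest dose whose toxicity risk does not exceed q* — can be sketched with conjugate Beta posteriors. This is a minimal illustration of the criterion only, not the authors' optimal sequential design; the Beta(1,1) priors, the 0.5 posterior-confidence cut-off and the toy cohort counts are all assumptions.

```python
import math

def beta_cdf(x, a, b, steps=10_000):
    # CDF of Beta(a, b) at x by trapezoidal integration (assumes a, b >= 1).
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    h = x / steps
    dens = [norm * (i * h) ** (a - 1) * (1 - i * h) ** (b - 1)
            for i in range(steps + 1)]
    return h * (sum(dens) - (dens[0] + dens[-1]) / 2)

def select_dose(tox, n, q_star=0.3, conf=0.5):
    """Highest dose whose posterior P(toxicity rate <= q_star) is at least conf.
    tox[i], n[i]: toxicities and patients observed at dose i; Beta(1,1) priors."""
    best = None
    for i, (t, m) in enumerate(zip(tox, n)):
        if beta_cdf(q_star, 1 + t, 1 + m - t) >= conf:
            best = i  # dose i looks acceptably safe; keep the highest such dose
    return best

# Toy cohort data: 0/6, 1/6 and 5/6 toxicities at three increasing doses.
print(select_dose([0, 1, 5], [6, 6, 6]))  # → 1
```

With these counts the top dose is clearly too toxic, so the criterion settles on the middle dose rather than the dose whose estimated risk is merely closest to q*.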
Abstract:
This study reports a diachronic corpus investigation of common-number pronouns used to convey unknown or otherwise unspecified reference. The study charts agreement patterns in these pronouns in various diachronic and synchronic corpora. The objective is to provide baseline data on variant frequencies and distributions in the history of English, as there are no previous systematic corpus-based observations on this topic. The study seeks to answer the questions of how pronoun use is linked with the overall typological development of English and how the diachronic evolution of these pronouns is embedded in the linguistic and social structures in which they are used. The theoretical framework draws on corpus linguistics, historical sociolinguistics, grammaticalisation, diachronic typology, and multivariate modelling of sociolinguistic variation. The method employs quantitative corpus analyses of two main electronic corpora, one from Modern English and the other from Present-day English. The Modern English material is the Corpus of Early English Correspondence, and the time frame covered is 1500-1800. The written component of the British National Corpus is used in the Present-day English investigations. In addition, the study draws supplementary data from other electronic corpora. The material is used to compare the frequencies and distributions of common-number pronouns between these two time periods. The study limits the common-number uses to two subsystems, one anaphoric to grammatically singular antecedents and one cataphoric, in which the pronoun is followed by a relative clause. Various statistical tools are used to process the data, ranging from cross-tabulations to multivariate VARBRUL analyses in which the effects of sociolinguistic and systemic parameters are assessed to model their impact on the dependent variable.
This study shows how one pronoun type has extended its uses in both subsystems, an increase linked with grammaticalisation and the changes in other pronouns in English through the centuries. The variationist sociolinguistic analysis charts how grammaticalisation in the subsystems is embedded in the linguistic and social structures in which the pronouns are used. The study suggests a scale of two statistical generalisations of various sociolinguistic factors which contribute to grammaticalisation and its embedding at various stages of the process.
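The VARBRUL analyses mentioned above are, at bottom, logistic regression over categorical factor groups. A toy sketch of that idea follows; the factor names, effect sizes and data are invented, and plain gradient ascent stands in for VARBRUL's own fitting routine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: does a writer use the common-number pronoun (1) or not (0),
# given two binary factor groups (say, register and period)?
X = rng.integers(0, 2, size=(400, 2)).astype(float)
logit = -0.5 + 1.2 * X[:, 0] + 0.8 * X[:, 1]  # assumed "true" factor effects
y = (rng.random(400) < 1 / (1 + np.exp(-logit))).astype(float)

# Plain logistic regression by gradient ascent; VARBRUL factor weights are
# a reparameterisation of coefficients like these.
Xb = np.column_stack([np.ones(len(X)), X])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w += 0.05 * Xb.T @ (y - p) / len(y)  # average log-likelihood gradient

print(np.round(w, 2))  # should land near the assumed effects (-0.5, 1.2, 0.8)
```

Positive coefficients correspond to factors favouring the variant, which is exactly the information a VARBRUL factor-weight table reports.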
Abstract:
The work is based on the assumption that words with similar syntactic usage have similar meaning, which was proposed by Zellig S. Harris (1954, 1968). We study his assumption from two aspects: firstly, different meanings (word senses) of a word should manifest themselves in different usages (contexts), and secondly, similar usages (contexts) should lead to similar meanings (word senses). If we start with the different meanings of a word, we should be able to find distinct contexts for the meanings in text corpora. We separate the meanings by grouping and labeling contexts in an unsupervised or weakly supervised manner (Publications 1, 2 and 3). We are confronted with the question of how best to represent contexts in order to induce effective classifiers of contexts, because differences in context are the only means we have to separate word senses. If we start with words in similar contexts, we should be able to discover similarities in meaning. We can do this monolingually or multilingually. In the monolingual material, we find synonyms and other related words in an unsupervised way (Publication 4). In the multilingual material, we find translations by supervised learning of transliterations (Publication 5). In both the monolingual and multilingual case, we first discover words with similar contexts, i.e., synonym or translation lists. In the monolingual case we also aim at finding structure in the lists by discovering groups of similar words, e.g., synonym sets. In this introduction to the publications of the thesis, we consider the larger background issues of how meaning arises, how it is quantized into word senses, and how it is modeled. We also consider how to define, collect and represent contexts. We discuss how to evaluate the trained context classifiers and discovered word sense classifications, and finally we present the word sense discovery and disambiguation methods of the publications.
This work supports Harris' hypothesis by implementing three new methods modeled on his hypothesis. The methods have practical consequences for creating thesauruses and translation dictionaries, e.g., for information retrieval and machine translation purposes. Keywords: Word senses, Context, Evaluation, Word sense disambiguation, Word sense discovery.
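The first strand — separating senses by grouping contexts — can be caricatured in a few lines: represent each context as a bag-of-words vector and cluster. The ambiguous word, the contexts and the two-means seeding below are all invented for illustration; the thesis's actual methods are those of Publications 1-3.

```python
import numpy as np

# Toy contexts for an ambiguous word such as "bank" (word lists are illustrative).
contexts = [
    "deposit money account loan", "loan interest money deposit",
    "account interest deposit cash", "river water fishing shore",
    "water shore river boat", "fishing boat water river",
]

vocab = sorted({w for c in contexts for w in c.split()})
X = np.array([[c.split().count(w) for w in vocab] for c in contexts], float)

# Two-means clustering of context vectors: similar contexts -> same sense label.
centers = X[[0, 3]].copy()  # seed one centre in each toy region
for _ in range(10):
    d = ((X[:, None] - centers[None]) ** 2).sum(-1)  # squared distances (6 x 2)
    labels = d.argmin(1)
    for k in range(2):
        centers[k] = X[labels == k].mean(0)

print(labels)  # → [0 0 0 1 1 1]: one induced sense per topic
```

The induced labels recover the two usages purely from distributional evidence, which is the Harris assumption in miniature.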
Abstract:
This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov Chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via a Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results will reflect uncertainty in the final model and will report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally light-weight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
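The core overfitting idea — fit more components than needed and let a sparse prior push the extra weights toward zero — can be imitated with a toy MAP-EM fit of a deliberately overfitted univariate Gaussian mixture. This is only a loose analogue: Zmix itself uses MCMC with prior parallel tempering, and the K = 5, alpha = 0.01 and simulated data below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Data truly generated by 2 components; deliberately overfit with K = 5.
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
K = 5
w = np.full(K, 1 / K)
mu = np.quantile(x, np.linspace(0.1, 0.9, K))
var = np.full(K, 1.0)
alpha = 0.01  # Dirichlet concentration < 1 encourages extra weights toward zero

for _ in range(500):
    # E-step: responsibilities of each component for each point
    logp = np.log(np.maximum(w, 1e-300)) - 0.5 * (
        (x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var))
    r = np.exp(logp - logp.max(1, keepdims=True))
    r /= r.sum(1, keepdims=True)
    nk = r.sum(0)
    # M-step with MAP weights: components starved of data are pruned to zero
    w = np.maximum(nk + alpha - 1, 0)
    w /= w.sum()
    live = nk > 1e-9
    mu[live] = (r[:, live].T @ x) / nk[live]
    sq = (x[:, None] - mu) ** 2
    var[live] = (r[:, live] * sq[:, live]).sum(0) / nk[live] + 1e-6

print(np.round(np.sort(w)[::-1], 2))  # mass concentrates on a few components
```

Unlike the toy MAP-EM above, Zmix's sampler also reports uncertainty over the number of components and resolves label switching, which point estimates cannot do.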
Abstract:
Emergency Medical Dispatchers (EMDs) are charged with taking the calls of those who ring the national emergency number for urgent medical assistance, dispatching paramedical crews, and providing as much assistance as can be offered remotely until paramedics arrive. In a job role which is filled with vicarious trauma, emergency situations, pressure, abuse, grief and loss, EMDs are often challenged in maintaining their mental health. The seemingly senseless death of a teenager who commits suicide, the devastating loss of a baby to Sudden Infant Death Syndrome, lives lost through natural disasters, and multiple vehicle fatalities are only a few of the types of experiences EMDs are faced with in the course of their work. However, amongst the horror are positive stories, such as coaching a caller through the birth of a baby or saving a life in jeopardy from heart failure. EMDs need to cope with the daily challenges of the role, make sense of their work and create meaning in order to have a fulfilled and sustainable career. Although some people in this work struggle greatly to withstand the impacts of vicarious trauma, there are also stories of personal growth. In this chapter we use a case study to explore how meaning is made by those who are an auditory witness to a continual flux of trauma for others, and how the traumatic experiences EMDs bear witness to can also be a catalyst for posttraumatic growth.
Abstract:
Techniques for the introduction of transgenes to control blackheart by particle bombardment and Agrobacterium co-transformation have been developed for pineapple cv. Smooth Cayenne. Polyphenol oxidase (PPO) is the enzyme responsible for blackheart development in pineapple fruit following chilling injury. Sense, anti-sense and hairpin constructs were used as a means to suppress PPO expression in plants. Average transformation efficiency for biolistics was approximately 1% and for Agrobacterium was approximately 1.5%. These results were considered acceptable given the high regeneration potential of 80-90% from callus cultures. Southern blot analysis revealed stable integration of transgenes, with lower copy number found in plants transformed with Agrobacterium compared to those transformed by biolistics. Over 5000 plants from 55 transgenic lines are now undergoing field evaluation in Australia.
Abstract:
This dissertation explores the role of the German minister to Helsinki, Wipert von Blücher (1883-1963), within the German-Finnish relations of the late 1930s and the Second World War. Blücher was a key figure – and certainly one of the constants – within German Finland policy and the complex international diplomacy surrounding Finland. Despite representing Hitler’s Germany, he was not a National Socialist in the narrower sense of the term, but a conservative civil servant in the Wilhelmine tradition of the German foreign service. Along with a significant number of career diplomats, Blücher attempted to restrict National Socialist influence on the exercise of German foreign policy, whilst successfully negotiating a modus vivendi with the new regime. The study of his political biography in the Third Reich hence provides a highly representative example of how the traditional élites of Germany were caught in a cycle of conformity and, albeit tacit, opposition. Above all, however, the biographical study of Blücher and his behaviour offers a hitherto unexplored approach to the history of German-Finnish relations. His unusually long tenure in Helsinki covered the period leading up to the so-called Winter War, which left Blücher severely distraught by Berlin’s effectively pro-Soviet neutrality and brought him close to resigning his post. It further extended to the German-Finnish rapprochement of 1940/41 and the military cooperation of both countries from mid-1941 to 1944. Throughout, Blücher developed a diverse and ambitious set of policy schemes, largely rooted in the tradition of Wilhelmine foreign policy. In their moderation and commonsensical realism, his designs – indeed his entire conception of foreign policy – clashed with the foreign political and ideological premises of the National Socialist regime. In its theoretical grounding, the analysis of Blücher’s political schemes is built on the concept of alternative policy and indebted to A.J.P.
Taylor’s definition of dissent in foreign policy. It furthermore rests upon the assumption, introduced by Wolfgang Michalka, that National Socialist foreign policy was dominated by a plurality of rival conceptions, players, and institutions competing for Hitler’s favour (‘Konzeptionen-Pluralismus’). Although primarily a study in the history of international relations, my research has substantially benefited from more recent developments within cultural history, particularly research on nobility and élites, and the renewed focus on autobiography and conceptions of the self. On an abstract level, the thesis touches upon some of the basic components of German politics, political culture, and foreign policy in the first half of the 20th century: national belonging and conflicting loyalties, self-perception and representation, élites and their management of power, the modern history of German conservatism, the nature and practice of diplomacy, and, finally, the intricate relationship between the ethics of the professional civil service and absolute moral principles. Against this backdrop, the examination of Blücher’s role both within Finnish politics and the foreign policy of the Third Reich highlights the biographical dimension of German-Finnish relations, while fathoming the determinants of individual human agency in the process.
Abstract:
The development of innovative methods of stock assessment is a priority for State and Commonwealth fisheries agencies. It is driven by the need to facilitate sustainable exploitation of naturally occurring fisheries resources for the current and future economic, social and environmental well-being of Australia. This project was initiated in this context and took advantage of considerable recent achievements in genomics that are shaping our comprehension of the DNA of humans and animals. The basic idea behind this project was that genetic estimates of effective population size, which can be made from empirical measurements of genetic drift, are equivalent to estimates of the number of successful spawners, an important parameter in the process of fisheries stock assessment. The broad objectives of this study were to 1. critically evaluate a variety of mathematical methods of calculating effective spawner numbers (Ne) by a. conducting comprehensive computer simulations, and by b. analysing empirical data collected from the Moreton Bay population of tiger prawns (P. esculentus); 2. lay the groundwork for the application of the technology in the northern prawn fishery (NPF); and 3. produce software for the calculation of Ne and make it widely available. The project pulled together a range of mathematical models for estimating current effective population size from diverse sources. Some of them had been recently implemented with the latest statistical methods (e.g. the Bayesian framework of Berthier, Beaumont et al. 2002), while others had lower profiles (e.g. Pudovkin, Zaykin et al. 1996; Rousset and Raymond 1995). Computer code, and later software with a user-friendly interface (NeEstimator), was produced to implement the methods. This was used as a basis for simulation experiments to evaluate the performance of the methods with an individual-based model of a prawn population.
Following the guidelines suggested by computer simulations, the tiger prawn population in Moreton Bay (south-east Queensland) was sampled for genetic analysis with eight microsatellite loci in three successive spring spawning seasons in 2001, 2002 and 2003. As predicted by the simulations, the estimates had non-infinite upper confidence limits, which is a major achievement for the application of the method to a naturally-occurring, short generation, highly fecund invertebrate species. The genetic estimate of the number of successful spawners was around 1000 individuals in two consecutive years. This contrasts with about 500,000 prawns participating in spawning. It is not possible to distinguish successful from non-successful spawners so we suggest a high level of protection for the entire spawning population. We interpret the difference in numbers between successful and non-successful spawners as a large variation in the number of offspring per family that survive – a large number of families have no surviving offspring, while a few have a large number. We explored various ways in which Ne can be useful in fisheries management. It can be a surrogate for spawning population size, assuming the ratio between Ne and spawning population size has been previously calculated for that species. Alternatively, it can be a surrogate for recruitment, again assuming that the ratio between Ne and recruitment has been previously determined. The number of species that can be analysed in this way, however, is likely to be small because of species-specific life history requirements that need to be satisfied for accuracy. The most universal approach would be to integrate Ne with spawning stock-recruitment models, so that these models are more accurate when applied to fisheries populations. A pathway to achieve this was established in this project, which we predict will significantly improve fisheries sustainability in the future. 
Regardless of the success of integrating Ne into spawning stock-recruitment models, Ne could be used as a fisheries monitoring tool. Declines in spawning stock size or increases in natural or harvest mortality would be reflected by a decline in Ne. This would be valuable for data-poor fisheries as it provides fishery-independent information; however, we suggest a species-by-species approach, as some species may be too numerous or experiencing too much migration for the method to work. During the project two important theoretical studies of the simultaneous estimation of effective population size and migration were published (Vitalis and Couvet 2001b; Wang and Whitlock 2003). These methods, combined with the collection of preliminary genetic data from the tiger prawn population in the southern Gulf of Carpentaria and a computer simulation study that evaluated the effect of differing reproductive strategies on genetic estimates, suggest that this technology could make an important contribution to the stock assessment process in the northern prawn fishery (NPF). Advances in the genomics world are rapid, and already a cheaper, more reliable substitute for microsatellite loci in this technology is available. Digital data from single nucleotide polymorphisms (SNPs) are likely to supersede ‘analogue’ microsatellite data, making it cheaper and easier to apply the method to species with large population sizes.
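The temporal approach behind such estimates — effective size inferred from how much allele frequencies drift between sampling seasons — can be sketched as follows. This is a generic textbook estimator (Nei and Tajima's standardised variance Fc with a Waples-style sampling correction), not the project's NeEstimator software, and all simulated numbers below are assumptions.

```python
import numpy as np

def ne_temporal(p0, pt, s0, st, t):
    """Temporal-method estimate of Ne from allele-frequency change.
    p0, pt: allele frequencies observed t generations apart;
    s0, st: numbers of individuals genotyped at the two time points."""
    p0, pt = np.asarray(p0, float), np.asarray(pt, float)
    fc = np.mean((p0 - pt) ** 2 / ((p0 + pt) / 2 - p0 * pt))
    drift = fc - 1 / (2 * s0) - 1 / (2 * st)  # subtract sampling noise
    return t / (2 * drift) if drift > 0 else float("inf")

# Simulated check: 200 loci drifting for 5 generations at true Ne = 200.
rng = np.random.default_rng(42)
p = rng.uniform(0.2, 0.8, 200)
q = p.copy()
for _ in range(5):
    q = rng.binomial(2 * 200, q) / (2 * 200)   # one generation of drift
x = rng.binomial(2 * 1000, p) / (2 * 1000)     # 1000 individuals sampled
y = rng.binomial(2 * 1000, q) / (2 * 1000)     # per time point
print(round(ne_temporal(x, y, 1000, 1000, t=5)))  # close to the true 200
```

The infinite upper bound mentioned in the abstract corresponds to the case where the corrected drift term is not distinguishable from zero, which the sketch returns as infinity.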
Abstract:
In this paper we study two problems in feedback stabilization. The first is the simultaneous stabilization problem, which can be stated as follows. Given plants G_0, G_1, ..., G_l, does there exist a single compensator C that stabilizes all of them? The second is that of stabilization by a stable compensator, or more generally, a "least unstable" compensator. Given a plant G, we would like to know whether or not there exists a stable compensator C that stabilizes G; if not, what is the smallest number of right half-plane poles (counted according to their McMillan degree) that any stabilizing compensator must have? We show that the two problems are equivalent in the following sense. The problem of simultaneously stabilizing l + 1 plants can be reduced to the problem of simultaneously stabilizing l plants using a stable compensator, which in turn can be stated as the following purely algebraic problem. Given 2l matrices A_1, ..., A_l, B_1, ..., B_l, where A_i, B_i are right-coprime for all i, does there exist a matrix M such that A_i + M B_i is unimodular for all i? Conversely, the problem of simultaneously stabilizing l plants using a stable compensator can be formulated as one of simultaneously stabilizing l + 1 plants. The problem of determining whether or not there exists an M such that A + BM is unimodular, given a right-coprime pair (A, B), turns out to be a special case of a question concerning a matrix division algorithm in a proper Euclidean domain. We give an answer to this question, and we believe this result might be of some independent interest. We show that, given two n × m plants G_0 and G_1, we can generically stabilize them simultaneously provided either n or m is greater than one. In contrast, simultaneous stabilizability of two single-input-single-output plants, g_0 and g_1, is not generic.
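A toy scalar instance of simultaneous stabilization (invented for illustration, far simpler than the matrix machinery above): the unstable plants g0(s) = 1/(s-1) and g1(s) = 1/(s-2) in unity negative feedback with a constant compensator C(s) = k have closed-loop characteristic polynomial s - a + k, so any k > 2 — itself a stable compensator — stabilizes both plants at once.

```python
import numpy as np

def closed_loop_poles(a, k):
    # Plant 1/(s - a) with compensator k in unity negative feedback:
    # characteristic equation 1 + k/(s - a) = 0, i.e. s + (k - a) = 0.
    return np.roots([1.0, k - a])

k = 3.0  # one constant (hence stable) compensator for both plants
for a in (1.0, 2.0):
    assert all(p.real < 0 for p in closed_loop_poles(a, k))
print("C(s) = 3 simultaneously stabilizes g0 and g1")
```

This also hints at the paper's equivalence: finding one compensator for two plants here reduces to finding a stable compensator (a constant) that works for both.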
Abstract:
The thesis studies the translation process for the laws of Finland as they are translated from Finnish into Swedish. The focus is on revision practices, norms and workplace procedures. The translation process studied covers three institutions and four revisions. In three separate studies the translation process is analyzed from the perspective of the translations, the institutions and the actors. The general theoretical framework is Descriptive Translation Studies. For the analysis of revisions made in versions of the Swedish translation of Finnish laws, a model is developed covering five grammatical categories (textual revisions, syntactic revisions, lexical revisions, morphological revisions and content revisions) and four norms (legal adequacy, correct translation, correct language and readability). A separate questionnaire-based study was carried out with translators and revisers at the three institutions. The results show that the number of revisions does not decrease during the translation process, and no division of labour can be seen at the different stages. This is somewhat surprising if the revision process is regarded as one of quality control. Instead, all revisers make revisions on every level of the text. Further, the revisions do not necessarily imply errors in the translations but are often the result of revisers following different norms for legal translation. The informal structure of the institutions and its impact on communication, visibility and workplace practices was studied from the perspective of organization theory. The results show weaknesses in the communicative situation, which affect the co-operation both between institutions and individuals. Individual attitudes towards norms and their relative authority also vary, in the sense that revisers largely prioritize legal adequacy whereas translators give linguistic norms a higher value. 
Further, multi-professional teamwork in the institutions studied shows a kind of teamwork based on individuals and institutions doing specific tasks with little contact with others. This shows that the established definitions of teamwork, with people co-working in close contact with each other, cannot directly be applied to the workplace procedures in the translation process studied. Three new concepts are introduced: flerstegsrevidering (multi-stage revision), revideringskedja (revision chain) and normsyn (norm attitude). The study seeks to make a contribution to our knowledge of legal translation, translation processes, institutional translation, revision practices and translation norms for legal translation. Keywords: legal translation, translation of laws, institutional translation, revision, revision practices, norms, teamwork, organizational informal structure, translation process, translation sociology, multilingual.
Abstract:
The Queensland Great Barrier Reef line fishery in Australia is regulated via a range of input and output controls including minimum size limits, daily catch limits and commercial catch quotas. As a result of these measures a substantial proportion of the catch is released or discarded. The fate of these released fish is uncertain, but hook-related mortality can potentially be decreased by using hooks that reduce the rates of injury, bleeding and deep hooking. There is also the potential to reduce the capture of non-target species through gear selectivity. A total of 1053 individual fish representing five target species and three non-target species were caught using six hook types comprising three hook patterns (non-offset circle, J and offset circle), each in two sizes (small 4/0 or 5/0, and large 8/0). Catch rates for each of the hook patterns and sizes varied between species, with no consistent results for target or non-target species. When data for all of the fish species were aggregated there was a trend for larger hooks, J hooks and offset circle hooks to cause a greater number of injuries. Using larger hooks was more likely to result in bleeding, although this trend was not statistically significant. Larger hooks were also more likely to foul-hook fish or hook fish in the eye. There was a reduction in the rates of injuries and bleeding for both target and non-target species when using the smaller hook sizes. For a number of species included in our study the incidence of deep hooking decreased when using non-offset circle hooks; however, these results were not consistent for all species. Our results highlight the variability in hook performance across a range of tropical demersal finfish species. The most obvious conservation benefits for both target and non-target species arise from using smaller sized hooks and non-offset circle hooks.
Fishers should be encouraged to use these hook configurations to reduce the potential for post-release mortality of released fish.
Abstract:
Making Sense of Mass Education provides an engaging and accessible analysis of traditional issues associated with mass education. The book challenges preconceptions about social class, gender and ethnicity discrimination; highlights the interplay between technology, media, popular culture and schooling; and inspects the relevance of ethics and philosophy in the modern classroom. This new edition has been comprehensively updated to provide current information regarding literature, statistics and legal policies, and significantly expands on the previous edition's structure of debunking traditional myths about education as points of discussion. It also features two new chapters on Big Data and Globalisation and what they mean for the Australian classroom. Written for students, practising teachers and academics alike, Making Sense of Mass Education summarises the current educational landscape in Australia and looks at fundamental issues in society as they relate to education.
Abstract:
In the future the number of disabled drivers requiring a special evaluation of their driving ability will increase due to the ageing population as well as the progress of adaptive technology. This places pressure on the development of the driving evaluation system. Despite quite intensive research there is still no consensus concerning what the factual situation in a driver evaluation is (methodology), which measures should be included in an evaluation (methods), and how an evaluation should be carried out (practice). In order to find answers to these questions we carried out empirical studies and simultaneously elaborated a conceptual model of driving and driving evaluation. The findings of the empirical studies can be condensed into the following points: 1) Driving ability as defined by the on-road driving test is associated with different laboratory measures depending on the study group. Faults in the laboratory tests predicted faults in the on-road driving test in the novice group, whereas slowness in the laboratory predicted driving faults in the experienced drivers group. 2) The Parkinson study clearly showed that even an experienced clinician cannot reliably accomplish an evaluation of a disabled person’s driving ability without collaboration with other specialists. 3) The main finding of the stroke study was that the use of a multidisciplinary team as a source of information harmonises the specialists’ evaluations. 4) The patient studies demonstrated that disabled persons themselves, as well as their spouses, are as a rule not reliable evaluators. 5) From the safety point of view, perceptible operations with the control devices are not crucial; rather, the correct mental actions which the driver carries out with the help of the control devices are of greatest importance.
6) Personality factors, including higher-order needs and motives, attitudes and a degree of self-awareness, particularly a sense of illness, are decisive when evaluating a disabled person’s driving ability. Personality is also the main source of resources for compensating for lower-order physical deficiencies and restrictions. From work with the conceptual model we drew the following methodological conclusions: First, the driver has to be considered as a holistic subject of the activity, a multilevel hierarchically organised system of an organism, a temperament, an individuality and a personality, where the personality is the leading subsystem from the standpoint of safety. Second, driving, as a human form of sociopractical activity, is also a hierarchically organised dynamic system. Third, an evaluation of driving ability is a question of matching these two hierarchically organised structures: a subject of an activity and a proper activity. Fourth, an evaluation has to be person-centred, not disease-, function- or method-centred. On the basis of our study, a multidisciplinary team (practitioner, driving school teacher, psychologist, occupational therapist) is recommended for use in demanding driver evaluations. What is primary in driver evaluations is a coherent conceptual model, while concrete evaluation methods may vary. However, the on-road test must always be performed if possible.
Abstract:
[Excerpt] In response to the longstanding and repeated criticisms that HR does not add value to organizations, the past 10 years have seen a burgeoning of research attempting to demonstrate that progressive HR practices result in higher organizational performance. Huselid’s (1995) groundbreaking study demonstrated that a set of HR practices he referred to as High Performance Work Systems (HPWS) were related to accounting profits and market value of firms. Since then, a number of studies have shown similar positive relationships between HR practices and various measures of firm performance. While the studies comprising what I refer to as “first generation SHRM research” have added to what is becoming a more convincing body of evidence of the positive relationship between HR and performance, this body tends to lack sufficient data to demonstrate that the relationship is actually causal in the sense that HR practices, when instituted, lead to higher performance. The next generation of SHRM research will begin (and, in fact, has begun) to focus on designing more rigorous tests of the hypothesis that employing progressive HRM systems actually results in higher organizational performance. This generation of research will focus on two aspects: demonstrating the HRM value chain, and proving causality as opposed to mere covariation.