919 results for Run
Abstract:
The inflammatory skin disease pyoderma gangrenosum is characterized by destructive ulceration, typically occurring on the calves and thighs and less commonly on the buttocks and face. Lesions vary in size and may be multiple, often ulcerating rapidly to form deep, painful wounds. The ulcers characteristically have ragged, overhanging purple edges. In many patients a concomitant condition can be identified, such as inflammatory bowel disease, rheumatoid arthritis, chronic autoimmune hepatitis, or various hematologic and solid tumours (1,2). Treatment of these ulcers has in the past been disappointing: the large lesions usually run a chronic course and heal very slowly under traditional dressings, often in combination with systemic steroids or immunosuppressants. Since 1998, a small number of cases have been reported of adults with pyoderma gangrenosum whose lesions healed with the use of topical tacrolimus (FK506) (2–4). We report, to the best of our knowledge, the first successful treatment of a child with pyoderma gangrenosum using topical tacrolimus.
Abstract:
The ergodic or long-run average cost control problem for a partially observed finite-state Markov chain is studied via the associated fully observed separated control problem for the nonlinear filter. Dynamic programming equations for the latter are derived, leading to existence and characterization of optimal stationary policies.
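As a schematic illustration of the kind of equation involved (notation assumed here for exposition, not quoted from the paper), the dynamic programming equation for the separated problem is an average-cost optimality equation posed on the belief simplex:

```latex
% Average-cost optimality equation for the separated (belief-state) problem.
% Assumed notation: \pi is the filter (belief) state, u a control,
% y an observation, c(\pi,u) the expected running cost,
% T(\pi,u,y) the nonlinear filter (Bayes) update, \rho the optimal
% long-run average cost, and V a relative value function.
\rho + V(\pi) \;=\; \min_{u \in U}\Big[\, c(\pi,u)
    + \sum_{y} P(y \mid \pi, u)\, V\big(T(\pi,u,y)\big) \Big]
```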
Abstract:
In this paper, a new high-precision word sense disambiguation (WSD) approach is proposed, which not only attempts to identify the proper sense for a word but also provides a probabilistic evaluation of the identification confidence at the same time. A novel Instance Knowledge Network (IKN) is built to generate and maintain semantic knowledge at the word, type synonym set and instance levels. Related algorithms based on graph matching are developed to train IKN with probabilistic knowledge and to use IKN for probabilistic word sense disambiguation. Based on the Senseval-3 all-words task, we run extensive experiments to show the performance enhancements in different precision ranges and the soundness of probability-based automatic confidence evaluation of disambiguation. We combine our WSD algorithm with each of the five best WSD algorithms from the Senseval-3 all-words task. The results show that every combined algorithm outperforms the corresponding original algorithm.
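As a minimal sketch of how a confidence-aware WSD system can be combined with another algorithm (hypothetical interfaces and sense keys; the paper's actual combination scheme may differ), one can fall back to the second system whenever the probabilistic confidence is low:

```python
# Hypothetical interfaces, not the paper's IKN code: combine a
# confidence-aware WSD system with a baseline by trusting the
# probabilistic prediction only above a confidence threshold.

def combine(ikn_output, baseline_output, threshold=0.8):
    """ikn_output maps word instances to (sense, confidence);
    baseline_output maps word instances to a sense prediction."""
    combined = {}
    for instance, (sense, confidence) in ikn_output.items():
        if confidence >= threshold:
            combined[instance] = sense            # high-confidence call
        else:
            combined[instance] = baseline_output[instance]
    return combined

# Example with made-up instances and WordNet-style sense keys.
ikn = {"bank.n@s1": ("bank%1:14:00::", 0.93),
       "run.v@s2": ("run%2:38:00::", 0.41)}
base = {"bank.n@s1": "bank%1:17:00::",
        "run.v@s2": "run%2:30:01::"}
print(combine(ikn, base))
```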
Abstract:
Using an OLG model with endogenous growth and public capital, we show that international capital tax competition leads to inefficiently low tax rates and, as a consequence, to lower welfare levels and growth rates. Each national government has an incentive to reduce its capital income tax rate in an effort to increase the domestic private capital stock, domestic income and domestic economic growth. This effort pays off as long as only one country applies the policy. However, if all countries follow this path, then all of them are made worse off in the long run.
Abstract:
This paper reports results from a qualitative evaluation of a compulsory pre-Learner driver education program within the Australian Capital Territory (ACT), Australia. Two methods were used to obtain feedback from those involved in the delivery of the program as well as those who participated in it. The first, semi-structured interviews, was undertaken with classroom teachers who run the program in their schools, group facilitators running the program with more mature-age students at private facilities (n = 15 in total), and former participants in both school-based and private-based versions of the program (n = 19). The second method used an online survey of students (n = 79). Results from both methods were consistent with each other, indicating that the strengths of the program were perceived as being its interactive components and the high level of engagement of the target audience. There was strong support from young and mature-age students for the program to remain compulsory. However, consistent with other findings on novice driver education, mature-age participants indicated that the program was less relevant to them. To have greater relevance to mature-age learners, content could address and challenge perceptions about behaviours other than intentional high-risk behaviours (e.g. low-level speeding, fatigue) as well as encourage planning/strategies to avoid them. While a longer term, outcome-focussed evaluation of the pre-learner education program is needed, this study suggests that the program is well received by pre-licence drivers and that teachers and facilitators perceive it as both effective and beneficial.
Abstract:
We believe the Babcock-Leighton process of poloidal field generation to be the main source of irregularity in the solar cycle. The random nature of this process may make the poloidal field in one hemisphere stronger than that in the other hemisphere at the end of a cycle. We expect this to induce an asymmetry in the next sunspot cycle. We look for evidence of this in the observational data and then model it theoretically with our dynamo code. Since actual polar field measurements exist only from the 1970s, we use the polar faculae number data recorded by Sheeley (1991, 2008) as a proxy of the polar field and estimate the hemispheric asymmetry of the polar field in different solar minima during the major part of the twentieth century. This asymmetry is found to have a reasonable correlation with the asymmetry of the next cycle. We then run our dynamo code by feeding information about this asymmetry at the successive minima and compare the results with observational data. We find that the theoretically computed asymmetries of different cycles compare favorably with the observational data, with the correlation coefficient being 0.73. Due to the coupling between the two hemispheres, any hemispheric asymmetry tends to get attenuated with time. The hemispheric asymmetry of a cycle, whether from observational data or from theoretical calculations, statistically tends to be less than the asymmetry in the polar field (as inferred from the faculae data) in the preceding minimum. This reduction factor turns out to be 0.43 in the observational data and 0.51 in the theoretical simulations.
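A minimal sketch of the two quantities being correlated (synthetic numbers, not the paper's data; the normalized index (N - S)/(N + S) is a common convention and is assumed here, as is using a regression slope to estimate the attenuation):

```python
# Synthetic illustration of the analysis described in the abstract:
# correlate the hemispheric asymmetry of the polar field at each minimum
# (polar faculae as proxy) with the asymmetry of the following cycle.
import numpy as np

def asymmetry(north, south):
    """Normalized hemispheric asymmetry index (N - S) / (N + S)."""
    north, south = np.asarray(north, float), np.asarray(south, float)
    return (north - south) / (north + south)

# Hypothetical per-minimum polar faculae counts and next-cycle activity sums.
polar_N = [28, 35, 22, 40, 31]; polar_S = [25, 30, 27, 33, 36]
cycle_N = [410, 520, 330, 600, 450]; cycle_S = [380, 470, 390, 540, 500]

A_polar = asymmetry(polar_N, polar_S)
A_cycle = asymmetry(cycle_N, cycle_S)
r = np.corrcoef(A_polar, A_cycle)[0, 1]        # Pearson correlation
factor = np.polyfit(A_polar, A_cycle, 1)[0]    # slope ~ reduction factor
print(f"correlation r = {r:.2f}, reduction factor = {factor:.2f}")
```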
Abstract:
Campaigning in Australian elections at local, state, and federal levels is fundamentally affected by the fact that voting is compulsory in Australia, with citizens who are found to have failed to cast their vote subject to fines. This means that, contrary to the situation in most other nations, elections are decided not by which candidate or party has managed to encourage the largest number of nominal supporters to make the effort to cast their vote, but by the some 10-20% of genuine ‘swinging voters’ who change their party preferences from one election to the next. Political campaigning is thus aimed less at existing party supporters (so-called ‘rusted-on’ voters whose continued support for the party is essentially taken for granted) than at this genuinely undecided middle of the electorate. Over the past decades, this has resulted in a comparatively timid, vague campaigning style from both major party blocs (the progressive Australian Labor Party [ALP] and the conservative Coalition of the Liberal and National Parties [L/NP]). Election commitments that run the risk of being seen as too partisan or ideological are avoided because they could scare away swinging voters, and recent elections have been fought as much (or more) on party leaders’ perceived personas as on stated policies, even though Australia uses a parliamentary system in which the Prime Minister and state Premiers are elected by their party room rather than directly by voters. At the same time, this perceived lack of policy distinctiveness between the major parties has also enabled the emergence of new, smaller parties which (under Australia’s Westminster-derived political system) have no hope of gaining a parliamentary majority but could, in a close election, come to hold the balance of power and thus exert disproportionate influence on a government which relies on their support.
Abstract:
The feasibility of different modern analytical techniques for the mass spectrometric detection of anabolic androgenic steroids (AAS) in human urine was examined in order to enhance the prevalent analytics and to find reasonable strategies for effective sports drug testing. A comparative study of the sensitivity and specificity of gas chromatography (GC) combined with low (LRMS) and high resolution mass spectrometry (HRMS) in screening of AAS was carried out with four metabolites of methandienone. Measurements were done in selected ion monitoring mode, with HRMS using a mass resolution of 5000. With HRMS the detection limits were considerably lower than with LRMS, enabling detection of steroids at levels as low as 0.2-0.5 ng/ml. Even with HRMS, however, the biological background hampered the detection of some steroids. The applicability of liquid-phase microextraction (LPME) was studied with metabolites of fluoxymesterone, 4-chlorodehydromethyltestosterone, stanozolol and danazol. Factors affecting the extraction process were studied, and a novel LPME method with in-fiber silylation was developed and validated for GC/MS analysis of the danazol metabolite. The method allowed precise, selective and sensitive analysis of the metabolite and enabled simultaneous filtration, extraction, enrichment and derivatization of the analyte from urine without any further sample preparation steps. Liquid chromatographic/tandem mass spectrometric (LC/MS/MS) methods utilizing electrospray ionization (ESI), atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) were developed and applied for detection of oxandrolone and metabolites of stanozolol and 4-chlorodehydromethyltestosterone in urine. All methods exhibited high sensitivity and specificity. ESI showed the best applicability, however, and a LC/ESI-MS/MS method for routine screening of nine 17-alkyl-substituted AAS was therefore developed, enabling fast and precise measurement of all analytes with detection limits below 2 ng/ml. The potential of chemometrics to resolve complex GC/MS data was demonstrated with samples prepared for AAS screening. Acquired full scan spectral data (m/z 40-700) were processed by the OSCAR algorithm (Optimization by Stepwise Constraints of Alternating Regression). The deconvolution process was able to extract more than twice as many components from a GC/MS run as there were visible chromatographic peaks. Severely overlapping components, as well as components hidden in the chromatographic background, could be isolated successfully. All studied techniques proved to be useful analytical tools for improving the detection of AAS in urine. The superiority of any one procedure is, however, compound-dependent, and the different techniques complement each other.
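As a generic illustration of deconvolution by alternating regression (a simplified bilinear ALS sketch with assumed dimensions and synthetic data, not the OSCAR algorithm itself), the full-scan data matrix is factored into elution profiles and component spectra:

```python
# The data matrix X (scans x m/z channels) is modeled as X ~ C @ S.T,
# where C holds elution profiles and S holds component mass spectra;
# each factor is re-estimated in turn by least squares.
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_mz, n_comp = 120, 60, 3
C_true = np.clip(rng.normal(size=(n_scans, n_comp)), 0, None)
S_true = np.clip(rng.normal(size=(n_mz, n_comp)), 0, None)
X = C_true @ S_true.T + 0.01 * rng.standard_normal((n_scans, n_mz))

C = rng.random((n_scans, n_comp))            # random initial profiles
for _ in range(200):
    # Alternate: spectra given profiles, then profiles given spectra,
    # clipping at zero as a simple non-negativity constraint.
    S = np.clip(np.linalg.lstsq(C, X, rcond=None)[0].T, 0, None)
    C = np.clip(np.linalg.lstsq(S, X.T, rcond=None)[0].T, 0, None)

residual = np.linalg.norm(X - C @ S.T) / np.linalg.norm(X)
print(f"relative residual after ALS: {residual:.4f}")
```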
Abstract:
The literature contains many examples of digital procedures for the analytical treatment of electroencephalograms, but there is as yet no standard by which those techniques may be judged or compared. This paper proposes one method of generating an EEG, based on a computer program for Zetterberg's simulation. It is assumed that the statistical properties of an EEG may be represented by stationary processes having rational transfer functions, achieved by a system of software filters and random number generators. The model represents neither the neurological mechanism responsible for generating the EEG, nor any particular type of EEG record; transient phenomena such as spikes, sharp waves and alpha bursts are also excluded. The basis of the program is a valid ‘partial’ statistical description of the EEG; that description is then used to produce a digital representation of a signal which, if plotted sequentially, might or might not by chance resemble an EEG; that is unimportant. What is important is that the statistical properties of the series remain those of a real EEG; it is in this sense that the output is a simulation of the EEG. There is considerable flexibility in the form of the output, i.e. its alpha, beta and delta content, which may be selected by the user, the same selected parameters always producing the same statistical output. The filtered outputs from the random number sequences may be scaled to provide realistic power distributions in the accepted EEG frequency bands and then summed to create a digital output signal, the ‘stationary EEG’. It is suggested that the simulator might act as a test input to digital analytical techniques for the EEG, enabling at least a substantial part of those techniques to be compared and assessed in an objective manner. The equations necessary to implement the model are given. The program has been run on a DEC1090 computer but is suitable for any microcomputer having more than 32 kBytes of memory; the execution time required to generate a 25 s simulated EEG is in the region of 15 s.
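A minimal sketch of the general principle (assumed band weights and filter orders; this is not Zetterberg's model itself): band-filtered Gaussian noise streams are scaled to the desired band powers and summed, with a fixed seed so that the same parameters always reproduce the same statistical output:

```python
# Simulate a "stationary EEG" by summing band-limited filtered noise.
import numpy as np
from scipy.signal import butter, sosfilt

fs, seconds = 256, 25
rng = np.random.default_rng(42)   # fixed seed: same parameters, same output

bands   = {"delta": (0.5, 4.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}
weights = {"delta": 1.0, "alpha": 2.5, "beta": 0.8}   # user-selected band mix

eeg = np.zeros(fs * seconds)
for name, (lo, hi) in bands.items():
    noise = rng.standard_normal(fs * seconds)          # random number stream
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    component = sosfilt(sos, noise)                    # rational transfer function
    component *= weights[name] / component.std()       # scale band power
    eeg += component                                   # summed 'stationary EEG'

print(f"generated {seconds} s of simulated EEG at {fs} Hz: {eeg[:4]}")
```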
Abstract:
A new method of specifying the syntax of programming languages, known as hierarchical language specifications (HLS), is proposed. Efficient parallel algorithms for parsing languages generated by HLS are presented. These algorithms run on an exclusive-read exclusive-write parallel random-access machine. They require O(n) processors and O(log^2 n) time, where n is the length of the string to be parsed. The most important feature of these algorithms is that they do not use a stack.
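The flavor of such stack-free parallel parsing primitives can be illustrated (a generic example, not the HLS algorithms of the paper) with bracket nesting depths computed by a logarithmic-depth parallel prefix sum rather than a sequential stack; each loop iteration below stands in for one data-parallel round, simulated here with numpy vector operations:

```python
# Nesting depths of a bracket string via a Hillis-Steele inclusive scan:
# O(log n) rounds of whole-vector additions, no stack required.
import numpy as np

def parallel_prefix_depths(s):
    x = np.array([1 if c == "(" else -1 for c in s])
    shift = 1
    while shift < len(x):                 # O(log n) iterations
        shifted = np.concatenate([np.zeros(shift, dtype=int), x[:-shift]])
        x = x + shifted                   # one parallel step per round
        shift *= 2
    return x                              # depth after each character

s = "(()(()))"
depths = parallel_prefix_depths(s)
ok = depths[-1] == 0 and depths.min() >= 0
print(depths, "balanced" if ok else "unbalanced")
```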
Abstract:
This is the fourth TAProViz workshop, run at the 13th International Conference on Business Process Management (BPM). The intention this year is to consolidate the results of the previous successful workshops by further developing this important topic and identifying the key research topics of interest to the BPM visualization community. Towards this goal, the workshop topics were extended to human-computer interaction and related domains. Submitted papers were evaluated by at least three program committee members, in a double-blind manner, on the basis of significance, originality, technical quality and exposition. Three full papers and one position paper were accepted for presentation at the workshop. In addition, we invited a keynote speaker, Jakob Pinggera, a postdoctoral researcher at the Business Process Management Research Cluster at the University of Innsbruck, Austria.
Abstract:
This study examines the transformation of the society of estates in the Finnish Grand Duchy through a case study of Senator Lennart Gripenberg and his family circle. While national borders and state structures changed, the connections between old ruling elite families remained intact in the form of invisible family networks, ownership relations, economic collaboration and the power of military families. These were the cornerstones of trust, which helped to strengthen positions gained in society. These connections also often had a central if imperceptible impact on social development and modernization. Broadly speaking, intergenerational social reproduction made it possible for this network of connections to remain in power and, as an imperceptible factor, also influenced short-term developments in the long run. Decisions which in the short term appeared unproductive would in the long run produce cumulative immaterial and material capital across generations, as long-term investments. Social mobility, then, is a process which clearly takes several generations to become manifest. The study explores long-term strategies of reproducing and transferring the capital accumulated in multinational elite networks, and asks what the relationship of these strategies was to social change. For the representatives of the military estate, the nobility, and for those men of the highest estates who had benefited from military training, this very education of a technical-military nature was the key to steering, controlling and dealing with the challenges following the industrial breakthrough. The disintegration of the society of estates and rising educational standards also increased the influence of those professionals previously excluded, which served to intensify competition for positions of power. The family connections highlighted in this study overlapped in many ways, working side by side and in tandem to manage economic and political life in Finland, Russia and Sweden. The analysis of these ties has opened up a new angle on economic co-operation, as seen, for example, in the position of such family networks not only in Finnish but also in Swedish and Russian corporations, and in the long historical background of that collaboration. It also highlights in a new way the role of women in transferring cumulative social capital and acting as silent business partners. The marriage strategies evident in business life clearly had an impact on economic life. The collaborative networks which transcended generations, national boundaries and structures also expose, as far as the elites are concerned, serious problems in comparative studies conducted from purely national premises. As the same influential families and persons in effect held several leading positions in society, the line between public and invisible uses of power would blur. The power networks thus aimed to build monopolies to secure their key positions at the helm. This study therefore examines the roles of Lennart Gripenberg (senator, business executive, superintendent of the Department of Industry, factory inspector, and founding member of industrial interest groups) as part of the reproduction strategies of the elite. The family and other networks of the powerful leaders of society, distinguished by social, economic and cultural capital, provided a solid backdrop for the so-called old elites in their quest for strategies for reproducing power in a changing world. Crucially, it was easier for the elites to gain the expertise to steer the modernization process and thereby secure for the next generation a leading position in society, something in which they had traditionally had the greatest interest.
Abstract:
In recent years many sorghum producers in the more marginal (<600 mm annual rainfall) cropping areas of Qld and northern NSW have utilised skip row configurations in an attempt to improve yield reliability and reduce sorghum production risk. But will this work in the long run? What are the trade-offs between productivity and risk of crop failure? This paper describes a modelling and simulation approach to study the long-term effects of skip row configurations. Detailed measurements of light interception and water extraction from sorghum crops grown in solid, single and double skip row configurations were collected from three on-farm participatory research trials established in southern Qld and northern NSW. These measurements resulted in changes to the model that accounted for the elliptical water uptake pattern below the crop row and reduced total light interception associated with the leaf area reduction of the skip configuration. Following validation of the model, long-term simulation runs using historical weather data were used to determine the value of skip row sorghum production as a means of maintaining yield reliability in the dryland cropping regions of southern Qld and northern NSW.
Abstract:
Attention is directed at land application of piggery effluent (containing urine, faeces, water, and wasted feed) as a potential source of water resource contamination with phosphorus (P). This paper summarises P-related properties of soil from 0-0.05 m depth at 11 piggery effluent application sites, in order to explore the impact that effluent application has had on the potential for run-off transport of P. The sites investigated were situated on Alfisol, Mollisol, Vertisol, and Spodosol soils in areas that received effluent for 1.5-30 years (estimated effluent-P applications of 100-310000 kg P/ha in total). Total (PT), bicarbonate-extractable (PB), and soluble P forms were determined for the soil (0-0.05 m) at paired effluent and no-effluent sites, as well as texture, oxalate-extractable Fe and Al, organic carbon, and pH. All forms of soil P at 0-0.05 m depth increased with effluent application (PB at effluent sites was 1.7-15 times that at no-effluent sites) at 10 of the 11 sites. Increases in PB were strongly related to net P applications (regression analysis of log values for the 7 sites with complete data sets: 82.6% of variance accounted for, p < 0.01). Effluent irrigation tended to increase the proportion of soil PT in dilute CaCl2-extractable forms (PTC: effluent average 2.0%; no-effluent average 0.6%). The proportion of PTC in non-molybdate-reactive forms (centrifuged supernatant) decreased (no-effluent average 46.4%; effluent average 13.7%). Anaerobic lagoon effluent did not reliably acidify soil, since no consistent relationship was observed for pH with effluent application. Soil organic carbon was increased in most of the effluent areas relative to the no-effluent areas; the four effluent areas where organic carbon was reduced had undergone intensive cultivation and cropping. Current effluent management at many of the piggeries failed to maximise the potential for waste P recapture. Ten of the case-study effluent application areas have received effluent-P in excess of crop uptake. While this may not represent a significant risk of leaching where sorption retains P, it has increased the risk of transport of P by run-off. Where such sites are close to surface water, run-off P loads should be managed.
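A minimal sketch of the reported regression (synthetic numbers, not the study's data) showing how log-transformed bicarbonate-extractable P can be regressed on log-transformed net effluent-P application across sites:

```python
# Log-log regression of soil PB against net effluent-P applied; the study
# reports 82.6% of variance accounted for over the 7 complete-data sites.
import numpy as np

net_P = np.array([100, 450, 1200, 8000, 31000, 90000, 310000])  # kg P/ha (hypothetical)
PB    = np.array([35, 60, 95, 210, 380, 640, 900])              # mg P/kg (hypothetical)

slope, intercept = np.polyfit(np.log10(net_P), np.log10(PB), 1)
pred = slope * np.log10(net_P) + intercept
ss_res = np.sum((np.log10(PB) - pred) ** 2)
ss_tot = np.sum((np.log10(PB) - np.log10(PB).mean()) ** 2)
print(f"log-log slope = {slope:.2f}, R^2 = {1 - ss_res / ss_tot:.3f}")
```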
Abstract:
This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results reflect uncertainty in the final model and report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with Zswitch, a computationally lightweight method developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
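A minimal sketch of the overfitting idea (illustrative Python with assumed priors, synthetic data and a plain Gibbs sampler; not the Zmix package, and without the prior parallel tempering the paper adds for mixing): fit more components than plausibly needed under a sparse symmetric Dirichlet prior, so that superfluous components empty out and the number of occupied components estimates the truth:

```python
# Overfitted univariate Gaussian mixture with a sparse Dirichlet prior.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 250), rng.normal(3, 1, 250)])
n, K = len(x), 10                    # deliberately overfitted: K = 10
alpha = 0.01                         # sparse Dirichlet prior on the weights

mu = rng.normal(0, 3, K)             # initial component means
w = np.full(K, 1.0 / K)              # initial weights

for it in range(500):
    # 1) Sample allocations z given weights and means (unit variance assumed).
    logp = np.log(w + 1e-300) - 0.5 * (x[:, None] - mu[None, :]) ** 2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=pi) for pi in p])
    # 2) Sample weights from Dirichlet(alpha + counts): with small alpha,
    #    empty components receive weights approaching zero.
    counts = np.bincount(z, minlength=K)
    w = rng.dirichlet(alpha + counts)
    # 3) Sample means given allocations, under a N(0, 10^2) prior.
    for k in range(K):
        var = 1.0 / (counts[k] + 0.01)
        mu[k] = rng.normal(var * x[z == k].sum(), np.sqrt(var))

print("occupied components:", np.sum(np.bincount(z, minlength=K) > 0))
```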