45 results for forcing
Abstract:
A BPMN model is well-structured if splits and joins are always paired into single-entry-single-exit blocks. Well-structuredness is often a desirable property, as it promotes readability and makes models easier to analyze. However, many process models found in practice are not well-structured, and it is not always feasible or even desirable to restrict process modelers to produce only well-structured models. Also, not all processes can be captured as well-structured process models. An alternative to forcing modelers to produce well-structured models is to automatically transform unstructured models into well-structured ones when needed and possible. This talk reviews existing results on the automatic transformation of unstructured process models into structured ones.
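The pairing property described above can be illustrated with a deliberately simplified sketch (all names hypothetical; checking well-structuredness of an arbitrary process graph actually requires a decomposition such as the Refined Process Structure Tree). Here a model is reduced to its sequence of split/join gateways, and pairing into single-entry-single-exit blocks is verified with a stack:

```python
def is_well_structured(gateway_trace):
    """Check that every split gateway is later closed by a matching join
    of the same type, i.e. that gateways nest into balanced blocks."""
    stack = []
    for kind, gateway_type in gateway_trace:
        if kind == "split":
            stack.append(gateway_type)
        elif kind == "join":
            if not stack or stack.pop() != gateway_type:
                return False  # join without a matching open split
    return not stack  # leftover splits mean an unclosed block

# Well-structured: an XOR block properly nested inside an AND block.
trace_ok = [("split", "AND"), ("split", "XOR"), ("join", "XOR"), ("join", "AND")]
# Unstructured: the AND join closes before the inner XOR block does.
trace_bad = [("split", "AND"), ("split", "XOR"), ("join", "AND"), ("join", "XOR")]
```

The sketch captures only the nesting aspect of the definition; real transformation approaches operate on the full control-flow graph, not on a linearised trace.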
Abstract:
This paper reviews some recent results in motion control of marine vehicles using a technique called Interconnection and Damping Assignment Passivity-based Control (IDA-PBC). This approach to motion control exploits the fact that vehicle dynamics can be described in terms of energy storage, distribution, and dissipation, and that the stable equilibrium points of mechanical systems are those at which the potential energy attains a minimum. The control forces are used to transform the closed-loop dynamics into a port-controlled Hamiltonian system with dissipation. This is achieved by shaping the energy-storing characteristics of the system, modifying its interconnection structure (how the energy is distributed), and injecting damping. The end result is that the closed-loop system presents a stable equilibrium (hopefully global) at the desired operating point. By forcing the closed-loop dynamics into a Hamiltonian form, the resulting total energy function of the system serves as a Lyapunov function that can be used to demonstrate stability. We consider the tracking and regulation of fully actuated unmanned underwater vehicles, the extension to under-actuated slender vehicles, and also manifold regulation of under-actuated surface vessels. The paper is concluded with an outlook on future research.
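In standard IDA-PBC notation (not spelled out in the abstract itself), the target closed-loop dynamics take the port-controlled Hamiltonian form

```latex
\dot{x} = \bigl(J_d(x) - R_d(x)\bigr)\,\nabla H_d(x),
\qquad J_d = -J_d^{\top}, \quad R_d = R_d^{\top} \succeq 0,
```

where $H_d$ is the shaped total energy with a minimum at the desired operating point, $J_d$ encodes the assigned interconnection structure, and $R_d$ the injected damping. Because $J_d$ is skew-symmetric, $\nabla H_d^{\top} J_d \nabla H_d = 0$, and along trajectories

```latex
\dot{H}_d = \nabla H_d^{\top}\dot{x}
          = -\,\nabla H_d^{\top} R_d\,\nabla H_d \le 0,
```

which is exactly why the shaped energy serves as the Lyapunov function mentioned in the abstract.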
Abstract:
Traditional nearest points methods use all the samples in an image set to construct a single convex or affine hull model for classification. However, strong artificial features and noisy data may be generated from combinations of training samples when significant intra-class variations and/or noise occur in the image set. Existing multi-model approaches extract local models by clustering each image set individually only once, with fixed clusters used for matching with various image sets. This may not be optimal for discrimination, as undesirable environmental conditions (e.g., illumination and pose variations) may result in the two closest clusters representing different characteristics of an object (e.g., a frontal face being compared to a non-frontal face). To address the above problem, we propose a novel approach to enhance nearest points based methods by integrating affine/convex hull classification with an adapted multi-model approach. We first extract multiple local convex hulls from a query image set via maximum margin clustering to diminish the artificial variations and constrain the noise in the local convex hulls. We then propose adaptive reference clustering (ARC) to constrain the clustering of each gallery image set by forcing the clusters to have resemblance to the clusters in the query image set. By applying ARC, noisy clusters in the query set can be discarded. Experiments on the Honda, MoBo and ETH-80 datasets show that the proposed method outperforms single-model approaches and other recent techniques, such as Sparse Approximated Nearest Points, the Mutual Subspace Method and Manifold Discriminant Analysis.
Abstract:
Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with fixed clusters used for matching with other image sets. However, this may result in the two closest clusters representing different characteristics of an object, due to differing undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to have resemblance to the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on the Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).
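The abstract does not give the Frobenius-norm subspace distance explicitly; a common choice consistent with its description is the projection Frobenius-norm metric on the Grassmann manifold, sketched below with hypothetical data (NumPy only):

```python
import numpy as np

def grassmann_projection_distance(A, B):
    """Projection F-norm distance between the subspaces spanned by the
    columns of A and B: d = (1/sqrt(2)) * ||Qa Qa^T - Qb Qb^T||_F,
    where Qa, Qb are orthonormal bases obtained via QR decomposition.
    The projector Q Q^T depends only on the subspace, not on the basis."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    return np.linalg.norm(Qa @ Qa.T - Qb @ Qb.T, "fro") / np.sqrt(2)

# Two 1-D subspaces of R^3: identical directions give distance 0,
# orthogonal directions give distance 1.
u = np.array([[1.0], [0.0], [0.0]])
v = np.array([[0.0], [1.0], [0.0]])
```

In the paper's setting the bases would come from the extracted local linear subspaces rather than from raw vectors as here.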
Abstract:
The occurrence of extreme water level events along low-lying, highly populated and/or developed coastlines can lead to devastating impacts on coastal infrastructure. It is therefore very important that the probabilities of extreme water levels are accurately evaluated, to inform flood and coastal management and future planning. The aim of this study was to provide estimates of present-day extreme total water level exceedance probabilities around the whole coastline of Australia, arising from combinations of mean sea level, astronomical tide and storm surges generated by both extra-tropical and tropical storms, but exclusive of surface gravity waves. The study was undertaken in two main stages. In the first stage, a high-resolution (~10 km along the coast) depth-averaged hydrodynamic model was configured for the whole coastline of Australia using the Danish Hydraulic Institute’s MIKE 21 modelling suite. The model was forced with astronomical tidal levels, derived from the TPXO7.2 global tidal model, and meteorological fields, from the US National Centers for Environmental Prediction’s global reanalysis, to generate a 61-year (1949 to 2009) hindcast of water levels. This model output was validated against measurements from 30 tide gauge sites around Australia with long records. At each of the model grid points located around the coast, time series of annual maxima and the several highest water levels for each year were derived from the multi-decadal water level hindcast and fitted to extreme value distributions to estimate exceedance probabilities. Stage 1 provided a reliable estimate of the present-day total water level exceedance probabilities around southern Australia, which is mainly impacted by extra-tropical storms.
However, as the meteorological fields used to force the hydrodynamic model only weakly include the effects of tropical cyclones, the resultant water level exceedance probabilities were underestimated around western, northern and north-eastern Australia at higher return periods. Even if the resolution of the meteorological forcing were adequate to represent tropical cyclone-induced surges, multi-decadal periods yield insufficient instances of tropical cyclones to enable the use of traditional extreme value extrapolation techniques. Therefore, in the second stage of the study, a statistical model of tropical cyclone tracks and central pressures was developed using historic observations. This model was then used to generate synthetic events representing 10,000 years of cyclone activity for the Australian region, with characteristics based on the observed tropical cyclones of the last ~40 years. Wind and pressure fields, derived from these synthetic events using analytical profile models, were used to drive the hydrodynamic model to predict the associated storm surge response. A random time period during the tropical cyclone season was chosen, and astronomical tidal forcing for this period was included to account for non-linear interactions between the tidal and surge components. For each model grid point around the coast, annual maximum total water levels for these synthetic events were calculated and used to estimate exceedance probabilities. The exceedance probabilities from stages 1 and 2 were then combined to provide a single estimate of present-day extreme water level probabilities around the whole coastline of Australia.
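The annual-maxima fitting step in stage 1 can be sketched as follows. The data here are synthetic stand-ins for the hindcast, and the use of the Generalised Extreme Value family via `scipy.stats.genextreme` is an assumption; the study says only that extreme value distributions were fitted:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Synthetic stand-in for 61 years of annual maximum water levels (metres).
annual_maxima = rng.gumbel(loc=1.2, scale=0.15, size=61)

# Fit a GEV distribution to the annual maxima (maximum likelihood).
shape, loc, scale = genextreme.fit(annual_maxima)

def return_level(T):
    """Water level exceeded on average once every T years:
    the quantile with annual non-exceedance probability 1 - 1/T."""
    return genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)

levels = {T: return_level(T) for T in (10, 50, 100)}
```

This is the standard annual-maxima / return-period construction; the study applies it independently at every coastal grid point, and the stage 2 synthetic-cyclone maxima feed the same machinery.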
Abstract:
A need to respond to changing legislative requirements, rising expectations from customers and shortages of suitably experienced staff are forcing non-profit organisations in the aged care sector to change. As new customer segments emerge and the existing aged care offering becomes less relevant, organisations must rethink the value they present to market, and adopt innovative strategies and approaches to care delivery in order to have a sustainable future. This paper presents a framework for unpacking a customer journey and experience, developed during a longitudinal study of a non-profit organisation redefining their core purpose and attempting to design a customer-centric business model.
Abstract:
Falling sales in Europe and increasing global competition are forcing automotive manufacturers to develop a customer-based approach to differentiate themselves from the similarly technologically-optimised crowd. In spite of this new approach, automotive firms are still firmly entrenched in their reliance upon technology-driven innovation to design, develop and manufacture their products, placing customer focus in a downstream sales role. However, the time-honoured technology-driven approach to vehicle design and manufacture is coming into question, with the increasing importance of accounting for consumer needs pushing automotive engineers to include the user in their designs. The following paper examines the challenges and opportunities that arise for a single global automotive manufacturer in seeking to adopt a user-centred approach to vehicle design amongst technical employees. As part of an embedded case study, engineers from this manufacturer were interviewed in order to gauge the challenges, barriers and opportunities for the adoption of user-centred design tools within the engineering design process. The analysis of these interviews led to the proposal of a new role within automotive manufacturers, the “designeer”, to bridge the divide between designers and engineers and allow the engineering process to transition from a technology-driven to a user-centred approach.
Abstract:
It is well known that, although a uniform magnetic field inhibits the onset of small-amplitude thermal convection in a layer of fluid heated from below, isolated convection cells may persist if the fluid motion within them is sufficiently vigorous to expel magnetic flux. Such fully nonlinear (“convecton”) solutions for magnetoconvection have been investigated by several authors. Here we explore a model amplitude equation describing this separation of a fluid layer into a vigorously convecting part and a magnetically dominated part at rest. Our analysis elucidates the origin of the scaling laws observed numerically to form the boundaries in parameter space of the region of existence of these localised states and, importantly, of the lowest thermal forcing required to sustain them.
Abstract:
A review of The Author Cat: Clemens's Life in Fiction by Forrest G. Robinson (Fordham UP, 2007). Even at its most basic, guilt forms a counterweight to the hesitancy and unpleasantness of authorship, forcing writers back to the desk when they have come to despise their work. Guilt as task-master is familiar to most, even those to whom more elevated feelings, such as inspiration, make occasional visits. It seems that guilt is effective because writing is so seldom an organic or natural activity - rather, good writing emerges out of unhappy pressures that eventually overwhelm the writer's evasive strategies, from visits to the fridge door to the most sophisticated forms they take, such as when the author creates a narrative persona that claims to have owned up...
Abstract:
- Purpose The purpose of this paper is to test a multilevel model of the main and mediating effects of supervisor conflict management style (SCMS) climate and procedural justice (PJ) climate on employee strain. It is hypothesized that the workgroup-level climate induced by SCMS can fall into four types: collaborative climate, yielding climate, forcing climate, or avoiding climate; that these group-level perceptions will have differential effects on employee strain; and that these effects will be mediated by PJ climate.
- Design/methodology/approach Multilevel SEM was used to analyze data from 420 employees nested in 61 workgroups.
- Findings Workgroups that perceived a high supervisor collaborating climate reported lower sleep disturbance, job dissatisfaction, and action-taking cognitions. Workgroups that perceived a high supervisor yielding climate or a high supervisor forcing climate reported higher anxiety/depression, sleep disturbance, job dissatisfaction, and action-taking cognitions. Results supported a PJ climate mediation model when supervisors’ behavior was reported to be collaborative or yielding.
- Research limitations/implications The cross-sectional research design places limitations on conclusions about causality; longitudinal studies are therefore recommended.
- Practical implications Supervisor behavior in response to conflict may have far-reaching effects beyond those who are a party to the conflict. More visible use of a collaborative supervisor CMS may be beneficial.
- Social implications The economic costs associated with workplace conflict may be reduced through the application of these findings.
- Originality/value By applying multilevel theory and analysis, we extend workplace conflict theory.
Abstract:
As a key element in their response to new media forcing transformations in mass media and media use, newspapers have deployed various strategies not only to establish online and mobile products and develop healthy business plans, but to set out to be dominant portals. Their response to change was the subject of an early investigation by one of the present authors (Keshvani 2000). That was part of a set of short studies inquiring into what impact new software applications and digital convergence might have on journalism practice (Tickle and Keshvani 2000), and also looking for demonstrations of the way that innovations, technologies and protocols then under development might produce a “wireless, streamlined electronic news production process” (Tickle and Keshvani 2001). The newspaper study compared the online products of The Age in Melbourne and the Straits Times in Singapore. It provided an audit of the Singapore and Australia Information and Communications Technology (ICT) climate, concentrating on the state of development of carrier networks as a determining factor in the potential strength of the two services in their respective markets. In the outcome, contrary to initial expectations, the early cable roll-out and extensive ‘wiring’ of the city in Singapore had not produced a level of uptake of Internet services as strong as that achieved in Melbourne by more ad hoc and varied strategies. By interpretation, while news websites and online content were at an early stage of development everywhere, and much the same as one another, no determining structural imbalance existed to separate these leading media participants in Australia and South-east Asia. The present research revisits that situation by again studying the online editions of the two large newspapers in the original study, and one other, The Courier Mail (included as a representative of Newscorp, now a major participant, in recognition of the diversification of types of product in this field).
The inquiry works through the principle of comparison. It is an exercise in qualitative, empirical research that establishes a comparison between the situation in 2000, as described in the earlier work, and the situation in 2014, after a decade of intense development in digital technology affecting the media industries. It is in that sense a follow-up study on the earlier work, although this time giving emphasis to the content and style of the actual products as experienced by their users. It compares the online and print editions of each of these three newspapers; then the three mastheads as print and online entities, among themselves; and finally it compares one against the other two, as representing a South-east Asian model and Australian models. This exercise is accompanied by a review of literature on the developments in ICT affecting media production and media organisations, to establish the changed context. The new study of the online editions was conducted as a systematic appraisal of the first level, or principal screens, of the three publications, over the course of six days (10-15.2.14 inclusive). For this, categories for analysis were developed through a preliminary examination of the products over three days in the week before. That process identified significant elements of media production, such as variegated sourcing of materials; randomness in the presentation of items; differential production values among the media platforms considered, whether text, video or still images; the occasional repurposing and repackaging of top news stories of the day; and the presence of standard news values – once again drawn out of the trial ‘bundle’ of journalistic items. Reduced in this way, the online artefacts become comparable with the companion print editions from the same days. The categories devised and then used in the appraisal of the online products have been adapted to print, to give the closest match of sets of variables.
This device, studying the two sets of publications on like standards -- essentially production values and news values -- has enabled the comparisons to be made. Comparing the online and print editions of each of the three publications was set up as the first step in the investigation. In recognition of the nature of the artefacts, as ones that carry very diverse information by subject and level of depth, and involve heavy creative investment in the formulation and presentation of the information, the assessment also includes an open section for interpreting and commenting on main points of comparison. This takes the form of a field for text, for the insertion of notes, in the table employed for summarising the features of each product for each day. Once the sets of comparisons outlined above are noted, the process becomes interpretative, guided by the notion of change. In the context of changing media technology and publication processes, what substantive alterations have taken place in the overall effort of news organisations in the print and online fields since 2001, and in their print and online products separately? Have they diverged or continued along similar lines? The remaining task is to begin to make inferences from that. Will the examination of findings support the proposition that a review of the earlier study, and a forensic review of new models, provides evidence of the character and content of change -- especially change in journalistic products and practice? Will it permit an authoritative description of the essentials of such change in products and practice? Will it permit generalisation, and provide a reliable base for discussion of the implications of change and future prospects? Preliminary observations suggest a more dynamic and diversified product has been developed in Singapore, well themed, and obviously sustained by public commitment and habituation to diversified online and mobile media services.
The Australian products suggest a concentrated corporate and journalistic effort and deployment of resources, with a strong market focus, but less settled and ordered, and showing signs of limitations imposed by the delay in establishing a uniform, large broadband network. The scope of the study is limited. It is intended to test, and take advantage of, the original study as evidentiary material from the early days of newspaper companies’ experimentation with online formats. Both are small studies. The key opportunity for discovery lies in the ‘time capsule’ factor: the availability of well-gathered and processed information on major newspaper company production at the threshold of a transformational decade of change in the industry. The comparison stands to identify key changes. It should also be useful as a reference for further inquiries of the same kind that might be made, and for monitoring of the situation in regard to newspaper portals online into the future.
Abstract:
- Background Nilotinib and dasatinib are now being considered as alternative treatments to imatinib as a first-line treatment of chronic myeloid leukaemia (CML). - Objective This technology assessment reviews the available evidence for the clinical effectiveness and cost-effectiveness of dasatinib, nilotinib and standard-dose imatinib for the first-line treatment of Philadelphia chromosome-positive CML. - Data sources Databases [including MEDLINE (Ovid), EMBASE, Current Controlled Trials, ClinicalTrials.gov, the US Food and Drug Administration website and the European Medicines Agency website] were searched from the search end date of the last technology appraisal report on this topic (October 2002) to September 2011. - Review methods A systematic review of clinical effectiveness and cost-effectiveness studies; a review of surrogate relationships with survival; a review and critique of manufacturer submissions; and a model-based economic analysis. - Results Two clinical trials (dasatinib vs imatinib and nilotinib vs imatinib) were included in the effectiveness review. Survival was not significantly different for dasatinib or nilotinib compared with imatinib with the 24-month follow-up data available. The rates of complete cytogenetic response (CCyR) and major molecular response (MMR) were higher for patients receiving dasatinib than for those receiving imatinib at 12 months' follow-up (CCyR 83% vs 72%, p < 0.001; MMR 46% vs 28%, p < 0.0001). The rates of CCyR and MMR were higher for patients receiving nilotinib than for those receiving imatinib at 12 months' follow-up (CCyR 80% vs 65%, p < 0.001; MMR 44% vs 22%, p < 0.0001). An indirect comparison analysis showed no difference between dasatinib and nilotinib for CCyR or MMR rates at 12 months' follow-up (CCyR, odds ratio 1.09, 95% CI 0.61 to 1.92; MMR, odds ratio 1.28, 95% CI 0.77 to 2.16).
There is observational association evidence from imatinib studies supporting the use of CCyR and MMR at 12 months as surrogates for overall all-cause survival and progression-free survival in patients with CML in chronic phase. In the cost-effectiveness modelling, scenario analyses were provided to reflect the extensive structural uncertainty and the different approaches to estimating overall survival (OS). First-line dasatinib is predicted to provide very poor value for money compared with first-line imatinib, with deterministic incremental cost-effectiveness ratios (ICERs) of between £256,000 and £450,000 per quality-adjusted life-year (QALY). Conversely, first-line nilotinib provided favourable ICERs at the willingness-to-pay threshold of £20,000-30,000 per QALY. - Limitations Immaturity of the empirical trial data relative to life expectancy, forcing either reliance on surrogate relationships or cumulative survival/treatment duration assumptions. - Conclusions From the two trials available, dasatinib and nilotinib have a statistically significant advantage compared with imatinib as measured by MMR or CCyR. Taking into account the treatment pathways for patients with CML, i.e. assuming the use of second-line nilotinib, first-line nilotinib appears to be more cost-effective than first-line imatinib. Dasatinib was not cost-effective if decision thresholds of £20,000 per QALY or £30,000 per QALY were used, compared with imatinib and nilotinib. Uncertainty in the cost-effectiveness analysis would be substantially reduced with better and more UK-specific data on the incidence and cost of stem cell transplantation in patients with chronic CML. - Funding The Health Technology Assessment Programme of the National Institute for Health Research.
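The deterministic ICER comparison above reduces to a ratio of incremental cost to incremental QALYs, judged against a willingness-to-pay threshold. A minimal sketch with purely illustrative numbers (not figures from the assessment):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY
    of the new treatment relative to the comparator."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def cost_effective(icer_value, threshold=30_000):
    """Compare against a willingness-to-pay threshold (GBP per QALY)."""
    return icer_value <= threshold

# Hypothetical figures: the new treatment costs £30,000 more and adds
# 0.5 QALYs, giving £60,000 per QALY -> not cost-effective at £30,000/QALY.
example_icer = icer(cost_new=90_000, qaly_new=8.5, cost_old=60_000, qaly_old=8.0)
```

In practice the ICER comes from a full decision model (as in this assessment), but the threshold comparison at the end is exactly this one.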
Abstract:
- Purpose This study aims to investigate the extent to which employee outcomes (anxiety/depression, bullying and workers’ compensation claims thoughts) are affected by shared perceptions of supervisor conflict management style (CMS). Further, this study aims to assess cross-level moderating effects of supervisor CMS climate on the positive association between relationship conflict and these outcomes. - Design/methodology/approach Multilevel modeling was conducted using a sample of 401 employees nested in 69 workgroups. - Findings High collaborating, low yielding and low forcing climates (positive supervisor climates) were associated with lower anxiety/depression, bullying and claim thoughts. Unexpectedly, the direction of moderation showed that the positive association between relationship conflict and anxiety/depression and bullying was stronger for positive supervisor CMS climates than for negative supervisor CMS climates (low collaborating, high yielding and high forcing). Nevertheless, these interactions revealed that positive supervisor climates were the most effective at reducing anxiety/depression and bullying when relationship conflict was low. For claim thoughts, positive supervisor CMS climates had the predicted stress-buffering effects. - Research limitations/implications Employees benefit from supervisors creating positive CMS climates when dealing with conflict as a third party, and intervening when conflict is low, when their intervention is more likely to minimize anxiety/depression and bullying. - Originality/value By considering the unique perspective of employees’ shared perceptions of supervisor CMS, important implications for the span of influence of supervisor behavior on employee well-being have been indicated.
Complimentary collaborations: Teachers and researchers co-developing best practices in art education
Abstract:
Australia is currently experiencing a huge cultural shift as it moves from State-based curricula to a national education system. The Australian State-based bodies that currently manage teacher registration, teacher education course accreditation, curriculum frameworks and syllabi are often complex organisations that hold conflicting ideologies about education and teaching. The development of a centralised system, complete with a single accreditation body and a national curriculum, can be seen as a reaction to this complexity. At the time of writing, the Australian Curriculum is being rolled out in staggered phases across the states and territories of Australia. Phase one has been implemented, introducing English, Mathematics, History and Science. Subsequent phases (Humanities and Social Sciences, the Arts, Technologies, Health and Physical Education, Languages, and year 9-10 work studies) are intended to follow. Forcing an educational shift of this magnitude is no simple task, not least because the States and Territories have demonstrated, and continue to demonstrate, varying levels of resistance to winding down their own curricula in favour of new content with its unfamiliar expectations and organisation. The full implementation process is currently far from over, and far from being fully resolved. The Federal Government has initiated a number of strategies to progress the implementation, such as the development of the Australian Institute for Teaching and School Leadership (AITSL) to aid professional educators in implementing the new curriculum. AITSL worked with professional and peak specialist bodies to develop Illustrations of Practice (hereafter IoP) for teachers to access and utilise.
This paper tells of the building of one IoP, in which a graduate teacher and a university lecturer collaborated to construct ideas and strategies for delivering visual arts lessons to early childhood students in a low socio-economic status (SES) regional setting, and discusses the experience in terms of its potential for professional learning in art education.
Abstract:
In late 2010, the online nonprofit media organization WikiLeaks published classified documents detailing correspondence between the U.S. State Department and its diplomatic missions around the world, numbering around 250,000 cables. These diplomatic cables contained classified information with comments on world leaders, foreign states, and various international and domestic issues. Negative reactions to the publication of these cables came from both the U.S. political class (which was generally condemnatory of WikiLeaks, invoking national security concerns and the jeopardizing of U.S. interests abroad) and the corporate world, with various companies ceasing to provide services to WikiLeaks despite no legal measure (e.g., a court injunction) forcing them to do so. This article focuses on the legal remedies available to WikiLeaks against this corporate suppression of its speech in the U.S. and Europe, since these are the two principal arenas in which the actors concerned operate. The transatlantic legal protection of free expression will be considered; yet, as will be explained in greater detail, the legal conception of this constitutional and fundamental right comes from a time when the state posed the greater threat to freedom. As a result, it is not generally enforceable against private, non-state entities interfering with speech and expression, which is the case here. Other areas of law, namely antitrust/competition, contract and tort, will then be examined to determine whether WikiLeaks and its partners can attempt to enforce their right indirectly through these other means. Finally, there will be some concluding thoughts about the implications of the corporate response to the WikiLeaks embassy cables leak for freedom of expression online.