96 results for Unified Formulation
Abstract:
We present a method for topological SLAM that specifically targets loop closing for edge-ordered graphs. Instead of using a heuristic approach to accept or reject loop closing, we propose a probabilistically grounded multi-hypothesis technique that relies on the incremental construction of a map/state hypothesis tree. Loop closing is introduced automatically within the tree expansion, and likely hypotheses are chosen based on their posterior probability after a sequence of sensor measurements. Careful pruning of the hypothesis tree keeps the growing number of hypotheses under control and a recursive formulation reduces storage and computational costs. Experiments are used to validate the approach.
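As a rough illustration of the hypothesis-tree mechanics described above (not the paper's actual implementation), the Python sketch below branches a hypothesis on each candidate loop-closure decision, accumulates a log-posterior per branch, and prunes to the most probable hypotheses; the Hypothesis fields and the log_likelihood callback are hypothetical placeholders.

import heapq
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One node of the map/state hypothesis tree (illustrative only)."""
    log_posterior: float          # accumulated log P(hypothesis | measurements)
    closures: tuple = ()          # loop-closure decisions taken so far
    parent: "Hypothesis" = None   # link to the parent node (recursive structure)

def expand(hyp, candidate_closures, log_likelihood):
    """Branch the tree: one child per possible loop-closure decision,
    including 'no closure', each weighted by its measurement log-likelihood."""
    children = []
    for closure in [None] + list(candidate_closures):
        children.append(Hypothesis(
            log_posterior=hyp.log_posterior + log_likelihood(hyp, closure),
            closures=hyp.closures + ((closure,) if closure is not None else ()),
            parent=hyp,
        ))
    return children

def prune(hypotheses, max_kept=50):
    """Keep only the most probable hypotheses so the tree stays tractable."""
    return heapq.nlargest(max_kept, hypotheses, key=lambda h: h.log_posterior)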
Abstract:
The service-orientation paradigm has not only become prevalent in the software systems domain in recent years, but is also increasingly applied on the business level to restructure organisational capabilities. In this paper, we present the results of an extensive literature review of 30 approaches related to service identification and analysis for both domains. Based on the consolidation of a superset of comparison criteria for service-oriented methodologies found in related literature, we compare and evaluate the different characteristics of service engineering methods with a focus on service analysis. Although a close business and IT alignment is regarded as one of the core beneficial promises of service-orientation, our analysis suggests that there is a lack of a unified, comprehensive methodology for service identification and analysis that integrates and addresses both domains. Thus, we discuss how our results can inform directions for future research in this area.
Abstract:
Objectives
The objectives of this project were two-fold:
• Assess the ease with which current architectural CAD systems supported the use of parametric descriptions in defining building shape, engineering system performance and cost at the early stages of building design;
• Assess the feasibility of implementing a software decision support system that allowed designers to trade off the characteristics and configuration of various engineering systems to move towards a “global optimum” rather than considering each system in isolation and expecting humans to weigh up all of the costs and benefits.
The first stage of the project consisted of using four different CAD systems to define building shells (envelopes) with different usages. These models were then exported into a shared database using the IFC information exchange specifications. The second stage involved the implementation of small computer programs that were able to estimate relevant system parameters based on performance requirements and the constraints imposed by the other systems. These are presented in a unified user interface that extracts the appropriate building shape parameters from the shared database. Note that the term parametric in this context refers to the relationships among and between all elements of the building model - not just geometric associations - which will enable the desired coordination.
Abstract:
Reinforced concrete structures are susceptible to a variety of deterioration mechanisms due to creep and shrinkage, alkali-silica reaction (ASR), carbonation, and corrosion of the reinforcement. These deterioration problems can affect the integrity and load-carrying capacity of the structure. Substantial research has been dedicated to these various mechanisms, aiming to identify the causes, reactions, accelerants, retardants and consequences. This has improved our understanding of the long-term behaviour of reinforced concrete structures. However, the strengthening of reinforced concrete structures for durability has to date been mainly undertaken after expert assessment of field data, followed by the development of a scheme both to terminate continuing degradation, by separating the structure from the environment, and to strengthen the structure. The process does not include any significant consideration of the residual load-bearing capacity of the structure and the highly variable nature of estimates of such remaining capacity. Development of performance curves for deteriorating bridge structures has not been attempted due to the difficulty in developing a model when the input parameters have an extremely large variability. This paper presents a framework developed for an asset management system which assesses residual capacity and identifies the most appropriate rehabilitation method for a given reinforced concrete structure exposed to aggressive environments. In developing the framework, several industry consultation sessions have been conducted to identify the input data required, the research methodology and the output knowledge base. Capturing expert opinion in a usable knowledge base requires development of a rule-based formulation, which can subsequently be used to model the reliability of the performance curve of a reinforced concrete structure exposed to a given environment.
Abstract:
This paper introduces fast algorithms for performing group operations on twisted Edwards curves, pushing the recent speed limits of Elliptic Curve Cryptography (ECC) forward in a wide range of applications. Notably, for suitably selected curve constants the new addition algorithm requires fewer field operations than the fastest point addition algorithms for (twisted) Edwards curves previously stated in the literature. It is also shown that the new addition algorithm can be implemented with four processors, which implies an effective speed increase by the full factor of 4 over the sequential case. Our results allow faster implementation of elliptic curve scalar multiplication. In addition, the new point addition algorithm can be used to provide a natural protection from side channel attacks based on simple power analysis (SPA).
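For context, the sketch below implements the standard unified affine addition law on a twisted Edwards curve a·x² + y² = 1 + d·x²·y² over GF(p); it only illustrates the group law referenced in the abstract, whereas the paper's speed-ups come from inversion-free (projective/extended) coordinate formulas that this sketch does not reproduce.

def tw_edwards_add(P, Q, a, d, p):
    """Unified affine addition on a*x^2 + y^2 = 1 + d*x^2*y^2 over GF(p).
    'Unified' means the same formula also handles doubling (P == Q)."""
    x1, y1 = P
    x2, y2 = Q
    t = (d * x1 * x2 * y1 * y2) % p
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, -1, p) % p
    y3 = (y1 * y2 - a * x1 * x2) * pow(1 - t, -1, p) % p
    return (x3, y3)

# Tiny toy check (p = 13, a = 1, d = 2): the identity (0, 1) added to itself.
assert tw_edwards_add((0, 1), (0, 1), a=1, d=2, p=13) == (0, 1)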
Abstract:
This paper improves implementation techniques for Elliptic Curve Cryptography. We introduce new formulae and algorithms for the group law on Jacobi quartic, Jacobi intersection, Edwards, and Hessian curves. The proposed formulae and algorithms can save time in suitable point representations. To support our claims, a cost comparison is made with classic scalar multiplication algorithms using previous and current operation counts. Most notably, the best speeds are obtained from Jacobi quartic curves, which provide the fastest timings for most scalar multiplication strategies, benefiting from the proposed 2M + 5S + 1D point doubling and 7M + 3S + 1D point addition algorithms. Furthermore, the new addition algorithm provides an efficient way to protect against side channel attacks which are based on simple power analysis (SPA). Keywords: Efficient elliptic curve arithmetic, unified addition, side channel attack.
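To make the quoted operation counts concrete, here is a minimal back-of-the-envelope cost model for plain double-and-add scalar multiplication expressed in field-multiplication equivalents; the S ≈ 0.8M and D ≈ 0 weightings and the 256-bit scalar are common illustrative assumptions, not figures from the paper.

def scalar_mult_cost(bits, dbl, add, s_ratio=0.8, d_ratio=0.0):
    """Approximate cost of left-to-right double-and-add: about 'bits' doublings
    and 'bits'/2 additions for a random scalar.  dbl and add are (M, S, D)
    operation-count triples, weighted relative to a field multiplication M."""
    def weight(c):
        return c[0] + s_ratio * c[1] + d_ratio * c[2]
    return bits * weight(dbl) + (bits / 2) * weight(add)

# Jacobi quartic counts quoted above: 2M + 5S + 1D doubling, 7M + 3S + 1D addition.
print(scalar_mult_cost(256, dbl=(2, 5, 1), add=(7, 3, 1)))  # ~2739 M-equivalents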
Abstract:
This chapter describes how the YAWL meta-model was extended to support the definition of variation points. These variation points can be used to describe different variants of a YAWL process model in a unified, configurable model. The model can then be configured to suit the needs of specific settings, e.g. for a new organization or project.
Abstract:
This paper reviews the main developments in approaches to modelling urban public transit users’ route choice behaviour from the 1960s to the present. The approaches reviewed include the early heuristic studies on finding the least-cost transit route and all-or-nothing transit assignment, the bus common line problem and corresponding network representation methods, the disaggregate discrete choice models which are based on random utility maximization assumptions, the deterministic user equilibrium and stochastic user equilibrium transit assignment models, and the recent dynamic transit assignment models using either frequency-based or schedule-based network formulations. In addition to reviewing past outcomes, this paper also gives an outlook on possible future directions for modelling transit users’ route choice behaviour. Based on a comparison with the development of models for motorists’ route choice and traffic assignment problems on urban road networks, this paper points out that it is rewarding for transit route choice research to draw inspiration from the intellectual outcomes in the road area. Particularly, in light of the recent advances in modelling motorists’ complex road route choice behaviour, this paper advocates that the modelling practice of transit users’ route choice should further explore the complexities of the problem.
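As a minimal illustration of the random-utility (discrete choice) models the review covers, the sketch below computes multinomial logit route shares from generalised route costs; the cost values and the dispersion parameter theta are invented for the example.

import math

def logit_route_shares(costs, theta=1.0):
    """Multinomial logit: P(route k) = exp(-theta*c_k) / sum_j exp(-theta*c_j),
    where c_k is a generalised cost (e.g. in-vehicle + waiting + transfer time)."""
    weights = [math.exp(-theta * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Three competing transit routes with generalised costs of 30, 35 and 40 minutes:
print(logit_route_shares([30, 35, 40], theta=0.2))  # roughly [0.67, 0.24, 0.09]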
Abstract:
There has been increasing reliance on mechanical heating, ventilation and air-conditioning (HVAC) systems to achieve thermal comfort in office buildings. The use of universal standards for thermal comfort adopted in air-conditioned spaces often results in a large disparity between mean daily external summer temperatures and temperatures experienced indoors. The extensive overuse of air-conditioning in warm climates not only isolates us from the vagaries of the external environment, but is generally dependent on non-renewable energy. A pilot study conducted at the Queensland University of Technology (QUT) involved altering the thermostat set-points to two or three degrees above the normal summer setting in two air-conditioned buildings during the subtropical summer. This paper presents the findings of the research that led to the formulation of the test study. The findings of the test study are printed in the companion paper DES 72: Adjusting Building Thermostats for Environmental Gains – a Pilot Study.
Abstract:
The Resource Based View (RBV) of strategic management has been criticized for relying on inconsistent assumptions of rationality, and mutually inconsistent underlying hypotheses. In this paper, I outline how these critiques can be addressed by re-building RBV on a sense-making foundation. The core notions from sense-making of bounded cognition, retrospective sense-making, incrementalism, loose coupling, causal maps and organizational paradigm are introduced. These are then used to propose a re-construction of key RBV constructs, extending some conceptual discussions, and providing for a conceptually consistent formulation. Implications for the use of RBV as a theory and future research are discussed.
Abstract:
This report focuses on risk-assessment practices in the private rental market, with particular consideration of their impact on low-income renters. It is based on the fieldwork undertaken in the second stage of the research process that followed completion of the Positioning Paper. The key research questions this study addressed were: What are the various factors included in ‘risk-assessments’ by real estate agents in allocating ‘affordable’ tenancies? How are these risks quantified and managed? What are the key outcomes of their decision-making? The study builds on previous research demonstrating that a relatively large proportion of low-cost private rental accommodation is occupied by moderate- to high-income households (Wulff and Yates 2001; Seelig 2001; Yates et al. 2004). This is occurring in an environment where the private rental sector is now the de facto main provider of rental housing for lower-income households across Australia (Seelig et al. 2005) and where a number of factors are implicated in patterns of ‘income–rent mismatching’. These include ongoing shifts in public housing assistance; issues concerning eligibility for rent assistance; ‘supply’ factors, such as loss of low-cost rental stock through upgrading and/or transfer to owner-occupied housing; patterns of supply and demand driven largely by middle- to high-income owner-investors and renters; and patterns of housing need among low-income households for whom affordable housing is not appropriate.
In formulating a way of approaching the analysis of ‘risk-assessment’ in rental housing management, this study has applied three sociological perspectives on risk: Beck’s (1992) formulation of risk society as entailing processes of ‘individualisation’; a socio-cultural perspective which emphasises the situated nature of perceptions of risk; and a perspective which has drawn attention to different modes of institutional governance of subjects, as ‘carriers of specific indicators of risk’. The private rental market was viewed as a social institution, and the research strategy was informed by ‘institutional ethnography’ as a method of enquiry. The study was based on interviews with property managers, real estate industry representatives, tenant advocates and community housing providers. The primary focus of inquiry was on ‘the moment of allocation’. Six local areas across metropolitan and regional Queensland, New South Wales, and South Australia were selected as case study localities.
In terms of the main findings, it is evident that access to private rental housing is not just a matter of ‘supply and demand’. It is also about assessment of risk among applicants. Risk – perceived or actual – is thus a critical factor in deciding who gets housed, and how. Risk and its assessment matter in the context of housing provision and in the development of policy responses. The outcomes from this study also highlight a number of salient points:
1. There are two principal forms of risk associated with property management: financial risk and risk of litigation.
2. Certain tenant characteristics and/or circumstances – ability to pay and ability to care for the rented property – are the main factors focused on in assessing risk among applicants for rental housing. Signals of either ‘(in)ability to pay’ and/or ‘(in)ability to care for the property’ are almost always interpreted as markers of high levels of risk.
3. The processing of tenancy applications entails a complex and variable mix of formal and informal strategies of risk-assessment and allocation, where sorting (out), ranking, discriminating and handing over characterise the process.
4. In the eyes of property managers, ‘suitable’ tenants can be conceptualised as those who are resourceful, reputable, competent, strategic and presentable.
5. Property managers clearly articulated concern about risks entailed in a number of characteristics or situations. Being on a low income was the principal and overarching factor which agents considered. Others included:
- unemployment
- ‘big’ families; sole parent families
- domestic violence
- marital breakdown
- shift from home ownership to private rental
- Aboriginality and specific ethnicities
- physical incapacity
- aspects of ‘presentation’.
The financial vulnerability of applicants in these groups can be invoked, alongside expressed concerns about compromised capacities to manage income and/or ‘care for’ the property, as legitimate grounds for rejection or a lower ranking.
6. At the level of face-to-face interaction between the property manager and applicants, more intuitive assessments of risk based upon past experience or ‘gut feelings’ come into play. These judgements are interwoven with more systematic procedures of tenant selection.
The findings suggest that considerable ‘risk’ is associated with low-income status, either directly or insofar as it is associated with other forms of perceived risk, and that such risks are likely to impede access to the professionally managed private rental market. Detailed analysis suggests that opportunities for access to housing by low-income householders also arise where, for example:
- the ‘local experience’ of an agency and/or property manager works in favour of particular applicants
- applicants can demonstrate available social support and financial guarantors
- an applicant’s preference or need for longer-term rental is seen to provide a level of financial security for the landlord
- applicants are prepared to agree to specific, more stringent conditions for inspection of properties and review of contracts
- the particular circumstances and motivations of landlords lead them to consider a wider range of applicants
- in particular circumstances, property managers are prepared to give special consideration to applicants who appear worthy, albeit ‘risky’.
The strategic actions of demonstrating and documenting on the part of vulnerable (low-income) tenant applicants can improve their chances of being perceived as resourceful, capable and ‘savvy’. Such actions are significant because they help to persuade property managers not only that the applicant may have sufficient resources (personal and material) but that they accept that the onus is on themselves to show they are reputable, that they have valued ‘competencies’ and that they understand ‘how the system works’.
The parameters of the market do shape the processes of risk-assessment and, ultimately, the strategic relation of power between property manager and tenant applicant. Low vacancy rates and limited supply of lower-cost rental stock, in all areas, mean that there are many more tenant applicants than available properties, creating a highly competitive environment for applicants. The fundamental problem of supply is an aspect of the market that severely limits the chances of access to appropriate and affordable housing for low-income rental housing applicants.
There is recognition of the impact of this problem of supply. The study indicates three main directions for future focus in policy and program development: providing appropriate supports to tenants to access and sustain private rental housing, addressing issues of discrimination and privacy arising in the processes of selecting suitable tenants, and addressing problems of supply.
Abstract:
The title of this book, Hard Lessons: Reflections on Crime Control in Late Modernity, contains a number of clues about its general theoretical direction. It is a book concerned, first and foremost, with the vagaries of crime control in western neo-liberal and English-speaking countries. More specifically, Hard Lessons draws attention to a number of examples in which discrete populations – those who have in one way or another offended against the criminal law – have become the subjects of various forms of state intervention, regulation and control. We are concerned most of all with the ways in which recent criminal justice policies and practices have resulted in what are variously described as unintended consequences, unforeseen outcomes, unanticipated results, counter-productive effects or negative side effects. At their simplest, such terms refer to the apparent gulf between intention and outcome; they often form the basis for a considerable amount of policy reappraisal, soul searching and even nihilistic despair among the mandarins of crime control. Unintended consequences can, of course, be both positive and negative. Occasionally, crime control measures may result in beneficial outcomes, such as the use of DNA to acquit wrongly convicted prisoners. Generally, however, unforeseen effects tend to be negative and even entirely counterproductive, and/or directly opposite to what was originally intended. All this, of course, presupposes some sort of rational, well-meaning and transparent policy-making process so beloved by liberal social policy theorists. Yet, as Judith Bessant points out in her chapter, this view of policy formulation tends to obscure the often covert, regulatory and downright malevolent intentions contained in many government policies and practices. Indeed, history is replete with examples of governments seeking to mask their real aims from a prying public eye. Denials and various sorts of ‘techniques of neutralisation’ serve to cloak the real or ‘underlying’ aims of the powerful (Cohen 2000). The latest crop of ‘spin doctors’ and ‘official spokespersons’ has ensured that the process of governmental obfuscation, distortion and concealment remains deeply embedded in neo-liberal forms of governance. There is little new or surprising in this; nor should we be shocked when things ‘go wrong’ in the domain of crime control, since many unintended consequences are, more often than not, quite predictable. Prison riots, high rates of recidivism and breaches of supervision orders, expansion rather than contraction of control systems, laws that create the opposite of what was intended – all these are normative features of western crime control. Indeed, without the deep fault lines running between policy and outcome it would be hard to imagine what many policy makers, administrators and practitioners would do: their day-to-day work practices (and incomes) are directly dependent upon emergent ‘service delivery’ problems. Despite recurrent howls of official anguish and occasional despondency, it is apparent that those involved in propping up the apparatus of crime control have a vested interest in ensuring that policies and practices remain in an enduring state of review and reform.
Abstract:
There is little evidence, historical or otherwise, to suggest that the needs of people and societies change greatly over time. Whilst acknowledging the benefits of the many recent technological innovations that are part of the contemporary milieu, I am reluctant to see such advances as sufficient rationale for the dismantling of the social contract between a government and its citizenry. The Multilateral Agreement on Investment (MAI) highlights the move amongst developed countries to replace a national policy focus with a multilateral approach to global policy formulation that transcends the sovereignty of nation states. The purpose of this paper is to refute the assumptions underpinning multilateralist assertions that government has a diminishing role to play in the global society, and that national sovereignty, due to the increasingly important role of multilateral agreements and the global economy, is ‘a thing of the past’ (Arthur Asher, background briefing interview, Radio National, February 1, 1998). The basic premises that underpin the globalist argument for the diminishing role of government are that:
• Economic growth increases jobs, prosperity, and freedom.
• Free trade is an imperative for successful globalisation because financial sector performance - which depends on deregulation - is integral to global economic growth.
• Information technology is revolutionising global trade and making globalisation inevitable.
• Globalisation, through deregulation, makes national boundaries meaningless and, therefore, national regulatory policies anachronistic.
This paper compares the aforementioned axiomatic premises of globalisation to actual outcomes, events, and trends in the real world.
Abstract:
The purpose of this paper is to assess aspects of the British Government's attempts to use sporting participation as a vehicle to re-integrate socially disadvantaged, excluded and 'at-risk' youth into mainstream society. A number of organisations, policy-makers, commentators, and practitioners with a stake in the 'sport and social inclusion agenda' were interviewed. General agreement was found on a number of points: that the field was overly crowded with policies, programmes and initiatives; that the field worked in a 'bottom-up' way, with the most significant factor determining success being effective local workers with good networks and cultural access; that the dichotomising rhetoric of inclusion/exclusion was counter-productive; that the notion of the 'at-risk youth' was problematic and unhelpful; and that they all now dealt with a marketplace, where 'clients' had to be enrolled in their own reformation. There was also disagreement on a number of points: that policy acts as a relatively accurate template for practice, as opposed to the argument that it was simply regarded as a cluster of suggestions for practice; that policy was exceptionally piecemeal in its formulation and application, as opposed to regarding policy as necessarily targeted and dispersed; and that the inclusion agenda was largely politically driven and transitory, as opposed to the optimistic view that it had become ingrained in local practice. Finally, the paper examines some issues that are the most likely points of contribution by researchers in the area: that more research needs to be done on the processes of identity formation associated with participation in sport; that more effective programme evaluation needs to be done for such forms of governmental intervention to work properly; and that the relationship between different kinds of physical activity and social and personal change needs to be more thoroughly theorised.
Abstract:
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users’ needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness. Term-based approaches are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient. Also, these approaches have to deal with low-frequency pattern issues. The measures used by the data mining techniques (for example, “support” and “confidence”) to learn the profile have turned out to be unsuitable for filtering: they can lead to a mismatch problem. This thesis uses rough set-based (term-based) reasoning and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model for threshold setting have been developed using rough set decision theory. The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy. The most likely relevant documents are assigned higher scores by the ranking function. Because relatively few documents are left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system is improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms other IF systems, such as the traditional Rocchio IF model, the state-of-the-art term-based models including BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
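A highly simplified sketch of the two-stage idea follows: a term-based topic filter discards documents that score below a threshold against the user profile, and a pattern-based score ranks the survivors. The profile weights, patterns, threshold and documents are invented placeholders; the thesis derives the profile and threshold from rough set decision theory and the patterns from a pattern taxonomy.

def topic_filter(doc_terms, profile_weights, threshold):
    """Stage 1 (topic filtering): keep a document only if its profile score
    reaches the threshold; everything else is treated as likely irrelevant."""
    score = sum(profile_weights.get(t, 0.0) for t in doc_terms)
    return score >= threshold

def pattern_rank(doc_terms, patterns):
    """Stage 2 (pattern mining): rank surviving documents by the total weight
    of taxonomy patterns (term sets) they contain."""
    terms = set(doc_terms)
    return sum(w for pattern, w in patterns if set(pattern) <= terms)

# Illustrative profile, patterns and documents (not from the RCV1 experiments):
profile = {"interest": 0.9, "rate": 0.8, "bank": 0.5}
patterns = [(("interest", "rate"), 1.5), (("bank",), 0.5)]
docs = [["interest", "rate", "bank", "cut"], ["football", "score"]]
kept = [d for d in docs if topic_filter(d, profile, threshold=1.0)]
ranked = sorted(kept, key=lambda d: pattern_rank(d, patterns), reverse=True)
print(ranked)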