973 results for core-level
Abstract:
Work and the Welfare State places street-level organizations at the analytic center of welfare state politics, policy and management. This volume offers a critical examination of efforts to change the welfare state to a workfare state by looking at on-the-ground issues in six countries: the United States, United Kingdom, Australia, Denmark, Germany and the Netherlands.
Abstract:
The rapid growth of the World Wide Web has produced massive amounts of information, leading to the problem of information overload. Personalization techniques have been developed to help users find content that meets their individual interests or needs amid this ever-increasing volume of information, and user profiling plays the core role in this research. Traditionally, most user profiling techniques build user representations statically; in real-world applications, however, user interests change over time. In this research we develop algorithms for mining user interests by integrating time-decay mechanisms into topic-based user interest profiling: time-forgetting functions are incorporated into the calculation of topic interest measurements at a fine-grained level. The experimental study shows that accounting for the temporal dynamics of user interests through time-forgetting mechanisms improves recommendation performance.
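As a minimal sketch of the time-forgetting idea described above (the exponential form, the half-life parameter and the `decayed_interest` helper are illustrative assumptions, not the authors' actual formulation):

```python
import math

def decayed_interest(observations, now, half_life_days=30.0):
    """Aggregate topic interest with an exponential forgetting function.

    observations: list of (topic, weight, time_in_days) tuples (hypothetical
    format). Older observations contribute less, so recent interests dominate.
    """
    decay_rate = math.log(2) / half_life_days  # weight halves every half-life
    profile = {}
    for topic, weight, t in observations:
        age = now - t
        profile[topic] = profile.get(topic, 0.0) + weight * math.exp(-decay_rate * age)
    return profile

# An old "sports" interest fades relative to a recent "finance" interest.
obs = [("sports", 1.0, 0.0), ("finance", 1.0, 90.0)]
profile = decayed_interest(obs, now=100.0)
```

With a 30-day half-life, the 100-day-old "sports" signal decays to roughly a tenth of its weight while the 10-day-old "finance" signal keeps most of it, so the ranking of topics reflects current rather than historical interest.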
Abstract:
Recent modelling of socio-economic costs by the Australian railway industry in 2010 estimated the cost of level crossing accidents to exceed AU$116 million annually. To better understand the causal factors that contribute to these accidents, the Cooperative Research Centre for Rail Innovation is running a project entitled Baseline Level Crossing Video. The project aims to improve the recording of level crossing safety data by developing an intelligent system capable of detecting near-miss incidents and capturing quantitative data around these incidents. To detect near-miss events at railway level crossings, a video analytics module is being developed to analyse video footage obtained from forward-facing cameras installed on trains. This paper presents a vision-based approach for the detection of these near-miss events. The video analytics module comprises object detectors and a rail detection algorithm, allowing the distance between a detected object and the rail to be determined. An existing publicly available Histograms of Oriented Gradients (HOG) based object detector is used to detect various types of vehicles in each video frame. As vehicles are usually seen side-on from the cabin's perspective, the results of the vehicle detector are verified using an algorithm that detects the wheels of each detected vehicle. Rail detection is facilitated by a projective transformation of the video, such that the forward-facing view becomes a bird's eye view. A Line Segment Detector is employed as the feature extractor, and a sliding-window approach is developed to track a pair of rails. The vehicles are localised by projecting the results of the vehicle and rail detectors onto the ground plane, allowing the distance between a vehicle and the rail to be calculated. The resultant vehicle positions and distances are logged to a database for further analysis.
We present preliminary results on the performance of a prototype video analytics module on a data set of videos covering more than 30 different railway level crossings, captured from the journey of a train passing through these crossings.
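The localisation step described above, projecting detections onto the ground plane and measuring their distance to the rail, can be sketched as follows (the homography matrix, point coordinates and helper names are hypothetical; a real system would obtain the homography from camera calibration):

```python
def apply_homography(H, point):
    """Map an image point to ground-plane coordinates via a 3x3 homography."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    gx = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    gy = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return gx, gy

def distance_to_rail(p, rail_a, rail_b):
    """Perpendicular distance from point p to the line through rail_a, rail_b."""
    (x, y), (ax, ay), (bx, by) = p, rail_a, rail_b
    dx, dy = bx - ax, by - ay
    return abs(dy * (x - ax) - dx * (y - ay)) / (dx * dx + dy * dy) ** 0.5

# Identity homography purely for illustration; calibration would supply H.
H = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
vehicle = apply_homography(H, (3.0, 4.0))
# Rail running along the y-axis on the ground plane.
d = distance_to_rail(vehicle, rail_a=(0.0, 0.0), rail_b=(0.0, 10.0))
```

The same projection applied to the detected rail segments and vehicle bounding boxes yields the vehicle-to-rail distances that are logged for near-miss analysis.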
Abstract:
In this paper we focus specifically on explaining variation in core human values, and suggest that individual differences in values can be partially explained by personality traits and the perceived ability to manage emotions in the self and others (i.e. trait emotional intelligence). A sample of 209 university students was used to test hypotheses regarding several proposed direct and indirect relationships between personality traits, trait emotional intelligence and values. Consistent with the hypotheses, Harm Avoidance and Novelty Seeking were found to directly predict Hedonism, Conformity, and Stimulation. Harm Avoidance was also found to indirectly predict these values through the mediating effects of key subscales of trait emotional intelligence. Novelty Seeking was not found to be an indirect predictor of values. Results have implications for our understanding of the relationship between personality, trait emotional intelligence and values, and suggest a common basis in terms of approach and avoidance pathways.
Abstract:
It is widely acknowledged that effective asset management requires an interdisciplinary approach, with synergies between traditional disciplines such as accounting, engineering, finance, humanities, logistics, and information systems technologies. Asset management is an important yet complex business practice. Business process modelling is proposed as an approach to manage this complexity through the modelling of asset management processes. A sound foundation for the systematic application and analysis of business process modelling in asset management has, however, yet to be developed. Fundamentally, a business process consists of activities (termed functions), events/states, and control flow logic. As both events/states and control flow logic are largely dependent on the functions themselves, a logical first step is to identify the functions within a process. This research addresses the current gap in knowledge by developing a method to identify functions common to various industry types (termed core functions). This lays the foundation for extracting such functions, so as to identify both commonalities and variation points in asset management processes. The method uses manual text mining and a taxonomy-based approach. An example is presented.
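A minimal sketch of the core-function idea: if the functions mined from each industry's processes are represented as sets, the core functions are their intersection and the remainders are variation points (the industries and function names below are invented for illustration, not taken from the study):

```python
# Functions observed in the asset management processes of three industries
# (hypothetical examples standing in for the text-mined taxonomy).
processes = {
    "rail":   {"inspect asset", "schedule maintenance", "record failure", "procure parts"},
    "water":  {"inspect asset", "schedule maintenance", "record failure", "flush mains"},
    "energy": {"inspect asset", "schedule maintenance", "record failure", "balance load"},
}

# Core functions are those common to every industry; the remainder in each
# industry's set are its variation points.
core = set.intersection(*processes.values())
variation_points = {industry: funcs - core for industry, funcs in processes.items()}
```

Here `core` contains the three functions shared by all industries, while each entry of `variation_points` holds the industry-specific ones.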
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to study. Its recent gain in popularity can be attributed in part to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. The technique is data-intensive, as explicit data at a fine level of detail is used, and computer-intensive, as it requires many interactions between agents, which can learn and pursue goals. With the growing availability of data and the increase in computer power, these concerns are fading. Nonetheless, updating or extending the model as more information becomes available can be problematic because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from usual ABMs is that it has been developed in a compositional manner, encompassing not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model.
Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities' physical characteristics, and b) agents, which describe their behaviour according to their goals and previous learning experiences. This diverges from the traditional approach, in which both aspects are often conflated, and it has many advantages in terms of reusability of either aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same: this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and to what aim. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on the simulation to be run. For example, data can be used to describe the environment to which the agents respond (e.g. weather for solar panels) or to describe the assets and their relation to one another (e.g. the network assets). Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours as well as new types of assets.
Simulations have been run to understand the potential impact of changes to the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains, such as transport, which is part of future work with the addition of electric vehicles.
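The asset/agent separation described above can be sketched as follows (the `BatteryAsset` and `PeakShavingAgent` classes are illustrative stand-ins, not MODAM's actual OSGi-based API; a one-hour time step is assumed so kW and kWh are used interchangeably):

```python
class BatteryAsset:
    """Asset: physical characteristics only, no behaviour (information hiding)."""
    def __init__(self, capacity_kwh, depth_of_discharge):
        self.capacity_kwh = capacity_kwh
        self.depth_of_discharge = depth_of_discharge
        self.stored_kwh = 0.0

    def usable_kwh(self):
        # Usable energy is bounded by the permitted depth of discharge.
        return self.capacity_kwh * self.depth_of_discharge

class PeakShavingAgent:
    """Agent: one possible behaviour for the asset, discharging above a
    demand threshold. An identical BatteryAsset could instead be driven by
    a backup agent that only discharges during outages: same asset,
    different goal.
    """
    def __init__(self, asset, threshold_kw):
        self.asset = asset
        self.threshold_kw = threshold_kw

    def step(self, demand_kw):
        # Cover the portion of demand above the threshold from storage.
        if demand_kw > self.threshold_kw:
            supplied = min(self.asset.stored_kwh, demand_kw - self.threshold_kw)
            self.asset.stored_kwh -= supplied
            return demand_kw - supplied
        return demand_kw

battery = BatteryAsset(capacity_kwh=10.0, depth_of_discharge=0.8)
battery.stored_kwh = 5.0
agent = PeakShavingAgent(battery, threshold_kw=10.0)
net_demand = agent.step(12.0)  # the battery shaves the 2 kW above the threshold
```

Because the physical description and the behaviour live in separate objects, either can be swapped or reused independently when composing a simulation, which is the composability benefit the abstract describes.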
Abstract:
This paper highlights the hypercompetitive nature of the current pharmacy landscape in Australia and suggests either a superior level of differentiation strategy or a focused differentiation strategy targeting a niche market as two viable, alternative business models to cost leadership for small, independent community pharmacies. A description of the Australian health care system is provided, as well as background information on the current community pharmacy environment in Australia. The authors propose a differentiation or focused differentiation strategy based on cognitive professional services (CPS), which must be executed well and be of a superior quality to competitors' services. Market research is vital to determine which services target customers value and are willing to pay for. To achieve the superior level of quality that will engender high patient satisfaction and loyalty, pharmacy owners and managers need to develop, maintain and clearly communicate service quality specifications to the staff delivering these services. Otherwise, there will be a proliferation of pharmacies offering the same professional services with no evident service differential. To sustain competitive advantage over the long term, however, these smaller, independent community pharmacies will need to exploit a broad core competency base in order to continuously introduce new sources of competitive advantage. With the right expertise, the authors argue, smaller, independent community pharmacies can successfully deliver CPS and sustain profitability in a hypercompetitive market.
Abstract:
Prescribing errors remain a significant cause of patient harm. Safe prescribing is not just about writing a prescription, but involves many cognitive and decision-making steps. A set of national prescribing competencies for all prescribers (including non-medical) is needed to guide education and training curricula, assessment and credentialing of individual practitioners. We have identified 12 core competencies for safe prescribing which embody the four stages of the prescribing process – information gathering, clinical decision making, communication, and monitoring and review. These core competencies, along with their learning objectives and assessment methods, provide a useful starting point for teaching safe and effective prescribing.
Abstract:
The introduction of safety technologies into complex socio-technical systems requires an integrated and holistic approach to human factors (HF) and engineering, considering the effects of failures not only within system boundaries, but also at the interfaces with other systems and humans. Level crossing warning devices are examples of such systems, where technically safe states within the system boundary can influence road user performance, giving rise to other hazards that degrade the safety of the system. Chris will discuss the challenges that have been encountered to date in developing a safety argument in support of low-cost level crossing warning devices. The design and failure modes of level crossing warning devices are known to have a significant influence on road user performance; however, quantifying this effect is one of the ongoing challenges in determining appropriate reliability and availability targets for low-cost level crossing warning devices.
Abstract:
Reconfigurable computing devices can increase the performance of compute-intensive algorithms by implementing application-specific co-processor architectures. The power cost of this performance gain is often an order of magnitude less than that of modern CPUs and GPUs. Exploiting the potential of reconfigurable devices such as Field-Programmable Gate Arrays (FPGAs) is, however, typically a complex and tedious hardware engineering task. Recently the major FPGA vendors (Altera and Xilinx) have released their own high-level design tools, which have great potential for rapid development of FPGA-based custom accelerators. In this paper, we evaluate Altera's OpenCL Software Development Kit and Xilinx's Vivado High-Level Synthesis tool. These tools are compared for their performance, logic utilisation, and ease of development, using a tri-diagonal linear system solver as the test case.
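For reference, the standard sequential baseline for this test case is the Thomas algorithm, an O(n) forward-elimination/back-substitution solver for tri-diagonal systems (this is a generic textbook sketch, not the kernel evaluated in the paper):

```python
def thomas_solve(a, b, c, d):
    """Solve a tri-diagonal system Ax = d, where a is the sub-diagonal
    (a[0] unused), b the main diagonal, c the super-diagonal (c[-1] unused).
    Forward elimination followed by back substitution, O(n) overall.
    """
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]       # eliminated pivot
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# System with solution x = [1, 1, 1]: diagonals (1, 2, 1), RHS [3, 4, 3].
x = thomas_solve(a=[0.0, 1.0, 1.0], b=[2.0, 2.0, 2.0],
                 c=[1.0, 1.0, 0.0], d=[3.0, 4.0, 3.0])
```

The loop-carried dependence in both sweeps is what makes this kernel an interesting target for high-level synthesis tools, since it limits straightforward pipelining.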
Associations between area-level disadvantage and DMFT among a birth cohort of Indigenous Australians
Abstract:
Background: Individual-level factors influence DMFT, but little is known about the influence of the community environment. This study examines associations between community-level influences and DMFT among a birth cohort of Indigenous Australians aged 16–20 years. Methods: Data were collected as part of Wave 3 of the Aboriginal Birth Cohort study. Fifteen community areas were established and the sample comprised 442 individuals. The outcome variable was mean DMFT, with explanatory variables including diet and community disadvantage (access to services, infrastructure and communications). Data were analysed using multilevel regression modelling. Results: In a null model, 13.8% of the total variance in mean DMFT was between community areas, which increased to 14.3% after adjusting for sex, age and diet. Addition of the community disadvantage variable decreased the variance between areas by 4.8 percentage points, indicating that community disadvantage explained one-third of the area-level variance. Residents of under-resourced communities had significantly higher mean DMFT (β=3.86, 95% CI 0.02 to 7.70) after adjusting for sex, age and diet. Conclusions: Living in under-resourced communities was associated with greater DMFT among this disadvantaged population, indicating that policies aiming to reduce oral health-related inequalities among vulnerable groups may benefit from taking into account factors external to individual-level influences.
Abstract:
Electricity network investment and asset management require accurate estimation of future demand for energy consumption within specified service areas. For this purpose, models are typically developed to predict future trends in electricity consumption using various methods and assumptions. This paper presents a statistical model to predict residential electricity consumption at the Census Collection District (CCD) level for the state of New South Wales, Australia, based on spatial building and household characteristics. Residential household demographic and building data from the Australian Bureau of Statistics (ABS) and actual electricity consumption data from electricity companies are merged for 74% of the 12,000 CCDs in the state. Eighty percent of the merged dataset is randomly set aside to establish the model using regression analysis, and the remaining 20% is used to independently test the accuracy of model prediction against actual consumption. In 90% of the cases, the predicted consumption is shown to be within 5 kWh per dwelling per day of actual values, with an overall state accuracy of -1.15%. Given a future scenario with a shift in climate zone and a growth in population, the model is used to identify the geographical or service areas most likely to see increased electricity consumption. Such geographical representation can be of great benefit when assessing alternatives to the centralised generation of energy; the model gives a quantifiable method for selecting the most appropriate system when a review or upgrade of the network infrastructure is required.
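The fit-and-hold-out procedure can be sketched as follows (synthetic single-predictor data stands in for the merged ABS and utility datasets; only the 80/20 split and the within-5-kWh accuracy check are taken from the abstract, everything else is illustrative):

```python
import random

def fit_ols(xs, ys):
    """Ordinary least squares for a single predictor, as a stand-in for the
    paper's multi-variable regression on household characteristics."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic data: daily consumption (kWh/dwelling) vs. household size.
random.seed(0)
data = [(size, 8.0 + 3.5 * size + random.gauss(0, 0.5))
        for size in [random.uniform(1, 5) for _ in range(200)]]

random.shuffle(data)
split = int(0.8 * len(data))              # 80% to fit, 20% held out
train, test = data[:split], data[split:]
slope, intercept = fit_ols(*zip(*train))

# Fraction of held-out predictions within 5 kWh/dwelling/day of actuals.
errors = [abs((intercept + slope * x) - y) for x, y in test]
within_5 = sum(e <= 5.0 for e in errors) / len(errors)
```

The held-out accuracy check mirrors the abstract's validation criterion: the share of CCDs whose predicted per-dwelling consumption falls within 5 kWh/day of the observed value.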
Abstract:
This paper proposes the use of a common DC link in residential buildings to allow customers to inject surplus power that would otherwise be limited due to AC power quality violations. The surplus power can easily be transferred to other phases and feeders through the common DC link in order to maintain the balance between generated power and load. The PSCAD/EMTDC platform is used to simulate and study the proposed approach. This paper suggests that this structure can be a pathway to future DC power systems.
Abstract:
To minimise the number of load sheddings in a microgrid (MG) during autonomous operation, islanded neighbouring MGs can be interconnected if they are on a self-healing network and extra generation capacity is available in the distributed energy resources (DER) of one of the MGs. In this way, the total load in the system of interconnected MGs can be shared by all the DERs within those MGs. For this purpose, however, carefully designed self-healing and supply restoration control algorithms, protection systems and communication infrastructure are required at the network and MG levels. In this study, first, a hierarchical control structure is discussed for interconnecting neighbouring autonomous MGs, with the introduced primary control level as the main focus. Through the developed primary control level, this study demonstrates how parallel DERs in a system of multiple interconnected autonomous MGs can properly share the load of the system. This controller is designed such that the converter-interfaced DERs operate in a voltage-controlled mode following a decentralised power sharing algorithm based on droop control. DER converters are controlled using a per-phase technique instead of the conventional direct-quadrature transformation technique. In addition, linear quadratic regulator-based state feedback controllers, which are more stable than conventional proportional-integral controllers, are utilised to prevent instability and weak dynamic performance of the DERs when autonomous MGs are interconnected. The efficacy of the primary control level of the DERs in the system of multiple interconnected autonomous MGs is validated through PSCAD/EMTDC simulations considering detailed dynamic models of DERs and converters.
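The droop principle behind the decentralised power sharing can be sketched as follows: at a common steady-state frequency, f = f0 - m_i * P_i for each DER i, so DERs pick up load in inverse proportion to their droop coefficients without any central coordination (the coefficients and load below are illustrative, not values from the study):

```python
def droop_share(load_kw, ders):
    """Steady-state droop-based load sharing.

    `ders` maps a DER name to its frequency droop coefficient m_i (Hz/kW).
    Equal frequency across DERs implies m_a * P_a = m_b * P_b = ..., so
    each DER's share is proportional to 1/m_i, scaled to meet the load.
    """
    inv = {name: 1.0 / m for name, m in ders.items()}
    total = sum(inv.values())
    return {name: load_kw * w / total for name, w in inv.items()}

# DER "a" has half the droop slope of "b", so it takes twice the load.
shares = droop_share(90.0, {"a": 0.01, "b": 0.02})
```

This is the steady-state outcome the primary control level converges to; the converter dynamics, per-phase voltage control and LQR state feedback described above govern how it gets there.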
Abstract:
Capability development is at the heart of creating competitive advantage. This thesis conceptualises Strategic Capability Development as the renewal of an organisation's existing capability in line with the requirements of the market. It followed and compared four product innovation projects within Iran Khodro Company (IKCO), an exemplar of capability development within the Iranian auto industry. The findings show that the maturation of strategic capability at the organisational level occurred through a sequence of product innovation projects and by dynamically shaping the learning and knowledge integration processes in accordance with the emergence of the new structure within the industry. Accordingly, Strategic Capability Development is conceptualised in an interpretive model. These findings are useful for the development of an explanatory model and a practical capability development framework for managing learning and knowledge across different product innovation projects.