901 results for aggregated multicast
Abstract:
This chapter presents a comparative survey of recent key management (key distribution, discovery, establishment and update) solutions for wireless sensor networks. We consider both distributed and hierarchical sensor network architectures where unicast, multicast and broadcast types of communication take place. Probabilistic, deterministic and hybrid key management solutions are presented, and we determine a set of metrics to quantify their security properties and resource usage such as processing, storage and communication overheads. We provide a taxonomy of solutions, and identify trade-offs in these schemes to conclude that there is no one-size-fits-all solution.
Abstract:
Advances in technology introduce new application areas for sensor networks. The foreseeable wide deployment of mission-critical sensor networks creates concerns about security issues. Security of large-scale, densely deployed and infrastructureless wireless networks of resource-limited sensor nodes requires efficient key distribution and management mechanisms. We consider distributed and hierarchical wireless sensor networks where unicast, multicast and broadcast types of communication can take place. We evaluate deterministic, probabilistic and hybrid key pre-distribution and dynamic key generation algorithms for distributing pair-wise, group-wise and network-wise keys.
Abstract:
The question of whether or not there exists a meaningful economic distinction between quits and layoffs has attracted considerable attention. This paper utilizes a recent test proposed by J. S. Cramer and G. Ridder (1991) to test formally whether quits and layoffs may legitimately be aggregated into a single undifferentiated job-mover category. The paper also estimates wage equations for job stayers, quits, and layoffs, corrected for the endogeneity of job mobility. The major results are that quits and layoffs cannot legitimately be pooled, and that correction for sample selection appears to be important.
Abstract:
Chondrocytes dedifferentiate during ex vivo expansion on 2-dimensional surfaces. Aggregation of the expanded cells into 3-dimensional pellets, in the presence of induction factors, facilitates their redifferentiation and restoration of the chondrogenic phenotype. Typically 1×10⁵–5×10⁵ chondrocytes are aggregated, resulting in “macro” pellets having diameters ranging from 1 to 2 mm. These macropellets are commonly used to study redifferentiation, and recently macropellets of autologous chondrocytes have been implanted directly into articular cartilage defects to facilitate their repair. However, diffusion of metabolites over the 1–2 mm pellet length-scales is inefficient, resulting in radial tissue heterogeneity. Herein we demonstrate that the aggregation of 2×10⁵ human chondrocytes into micropellets of 166 cells each, rather than into larger single macropellets, enhances chondrogenic redifferentiation. In this study, we describe the development of a cost-effective fabrication strategy to manufacture a microwell surface for the large-scale production of micropellets. Thousands of micropellets were manufactured using the microwell platform, an array of 360×360 µm microwells cast into polydimethylsiloxane (PDMS) that was surface-modified with an electrostatic multilayer of hyaluronic acid and chitosan to enhance micropellet formation. Such surface modification was essential to prevent chondrocyte spreading on the PDMS. Sulfated glycosaminoglycan (sGAG) production and collagen II gene expression in chondrocyte micropellets increased significantly relative to macropellet controls, and redifferentiation was enhanced in both macro- and micropellets with the provision of a hypoxic atmosphere (2% O₂). Once micropellet formation had been optimized, we demonstrated that micropellets could be assembled into larger cartilage tissues. Our results indicate that micropellet amalgamation efficiency is inversely related to the time cultured as discrete microtissues.
In summary, we describe a micropellet production platform that represents an efficient tool for studying chondrocyte redifferentiation and demonstrate that the micropellets could be assembled into larger tissues, potentially useful in cartilage defect repair.
Abstract:
Background There is growing consensus that a multidisciplinary, comprehensive and standardised process for assessing the fitness of older patients for chemotherapy should be undertaken to determine appropriate cancer treatment. Aim This study tested a model of cancer care for the older patient incorporating Comprehensive Geriatric Assessment (CGA), which aimed to ensure that 'fit' individuals amenable to active treatment were accurately identified; 'vulnerable' patients more suitable for modified or supportive regimens were determined; and 'frail' individuals who would benefit most from palliative regimens were also identified and offered the appropriate level of care. Methods A consecutive-series n=178 sample of patients >65 years was recruited from a major Australian cancer centre. The following instruments were administered by an oncogeriatric nurse prior to treatment: Vulnerable Elders Survey-13; Cumulative Illness Rating Scale (Geriatric); Malnutrition Screening Tool; Mini-Mental State Examination; Geriatric Depression Scale; Barthel Index; and Lawton Instrumental Activities of Daily Living Scale. Scores from these instruments were aggregated to predict patient fitness, vulnerability or frailty for chemotherapy. Physicians provided a concurrent (blinded) prediction of patient fitness, vulnerability or frailty based on their clinical assessment. Data were also collected on actual patient outcomes (e.g. treatment completed as predicted, treatment reduced) during monthly audits of patient trajectories. Data analysis Data analysis is underway. A sample of 178 is adequate to detect, with 90% power, kappa coefficients of agreement between CGA and physician assessments of K>0.90 ("almost perfect agreement"). Primary endpoints comprise a) whether the nurse-led CGA determination of fit, vulnerable or frail agrees with the oncologist's assessments of fit, vulnerable or frail and b) whether the CGA and physician assessments accurately predict actual patient outcomes.
Conclusion An oncogeriatric nurse-led model of care is currently being developed from the results. We conclude with a discussion of the pivotal role of nurses in CGA-based models of care.
Abstract:
We have developed an explanation for the ultra-trace detection found when using Au/Ag SERS nanoparticles linked to biochemical affinity tags, e.g. antibodies. The nanoparticle structure is not as usually assumed: the aggregated nanoparticles constitute hot spots that are indispensable for these very low levels of analyte detection, even more so when using a direct detection method.
Abstract:
FTIR spectra are reported of CO adsorbed on silica-supported copper catalysts prepared from copper(II) acetate monohydrate. Fully oxidised catalyst gave bands due to CO on CuO, isolated Cu²⁺ cations on silica and anion vacancy sites in CuO. The highly dispersed CuO aggregated on reduction to metal particles which gave bands due to adsorbed CO characteristic of both low-index exposed planes and stepped sites on high-index planes. Partial surface oxidation with N₂O or H₂O generated Cu⁺ adsorption sites which were slowly reduced to Cu⁰ by CO at 300 K. Surface carbonate initially formed from CO was also slowly depleted with time with the generation of CO₂. The results are consistent with adsorbed carbonate being an intermediate in the water-gas shift reaction of H₂O and CO to H₂ and CO₂.
Abstract:
Background: Random Breath Testing (RBT) is the main drink-driving law enforcement tool used throughout Australia. International comparative research considers Australia to have the most successful RBT program compared to other countries in terms of crash reductions (Erke, Goldenbeld, & Vaa, 2009). This success is attributed to the program's high intensity (Erke et al., 2009). Our review of the extant literature suggests that there is no research evidence that indicates an optimal level of alcohol breath testing. That is, we suggest that no research exists to guide policy regarding whether alcohol-related crash reductions reach a point of diminishing returns as a result of either saturated or targeted RBT testing. Aims: In this paper we first provide an examination of RBTs and alcohol-related crashes across Australian jurisdictions. We then address the question of whether or not an optimal level of random breath testing exists by examining the relationship between the number of RBTs conducted and the occurrence of alcohol-related crashes over time, across all Australian states. Method: To examine the association between RBT rates and alcohol-related crashes, and to assess whether an optimal ratio of RBT tests per licenced driver can be determined, we draw on three administrative data sources from each jurisdiction. Where possible, the data collected span January 1st 2000 to September 30th 2012. The RBT administrative dataset includes the number of RBTs conducted per month. The traffic crash administrative dataset contains aggregated monthly counts of traffic crashes where an individual’s recorded BAC reaches or exceeds 0.05 g per 100 ml of blood. The licenced driver data were the monthly number of registered licenced drivers spanning January 2000 to December 2011. Results: The data highlight that the Australian story is not reflective of all states and territories.
The stable RBT-to-licenced-driver ratio in Queensland (1:1) corresponds to a stable alcohol-related crash rate of 5.5 per 100,000 licenced drivers. Yet in South Australia, where a relatively stable RBT-to-licenced-driver ratio of 1:2 is maintained, the rate of alcohol-related traffic crashes is substantially lower, at 3.7 per 100,000. We use joinpoint regression techniques and varying regression models to fit the data and compare the different patterns between jurisdictions. Discussion: The results of this study provide an updated review and evaluation of RBTs conducted in Australia and examine the association between RBTs and alcohol-related traffic crashes. We also present an evidence base to guide policy decisions for RBT operations.
Abstract:
We have explored the potential of deep Raman spectroscopy, specifically surface enhanced spatially offset Raman spectroscopy (SESORS), for non-invasive detection from within animal tissue, by employing SERS-barcoded nanoparticle (NP) assemblies as the diagnostic agent. This concept has been experimentally verified in a clinic-relevant backscattered Raman system with an excitation line of 785 nm under ex vivo conditions. We have shown that our SORS system, with a fixed offset of 2–3 mm, offered sensitive probing of injected QTH-barcoded NP assemblies through animal tissue containing both protein and lipid. In comparison to non-aggregated SERS-barcoded gold NPs, we have demonstrated that the tailored SERS-barcoded aggregated NP assemblies have significantly higher detection sensitivity. We report that these NP assemblies can be readily detected at depths of 7–8 mm from within animal proteinaceous tissue with a high signal-to-noise (S/N) ratio. In addition, they could also be detected from beneath 1–2 mm of animal tissue with high lipid content, which generally poses a challenge due to the high absorption of lipids in the near-infrared region. We have also shown that the signal intensity and S/N ratio at a particular depth are a function of the SERS tag concentration used, and that our SORS system has a QTH detection limit of 10⁻⁶ M. Higher detection depths may possibly be obtained with optimization of the NP assemblies, along with improvements in the instrumentation. Such NP assemblies offer prospects for in vivo, non-invasive detection of tumours along with scope for incorporation of drugs and their targeted and controlled release at tumour sites. These diagnostic agents combined with drug delivery systems could serve as a “theranostic agent”, an integration of diagnostics and therapeutics into a single platform.
Abstract:
Currently, recommender systems (RS) are widely applied in commercial e-commerce sites to help users deal with the information overload problem. Recommender systems provide personalized recommendations to users and, thus, help in making good decisions about which product to buy from the vast number of product choices. Many current recommender systems are developed for simple and frequently purchased products like books and videos, using collaborative-filtering and content-based approaches. These approaches are not directly applicable for recommending infrequently purchased products such as cars and houses, as it is difficult to collect a large number of ratings data from users for such products. Many of the e-commerce sites for infrequently purchased products are still using basic search-based techniques whereby the products that match the attributes given in the target user’s query are retrieved and recommended. However, search-based recommenders cannot provide personalized recommendations: for different users, the recommendations will be the same if they provide the same query, regardless of any difference in their interests. In this article, a simple user profiling approach is proposed to generate users’ preferences for product attributes (i.e., user profiles) based on user product click-stream data. The user profiles can be used to find similar-minded users (i.e., neighbours) accurately. Two recommendation approaches are proposed, namely the Round-Robin fusion algorithm (CFRRobin) and the Collaborative Filtering-based Aggregated Query algorithm (CFAgQuery), to generate personalized recommendations based on the user profiles.
Instead of using the target user’s query to search for products as normal search-based systems do, the CFRRobin technique uses the attributes of the products in which the target user’s neighbours have shown interest as queries to retrieve relevant products, and then recommends to the target user a list of products by merging and ranking the returned products using the Round-Robin method. The CFAgQuery technique uses the attributes of the products that the user’s neighbours have shown interest in to derive an aggregated query, which is then used to retrieve products to recommend to the target user. Experiments conducted on a real e-commerce dataset show that both the proposed techniques CFRRobin and CFAgQuery perform better than the standard Collaborative Filtering and the Basic Search approaches, which are widely applied by current e-commerce applications.
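The Round-Robin merging step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and the list-of-ranked-lists input are assumptions, with each inner list standing for the products retrieved using one neighbour's preferred attributes.

```python
import itertools

def round_robin_merge(ranked_lists):
    """Interleave several ranked result lists into one list, dropping duplicates.

    Takes the 1st item of every list, then the 2nd of every list, and so on,
    so that each neighbour's top results are represented early in the output.
    """
    seen, merged = set(), []
    for rank_group in itertools.zip_longest(*ranked_lists):
        for item in rank_group:
            if item is not None and item not in seen:
                seen.add(item)
                merged.append(item)
    return merged

# Hypothetical product IDs retrieved for two neighbours:
print(round_robin_merge([["p1", "p2"], ["p3", "p1"]]))  # ['p1', 'p3', 'p2']
```

Duplicates are kept only at their first (highest) rank, which is one simple way to turn several per-neighbour result lists into a single recommendation list.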
Abstract:
Policy makers increasingly recognise that an educated workforce with a high proportion of Science, Technology, Engineering and Mathematics (STEM) graduates is a pre-requisite to a knowledge-based, innovative economy. Over the past ten years, the proportion of first university degrees awarded in Australia in STEM fields has been below the global average, decreasing from 22.2% in 2002 to 18.8% in 2010 [1]. These trends are mirrored by declines of between 20% and 30% in the proportions of high school students enrolled in science or maths. These trends are not unique to Australia, but their impact is of concern throughout the policy-making community. To redress these demographic trends, QUT embarked upon a long-term investment strategy to integrate education and research into the physical and virtual infrastructure of the campus, recognising that expectations of students change as rapidly as technology and learning practices change. To implement this strategy, physical infrastructure refurbishment/re-building is accompanied by upgraded technologies not only for learning but also for research. QUT’s vision for its city-based campuses is to create vibrant and attractive places to learn and research and to link strongly to the wider surrounding community. Over a five-year period, physical infrastructure at the Gardens Point campus was substantially reconfigured in two key stages: (a) a >$50m refurbishment of heritage-listed buildings to encompass public, retail and social spaces, learning and teaching “test beds” and research laboratories and (b) demolition of five buildings, replaced by a $230m, >40,000 m² Science and Engineering Centre designed to accommodate retail, recreation, services, education and research in an integrated, coordinated precinct.
This landmark project is characterised by (i) self-evident, collaborative spaces for learning, research and social engagement; (ii) sustainable building practices and sustainable ongoing operation; and (iii) dynamic and mobile re-configuration of spaces or staffing to meet demand. Innovative spaces allow for transformative, cohort-driven learning and the collaborative use of space to pursue joint class projects. Research laboratories are aggregated, centralised and “on display” to the public, students and staff. A major visualisation space – the largest multi-touch, multi-user facility constructed to date – is a centrepiece feature that focuses on demonstrating scientific and engineering principles or science-oriented scenes at large scale (e.g. the Great Barrier Reef). Content on this visualisation facility is integrated with the regional school curricula and supports an in-house schools program for student and teacher engagement. Researchers are accommodated in a combined open-plan and office floor-space (80% open plan) to encourage interdisciplinary engagement and cross-fertilisation of skills, ideas and projects. This combination of spaces re-invigorates the on-campus experience, extends educational engagement across all ages and rapidly enhances research collaboration.
Abstract:
The Vehicle-to-Grid (V2G) concept is based on the newly developed and marketed technologies of hybrid petrol-electric vehicles, most notably represented by the Toyota Prius, in combination with significant structural changes to the world's energy economy and the growing strain on electricity networks. The work described in this presentation focuses on the market and economic impacts of grid-connected vehicles. We investigate price reduction effects and transmission system expansion cost reduction. We modelled a large number of plug-in hybrid vehicle batteries by aggregating them into a virtual pumped-storage power station at the Australian national electricity market (NEM) region level. The virtual power station concept models a centralised control for dispatching (operating) the aggregated electricity supply/demand capabilities of a large number of vehicles and their batteries. The actual level of output could be controlled by human or automated agents to either charge or discharge from/into the power grid. As previously mentioned, the impacts of widespread deployments of this technology are likely to be economic, environmental and physical.
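The aggregation idea behind the virtual power station can be illustrated with a short sketch. This is a hedged simplification under assumed field names, not the authors' model: many individual vehicle batteries are pooled into one virtual storage unit whose usable capacity and charge/discharge power limits are the fleet totals.

```python
from dataclasses import dataclass

@dataclass
class Battery:
    capacity_kwh: float  # usable energy capacity of one vehicle battery
    power_kw: float      # maximum charge/discharge rate

def aggregate(fleet):
    """Pool a fleet of vehicle batteries into a single virtual storage unit.

    A central dispatcher can then treat the fleet as one pumped-storage-like
    plant whose limits are simply the sums of the individual limits.
    """
    return Battery(
        capacity_kwh=sum(b.capacity_kwh for b in fleet),
        power_kw=sum(b.power_kw for b in fleet),
    )

# Hypothetical example: 1,000 identical plug-in hybrids, 8 kWh usable each.
virtual_plant = aggregate([Battery(8.0, 3.0)] * 1000)
print(virtual_plant)  # Battery(capacity_kwh=8000.0, power_kw=3000.0)
```

In a real model the dispatchable fraction would also depend on vehicle availability (plugged in or driving) and owner constraints; those details are omitted here.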
A methodology to develop an urban transport disadvantage framework: the case of Brisbane, Australia
Abstract:
Most individuals travel in order to participate in a network of activities which are important for attaining a good standard of living. Because such activities are commonly widely dispersed rather than located locally, regular access to a vehicle is important to avoid exclusion. However, planning transport system provisions that can engage members of society in an acceptable degree of activity participation remains a great challenge. The main challenges in most cities of the world are due to significant population growth and rapid urbanisation, which produce increased demand for transport. Keeping pace with these challenges in most urban areas is difficult due to the widening gap between supply and demand for transport systems, which places the urban population at a transport disadvantage. The key element in mitigating the issue of urban transport disadvantage is to accurately identify the urban transport disadvantaged. Although wide-ranging variables and multi-dimensional methods have been used to identify this group, variables are commonly selected using ad-hoc techniques and unsound methods. This poses questions of whether the current variables used are accurately linked with urban transport disadvantage, and of the effectiveness of the current policies. To fill these gaps, the research conducted for this thesis develops an operational urban transport disadvantage framework (UTDAF) based on key statistical urban transport disadvantage variables to accurately identify the urban transport disadvantaged. The thesis develops a methodology based on qualitative and quantitative statistical approaches to develop an urban transport disadvantage framework designed to accurately identify urban transport disadvantage. The reliability and the applicability of the methodology developed are the prime concern, rather than the accuracy of the estimations.
Relevant concepts that impact on urban transport disadvantage identification and measurement, and a wide range of urban transport disadvantage variables, were identified through a review of the existing literature. Based on the reviews, a conceptual urban transport disadvantage framework was developed based on causal theory. Variables identified during the literature review were selected and consolidated based on the recommendations of international and local experts during the Delphi study. Following the literature review, the conceptual urban transport disadvantage framework was statistically assessed to identify key variables. Using the statistical outputs, the key variables were weighted and aggregated to form the UTDAF. Before the variables' weights were finalised, they were adjusted based on the results of correlation analysis between elements forming the framework, to improve the framework's accuracy. The UTDAF was then applied to three contextual conditions to determine the framework's effectiveness in identifying urban transport disadvantage. The development of the framework is likely to be a robust application measure for policy makers to justify infrastructure investments and to generate awareness about the issue of urban transport disadvantage.
Abstract:
Achieving sustainable urban development is identified as one ultimate goal of many contemporary planning endeavours and has become central to the formulation of urban planning policies. Within this concept, land-use and transport integration is highlighted as one of the most important and attainable policy objectives. In many cities, integration is embraced as an integral part of local development plans, and a number of key integration principles are identified. However, the lack of available evaluation methods to measure urban sustainability levels prevents successful implementation of these principles. This paper introduces a new indicator-based spatial composite indexing model developed to measure the sustainability performance of urban settings by taking into account land-use and transport integration principles. Model indicators are chosen via a thorough selection process in line with key principles of land-use and transport integration. These indicators are grouped into categories and themes according to their topical relevance, and are then aggregated to form a spatial composite index that portrays an overview of the sustainability performance of the pilot study area used for model demonstration. The study results revealed that the model is a practical instrument for evaluating the success of local integration policies and visualizing the sustainability performance of built environments, useful both in identifying problematic areas and in formulating policy interventions.
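The indicator-aggregation step at the heart of such a composite indexing model can be sketched in a few lines. This is a generic illustration under assumed choices (min-max normalization, a simple weighted sum, and hypothetical indicator names), not the paper's exact model:

```python
def min_max(values):
    """Normalize raw indicator values across spatial units to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def composite_index(indicators, weights):
    """Weighted sum of normalized indicators: one sustainability score per unit."""
    names = list(indicators)
    normalized = {name: min_max(indicators[name]) for name in names}
    n_units = len(next(iter(indicators.values())))
    return [
        sum(weights[name] * normalized[name][i] for name in names)
        for i in range(n_units)
    ]

# Hypothetical indicators for three spatial units:
indicators = {
    "transit_access": [0.2, 0.5, 0.8],   # e.g. share of dwellings near transit
    "dwelling_density": [10, 20, 30],    # e.g. dwellings per hectare
}
weights = {"transit_access": 0.6, "dwelling_density": 0.4}
print(composite_index(indicators, weights))  # [0.0, 0.5, 1.0]
```

Normalization puts indicators with different units on a common scale before weighting, which is what allows land-use and transport measures to be combined into a single mappable score.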
Abstract:
At the highest level of competitive sport, nearly all performances of athletes (both training and competitive) are chronicled using video. Video is then often viewed by expert coaches/analysts who manually label important performance indicators to gauge performance. Stroke rate and pacing are important performance measures in swimming, and these have previously been digitised manually by a human. This is problematic, as annotating large volumes of video can be costly and time-consuming. Further, since it is difficult to accurately estimate the position of the swimmer at each frame, measures such as stroke rate are generally aggregated over an entire swimming lap. Vision-based techniques which can automatically, objectively and reliably track the swimmer and their location can potentially solve these issues and allow for large-scale analysis of a swimmer across many videos. However, the aquatic environment is challenging due to fluctuations in the scene from splashes and reflections, and because swimmers are frequently submerged at different points in a race. In this paper, we temporally segment races into distinct and sequential states, and propose a multimodal approach which employs individual detectors tuned to each race state. Our approach allows the swimmer to be located and tracked smoothly in each frame despite a diverse range of constraints. We test our approach on a video dataset compiled at the 2012 Australian Short Course Swimming Championships.
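The state-based dispatch idea described above can be sketched as follows. This is a hedged illustration under assumed names: the state labels, frame boundaries, and detector callables are hypothetical stand-ins for the paper's tuned detectors.

```python
def segment_states(n_frames, boundaries):
    """Map each frame index to a race state.

    `boundaries` is a list of (end_frame, state) pairs in temporal order;
    a frame belongs to the first segment whose end_frame it precedes.
    """
    states = []
    for t in range(n_frames):
        for end, state in boundaries:
            if t < end:
                states.append(state)
                break
    return states

def track(frames, states, detectors):
    """Locate the swimmer in every frame using the detector tuned to its state."""
    return [detectors[s](f) for f, s in zip(frames, states)]

# Hypothetical two-state race: dive phase, then free swimming.
boundaries = [(2, "dive"), (5, "swim")]
states = segment_states(5, boundaries)          # ['dive', 'dive', 'swim', 'swim', 'swim']
detectors = {
    "dive": lambda frame: ("dive_detector", frame),   # stand-in detectors that
    "swim": lambda frame: ("swim_detector", frame),   # just tag each frame
}
print(track(range(5), states, detectors))
```

Switching detectors at segment boundaries is what lets a per-frame position estimate remain smooth even though a swimmer's appearance changes drastically between dive, underwater, and surface-swimming phases.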