150 results for equivariant path fields
Abstract:
Purpose – The paper attempts to project the future trend of the gender wage gap in Great Britain up to 2031. Design/methodology/approach – The empirical analysis utilises the British Household Panel Study Wave F together with Office for National Statistics (ONS) demographic projections. The methodology combines the ONS projections with assumptions relating to the evolution of educational attainment in order to project the future distribution of human capital skills and consequently the future size of the gender wage gap. Findings – The analysis suggests that gender wage convergence will be slow, with little female progress by 2031 unless there is a large rise in returns to female experience. Originality/value – The paper has projected the pattern of male and female skill acquisition together with the associated trend in wages up to 2031.
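The projection logic described in this abstract can be illustrated with a toy calculation. The Python sketch below is purely illustrative: the education shares, returns and experience figures are invented placeholders, not BHPS or ONS data, and it only shows how a projected skill distribution translates into a projected wage gap.

```python
import numpy as np

# Hypothetical education shares projected to 2031 (columns: low, mid, high skill).
# These numbers are invented placeholders, not BHPS or ONS figures.
shares = {
    "men":   np.array([0.30, 0.45, 0.25]),
    "women": np.array([0.28, 0.44, 0.28]),
}

# Assumed log-wage levels by skill group and returns to potential experience.
returns_skill = np.array([2.30, 2.55, 2.85])
returns_exper = {"men": 0.012, "women": 0.008}   # per year of experience
mean_exper = {"men": 22.0, "women": 19.0}        # projected average experience

def projected_mean_log_wage(group):
    """Weight log wages by the projected skill distribution and add experience returns."""
    return shares[group] @ returns_skill + returns_exper[group] * mean_exper[group]

gap = projected_mean_log_wage("men") - projected_mean_log_wage("women")
print(f"Projected log-wage gap: {gap:.3f}")
```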
Abstract:
The Australian e-Health Research Centre (AEHRC) recently participated in the ShARe/CLEF eHealth Evaluation Lab Task 1. The goal of this task is to identify mentions of disorders in free-text electronic health records and map those disorders to SNOMED CT concepts in the UMLS Metathesaurus. This paper details our participation in this ShARe/CLEF task. Our approaches are based on using the clinical natural language processing tool Metamap and Conditional Random Fields (CRF) to identify mentions of disorders and then map them to SNOMED CT concepts. Empirical results obtained on the 2013 ShARe/CLEF task highlight that our instance of Metamap (after filtering irrelevant semantic types), although achieving a high level of precision, is only able to identify a small proportion of disorders (about 21% to 28%) in free-text health records. On the other hand, the addition of the CRF models allows for a much higher recall (57% to 79%) of disorders, without appreciable detriment to precision. When evaluating the accuracy of the mapping of disorders to SNOMED CT concepts in the UMLS, we observe that the mapping obtained by our filtered instance of Metamap delivers state-of-the-art effectiveness if only spans identified by our system are considered (`relaxed' accuracy).
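As a rough illustration of the precision/recall trade-off and the 'relaxed' accuracy criterion reported above, the sketch below merges dictionary-matched spans with statistically predicted spans and scores only the spans the system actually found. The spans and concept identifiers are made-up stand-ins, not actual Metamap or CRF outputs or real UMLS codes.

```python
# Each span is (start, end, concept_id); concept IDs are hypothetical stand-ins
# for SNOMED CT concepts, not real UMLS identifiers.
metamap_spans = {(0, 12, "C_HYPOTHETICAL_1")}            # high precision, low recall
crf_spans     = {(0, 12, "C_HYPOTHETICAL_1"),
                 (30, 41, "C_HYPOTHETICAL_2")}            # adds recall
gold_spans    = {(0, 12, "C_HYPOTHETICAL_1"),
                 (30, 41, "C_HYPOTHETICAL_2"),
                 (55, 63, "C_HYPOTHETICAL_3")}

def merge(primary, secondary):
    """Union of spans; keep the primary system's concept when boundaries coincide."""
    merged = dict(((s, e), c) for s, e, c in secondary)
    merged.update(((s, e), c) for s, e, c in primary)
    return {(s, e, c) for (s, e), c in merged.items()}

def relaxed_accuracy(predicted, gold):
    """Accuracy of concept mapping, counted only over spans the system identified."""
    gold_by_span = {(s, e): c for s, e, c in gold}
    found = [(s, e, c) for s, e, c in predicted if (s, e) in gold_by_span]
    if not found:
        return 0.0
    correct = sum(c == gold_by_span[(s, e)] for s, e, c in found)
    return correct / len(found)

combined = merge(metamap_spans, crf_spans)
print("span recall:", len({(s, e) for s, e, _ in combined}
                          & {(s, e) for s, e, _ in gold_spans}) / len(gold_spans))
print("relaxed accuracy:", relaxed_accuracy(combined, gold_spans))
```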
Abstract:
Evidence suggests that both nascent and young firms (henceforth: “new firms”)—despite typically being small and resource-constrained—are sometimes able to innovate effectively. Such firms are seldom able to invest in lengthy and expensive development processes, which suggests that they may frequently rely instead on other pathways to generate innovativeness within the firm. In this paper, we develop and test arguments that “bricolage,” defined as making do by applying combinations of the resources at hand to new problems and opportunities, provides an important pathway to achieve innovation for new resource-constrained firms. Through bricolage, resource-constrained firms engage in the processes of “recombination” that are core to creating innovative outcomes. Based on a large longitudinal dataset, our results suggest that variations in the degree to which firms engage in bricolage behaviors can provide a broadly applicable explanation of innovativeness under resource constraints by new firms. We find no general support for our competing hypothesis that the positive effects may level off or even turn negative at high levels of bricolage.
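The competing hypotheses map naturally onto a regression with linear and quadratic bricolage terms, where a significantly negative quadratic coefficient would indicate a leveling-off or reversal. The sketch below uses synthetic data, not the study's longitudinal dataset, and simply shows that specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
bricolage = rng.uniform(1, 7, n)                         # hypothetical survey-scale scores
innovativeness = 0.4 * bricolage + rng.normal(0, 1, n)   # synthetic outcome

# Design matrix: intercept, bricolage, bricolage squared.
X = np.column_stack([np.ones(n), bricolage, bricolage ** 2])
beta, *_ = np.linalg.lstsq(X, innovativeness, rcond=None)
print("linear coefficient:    %.3f" % beta[1])
print("quadratic coefficient: %.3f" % beta[2])   # near zero here: no leveling-off
```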
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with practical applications in domains such as security surveillance and health care, it suffers from tremendous constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks and of determining optimal camera configurations.

Addressing the first problem of multi-object tracking and localisation requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum set of two images from the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path against locally generated descriptions using a probability maximisation process.

The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc. of the cameras must be chosen in a way that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
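One building block of the localisation approach described above, mapping each camera's image plane to the global coordinate frame from the positions the robot broadcasts, can be sketched as a standard direct linear transform (DLT) homography fit. The code below is a generic illustration with made-up correspondences, not the dissertation's implementation.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Estimate H (3x3) such that world ~ H * image, via the direct linear transform."""
    rows = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        rows.append([-x, -y, -1,  0,  0,  0, x * X, y * X, X])
        rows.append([ 0,  0,  0, -x, -y, -1, x * Y, y * Y, Y])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def image_to_world(H, pt):
    """Project an image-plane point into the global (ground plane) frame."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Hypothetical correspondences: robot observed at these pixels while broadcasting
# these global ground-plane coordinates.
img_pts   = [(100, 200), (400, 210), (390, 400), (120, 380)]
world_pts = [(0.0, 0.0), (3.0, 0.0), (3.0, 2.0), (0.0, 2.0)]

H = fit_homography(img_pts, world_pts)
print(image_to_world(H, (250, 300)))   # localise an object seen at this pixel
```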
Abstract:
An experimental dataset representing a typical flow field in a stormwater gross pollutant trap (GPT) was visualised. A technique was developed to apply the image-based flow visualisation (IBFV) algorithm to the raw dataset. Particle image velocimetry (PIV) software was previously used to capture the flow field data by tracking neutrally buoyant particles with a high speed camera. The dataset consisted of scattered 2D point velocity vectors and the IBFV visualisation facilitates flow feature characterisation within the GPT. The flow features played a pivotal role in understanding stormwater pollutant capture and retention behaviour within the GPT. It was found that the IBFV animations revealed otherwise unnoticed flow features and experimental artefacts. For example, a circular tracer marker in the IBFV program visually highlighted streamlines to investigate the possible flow paths of pollutants entering the GPT. The investigated flow paths were compared with the behaviour of pollutants monitored during experiments.
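A typical preliminary step for visualising such data is resampling the scattered PIV vectors onto a regular grid. The snippet below is a generic illustration using SciPy with synthetic vectors; it is not the GPT dataset or the IBFV implementation itself.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, (500, 2))            # scattered measurement locations
u = np.sin(2 * np.pi * pts[:, 1])            # synthetic x-velocity component
v = np.cos(2 * np.pi * pts[:, 0])            # synthetic y-velocity component

# Regular grid covering the measurement region.
gx, gy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
u_grid = griddata(pts, u, (gx, gy), method="linear")
v_grid = griddata(pts, v, (gx, gy), method="linear")

print(np.nanmean(np.hypot(u_grid, v_grid)))  # mean speed on the resampled grid
```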
Abstract:
This study presented a novel method for purification of three different grades of diatomite from China by scrubbing technique using sodium hexametaphosphate (SHMP) as dispersant combined with centrifugation. Effects of pH value and dispersant amount on the grade of purified diatomite were studied and the optimum experimental conditions were obtained. The characterizations of original diatomite and derived products after purification were determined by scanning electron microscopy (SEM), X-ray diffraction (XRD), infrared spectroscopy (IR) and specific surface area analyzer (BET). The results indicated that the pore size distribution, impurity content and bulk density of purified diatomite were improved significantly. The dispersive effect of pH and SHMP on the separation of diatomite from clay minerals was discussed systematically through zeta potential test. Additionally, a possible purification mechanism was proposed in the light of the obtained experimental results.
Abstract:
The Smart Fields programme has been active in Shell over the last decade and has delivered substantial benefits. In order to understand the value and to underpin strategies for the future implementation programme, a study was carried out to quantify the benefits to date. The study focused on value actually achieved, through increased production or lower costs, and provided an estimate of the total value achieved to date. Future benefits such as increased reserves or continued production gain were recorded separately. The paper describes the process followed in the benefits quantification. It identifies the key solutions and technologies and describes the mechanism used to understand the relation between solutions and value. Examples are given of value from various assets around the world, in both existing fields and green fields. Finally, the study provided the methodology for tracking of value. This helps Shell to estimate and track the benefits of the Smart Fields programme at company scale.
Abstract:
Much of the existing empirical research on journalism focuses largely on hard-news journalism, at the expense of its less traditional forms, particularly the soft-news areas of lifestyle and entertainment journalism. In focussing on one particular area of lifestyle journalism – the reporting of travel stories – this paper argues for renewed scholarly efforts in this increasingly important field. Travel journalism’s location at the intersection between information and entertainment, journalism and advertising, as well as its increasingly significant role in the representation of foreign cultures makes it a significant site for scholarly research. By reviewing existing research about travel journalism and examining in detail the special exigencies that constrain it, the article proposes a number of dimensions for future research into the production practices of travel journalism. These dimensions include travel journalism’s role in mediating foreign cultures, its market orientation, motivational aspects and its ethical standards.
Abstract:
This paper describes a novel optimum path planning strategy for long duration AUV operations in environments with time-varying ocean currents. These currents can exceed the maximum achievable speed of the AUV, as well as temporally expose obstacles. In contrast to most other path planning strategies, paths have to be defined in time as well as space. The solution described here exploits ocean currents to achieve mission goals with minimal energy expenditure, or a tradeoff between mission time and required energy. The proposed algorithm uses a parallel swarm search as a means to reduce the susceptibility to large local minima on the complex cost surface. The performance of the optimisation algorithms is evaluated in simulation and experimentally with the Starbug AUV using a validated ocean model of Brisbane’s Moreton Bay.
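A heavily simplified sketch of the parallel-swarm idea: each particle encodes a set of intermediate waypoints, and the swarm minimises an energy-like cost in an invented current field. This is a generic particle swarm optimiser, not the algorithm or cost model used with the Starbug AUV.

```python
import numpy as np

rng = np.random.default_rng(2)
start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
n_way, n_particles, iters = 3, 40, 200

def current(p):
    """Invented time-invariant current field used only for illustration."""
    return np.array([0.3 * np.sin(p[1]), 0.5 * np.cos(0.3 * p[0])])

def cost(way):
    """Energy-like cost: effort needed to realise each leg against the local current."""
    pts = np.vstack([start, way.reshape(n_way, 2), goal])
    total = 0.0
    for a, b in zip(pts[:-1], pts[1:]):
        needed = (b - a) - current((a + b) / 2)   # thrust vector for this leg
        total += np.linalg.norm(needed)
    return total

# Standard particle swarm update (global-best variant).
pos = rng.uniform(-2, 12, (n_particles, n_way * 2))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best cost:", pbest_cost.min())
```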
Abstract:
Increasingly the fields of Human Computer Interaction (HCI) and art are intersecting. Interactive artworks are being evaluated by HCI methods and artworks are being created that employ and repurpose technology for interactive environments. In this paper we steer a path between empirical and critical–theoretical traditions, and discuss HCI research and art works that also span this divide. We address concerns about ‘new’ ethnography raised by Crabtree et al. (2009) in “Ethnography Considered Harmful”, a critical essay that positions ethnographic and critical-theoretical views at odds with each other. We propose a mediated view for understanding interactions within open-ended interactive artworks that values both perspectives as we navigate boundaries between art practice and HCI.
Abstract:
A dry, non-hydrostatic sub-cloud model is used to simulate an isolated stationary downburst wind event to study the influence that topographic features have on the near-ground wind structure of these storms. It was generally found that storm maximum wind speeds could be increased by up to 30% because of the presence of a topographic feature at the location of maximum wind speeds. Comparing predicted velocity profile amplification with that of a steady-flow impinging jet, similar results were found despite the simplifications made in the impinging jet model. Comparison of these amplification profiles with those found in simulated boundary layer winds reveals reductions of up to 30% in the downburst cases. Downburst and boundary layer amplification profiles were shown to become more similar as the topographic feature height was reduced with respect to the outflow depth.
Abstract:
Convective downburst wind storms generate the peak annual gust wind speed for many parts of the non-cyclonic world at return periods of importance for ultimate limit state design. Despite this, there is little clear understanding of how to appropriately design for these wind events, given their significant dissimilarities to the boundary layer winds upon which most design is based. To enhance the understanding of the wind fields associated with these storms, a three-dimensional numerical model was developed to simulate a multitude of idealised downburst scenarios and to investigate their near-ground wind characteristics. Stationary and translating downdraft wind events in still and sheared environments were simulated, with baseline results showing good agreement with previous numerical work and full-scale observational data. Significant differences are shown in the normalised peak wind speed velocity profiles depending on the environmental wind conditions in the vicinity of the simulated event. When integrated over the height of mid- to high-rise structures, all simulated profiles are shown to produce wind loads smaller than an equivalent open terrain boundary layer profile matched at a height of 10 m. This suggests that for these structures the current design approach is conservative from an ultimate loading standpoint. Investigating the influence of topography on the structure of the simulated near-ground downburst wind fields, it is shown that these features amplify wind speeds in a manner similar to that expected for boundary layer winds, but the extent of amplification is reduced. The level of reduction is shown to be dependent on the depth of the simulated downburst outflow.
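The profile comparison described above amounts to integrating a velocity-pressure quantity over a structure's height. The toy calculation below uses made-up profile shapes, not the simulated data, and is only meant to show how two profiles matched at 10 m can be compared through a height-integrated load index; its numeric outcome is not intended to reproduce the paper's result.

```python
import numpy as np

def integrated_load(z, v):
    """Height-integrated dynamic pressure (proportional to v^2), trapezoid rule."""
    q = v ** 2
    return np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(z))

z = np.linspace(1.0, 100.0, 200)                    # heights over a tall structure (m)
v_bl = 40.0 * (z / 10.0) ** 0.14                    # open-terrain power law, 40 m/s at 10 m

# Invented nose-shaped downburst-like profile, re-matched to 40 m/s at 10 m.
shape = np.exp(-0.15 * z / 30.0) - np.exp(-3.2 * z / 30.0)
v_db = shape * (40.0 / np.interp(10.0, z, shape))

print("boundary layer load index:", integrated_load(z, v_bl))
print("downburst-like load index:", integrated_load(z, v_db))
```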
Abstract:
Introduction – Total scatter factor (or output factor) in megavoltage photon dosimetry is a measure of relative dose relating a certain field size to a reference field size. The use of solid phantoms is well established for output factor measurements; however, to date these phantoms have not been tested with small fields. In this work, we evaluate the water equivalency of a number of solid phantoms for small field output factor measurements using the EGSnrc Monte Carlo code. Methods – The following small square field sizes were simulated using BEAMnrc: 5, 6, 7, 8, 10 and 30 mm. Each simulated phantom geometry was created in DOSXYZnrc and consisted of a silicon diode (of length and width 1.5 mm and depth 0.5 mm) submersed in the phantom at a depth of 5 g/cm2. The source-to-detector distance was 100 cm for all simulations. The dose was scored in a single voxel at the location of the diode. Interaction probabilities and radiation transport parameters for each material were created using custom PEGS4 files. Results – A comparison of the resultant output factors in the solid phantoms with the same factors in a water phantom is shown in Fig. 1. The statistical uncertainty in each point was less than or equal to 0.4%. The results in Fig. 1 show that the density of the phantoms affected the output factor results, with higher density materials (such as PMMA) resulting in higher output factors. It was also calculated that scaling the depth for equivalent path length had a negligible effect on the output factor results at these field sizes. Discussion and conclusions – Electron stopping power and photon mass energy absorption change minimally with small field size [1]. Also, it can be seen from Fig. 1 that the difference from water decreases with increasing field size. Therefore, the most likely cause for the observed discrepancies in output factors is differing electron disequilibrium as a function of phantom density. When measuring small field output factors in a solid phantom, it is important that the density is very close to that of water.
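The quantity being simulated reduces to a ratio of scored doses. A minimal sketch of how an output factor and its statistical uncertainty could be combined from Monte Carlo tallies is given below; the dose values are invented placeholders, not the simulation results shown in Fig. 1.

```python
import math

def output_factor(dose_field, sigma_field, dose_ref, sigma_ref):
    """Total scatter factor: dose at the field size divided by dose at the reference
    field, with the relative Monte Carlo uncertainties combined in quadrature."""
    of = dose_field / dose_ref
    rel = math.sqrt((sigma_field / dose_field) ** 2 + (sigma_ref / dose_ref) ** 2)
    return of, of * rel

# Hypothetical scored doses (arbitrary units) for a 5 mm field vs. the 30 mm reference.
of, sigma = output_factor(0.612, 0.002, 0.981, 0.003)
print(f"output factor = {of:.3f} +/- {sigma:.3f}")
```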
Abstract:
A multimodal trip planner that produces optimal journeys involving both public transport and private vehicle legs has to solve a number of shortest path problems, both on the road network and the public transport network. The algorithms that are used to solve these shortest path problems have been researched since the late 1950s. However, in order to provide accurate journey plans that can be trusted by the user, the variability of travel times caused by traffic congestion must be taken into consideration. This requires the use of more sophisticated time-dependent shortest path algorithms, which have only been researched in depth over the last two decades, from the mid-1990s. This paper will review and compare nine algorithms that have been proposed in the literature, discussing the advantages and disadvantages of each algorithm on the basis of five important criteria that must be considered when choosing one or more of them to implement in a multimodal trip planner.
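For reference, the core of a basic label-setting time-dependent shortest path search, a common baseline for algorithms of this kind, can be sketched as below. The graph, departure time and travel-time functions are toy placeholders, and FIFO arc costs are assumed so that Dijkstra-style settling remains correct; this is not one of the nine algorithms reviewed in the paper.

```python
import heapq

def td_dijkstra(graph, source, target, depart):
    """Earliest-arrival search on a time-dependent graph.

    graph[u] is a list of (v, travel_time_fn) pairs, where travel_time_fn(t)
    returns the travel time on arc (u, v) when it is entered at time t.
    FIFO arcs are assumed (entering later never means arriving earlier)."""
    best = {source: depart}
    heap = [(depart, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > best.get(u, float("inf")):
            continue                      # stale heap entry
        if u == target:
            return t
        for v, tt in graph.get(u, []):
            arrive = t + tt(t)
            if arrive < best.get(v, float("inf")):
                best[v] = arrive
                heapq.heappush(heap, (arrive, v))
    return float("inf")

# Toy network: congestion on arc A->B doubles its travel time after t = 100.
graph = {
    "A": [("B", lambda t: 10 if t < 100 else 20), ("C", lambda t: 15)],
    "B": [("D", lambda t: 5)],
    "C": [("D", lambda t: 12)],
}
print(td_dijkstra(graph, "A", "D", depart=95))   # arrives at 110 via B
```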
Abstract:
Path integration is a process in which observers derive their location by integrating self-motion signals along their locomotion trajectory. Although the medial temporal lobe (MTL) is thought to take part in path integration, the scope of its role for path integration remains unclear. To address this issue, we administered a variety of tasks involving path integration and other related processes to a group of neurosurgical patients whose MTL was unilaterally resected as therapy for epilepsy. These patients were unimpaired relative to neurologically intact controls in many tasks that required integration of various kinds of sensory self-motion information. However, the same patients (especially those who had lesions in the right hemisphere) walked farther than the controls when attempting to walk without vision to a previewed target. Importantly, this task was unique in our test battery in that it allowed participants to form a mental representation of the target location and anticipate their upcoming walking trajectory before they began moving. Thus, these results put forth a new idea that the role of MTL structures for human path integration may stem from their participation in predicting the consequences of one's locomotor actions. The strengths of this new theoretical viewpoint are discussed.