924 results for Precise positioning


Relevance:

10.00%

Publisher:

Abstract:

Real-time kinematic (RTK) GPS techniques have been extensively developed for applications including surveying, structural monitoring, and machine automation. Two limitations of existing RTK techniques hinder their application for geodynamics purposes: (1) the achievable RTK accuracy is at the level of a few centimeters, and the uncertainty of the vertical component is 1.5–2 times worse than that of the horizontal components; and (2) the RTK position uncertainty grows in proportion to the base-to-rover distance. The key limiting factor behind both problems is the significant effect of residual tropospheric errors on the positioning solutions, especially on the highly correlated height component. This paper develops a geometry-specified troposphere decorrelation strategy to achieve subcentimeter kinematic positioning accuracy in all three components. The key is to set up a relative zenith tropospheric delay (RZTD) parameter to absorb the residual tropospheric effects and to solve the resulting model as an ill-posed problem using the regularization method. To compute a reasonable regularization parameter and obtain an optimal regularized solution, the covariance matrix of the positional parameters estimated without the RZTD parameter, which is characterized by the observation geometry, is used to replace the quadratic matrix of their "true" values. As a result, the regularization parameter is computed adaptively as the observation geometry varies. The experimental results show that the new method can efficiently alleviate the model's ill-conditioning and stabilize the solution from a single data epoch. Compared with results from the conventional least squares method, the new method improves the long-range RTK solution precision from several centimeters to the subcentimeter level in all components. More significantly, the precision of the height component is even higher. Several geoscience applications that require subcentimeter real-time solutions can benefit greatly from the proposed approach, such as real-time monitoring of earthquakes and large dams, high-precision GPS leveling, and refinement of the vertical datum. In addition, the high-resolution RZTD solutions can contribute to effective recovery of tropospheric slant path delays in order to establish a 4-D troposphere tomography.
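
The abstract describes solving the position-plus-RZTD model as an ill-posed problem via regularization, with the regularization parameter derived adaptively from the geometry-driven covariance of the position-only solution. The sketch below illustrates that idea in simplified form; the matrix names, the choice of regularizing only the RZTD term, and the trace-based parameter are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

def regularized_rtk_solution(A, y, W, n_pos=3):
    """Hedged sketch: solve y = A x + e, where x holds three position
    components plus one relative zenith tropospheric delay (RZTD)
    parameter, using Tikhonov-style regularization."""
    # Position-only design matrix (RZTD column assumed last).
    A_pos = A[:, :n_pos]
    cov_pos = np.linalg.inv(A_pos.T @ W @ A_pos)   # geometry-driven covariance
    # One simple, assumed choice of regularization parameter derived from
    # that covariance; the paper computes it adaptively from the geometry.
    alpha = 1.0 / np.trace(cov_pos)
    # Regularized normal equations for the full model (position + RZTD).
    N = A.T @ W @ A
    R = np.zeros_like(N)
    R[-1, -1] = 1.0                                # regularize the RZTD only
    x_hat = np.linalg.solve(N + alpha * R, A.T @ W @ y)
    return x_hat, alpha

# Tiny synthetic example: 8 double-differenced observations, 4 unknowns.
rng = np.random.default_rng(0)
A = np.hstack([rng.normal(size=(8, 3)), np.ones((8, 1))])  # last column maps RZTD
W = np.eye(8)
y = A @ np.array([0.01, -0.02, 0.03, 0.05]) + rng.normal(scale=0.002, size=8)
print(regularized_rtk_solution(A, y, W))
```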

Relevance:

10.00%

Publisher:

Abstract:

This chapter considers the complex literate repertoires of 21st-century children in multicultural primary classrooms in Adelaide, South Australia. It draws on the curricular and pedagogical work of two experienced primary school teachers who explore culture, race and class by positioning children as textual producers across a variety of media. In particular, we discuss two child-authored texts: A is for Arndale, a local alphabet book co-authored by children aged between eight and ten, and Cooking Afghani Style, a magazine-style film produced by a multi-age class of children (aged eight to thirteen) recently arrived in Australia. In the process of making these texts, primary children engaged in reading as a cultural practice, re-reading and re-writing their neighbourhoods and identities (both individual and collective). This involved frequent excursions to key local sites, both familiar and unfamiliar to the children. They investigated how diverse children experienced and lived their lives in particular places within changing communities.

Relevance:

10.00%

Publisher:

Abstract:

Automation technology can provide construction firms with a number of competitive advantages. Technology strategy guides a firm's approach to all technology, including automation. Engineering management educators, researchers, and construction industry professionals need a better understanding of how technology affects results and how to better target investments to improve competitive performance. A more formal approach to the concept of technology strategy can benefit construction managers in their efforts to remain competitive in increasingly hostile markets. This paper recommends consideration of five specific dimensions of technology strategy within the overall parameters of market conditions, firm capabilities and goals, and stage of technology evolution. Examples of the application of this framework to the formulation of technology strategy are provided for CAD applications, co-ordinated positioning technology, and advanced falsework and formwork mechanisation to support construction field operations. Results from this continuing line of research can assist managers in making complex and difficult decisions about re-engineering construction processes and adopting new construction technology, and can benefit future researchers by providing new tools for analysis. By managing technology to best suit the existing capabilities of their firm and by addressing market forces, engineering managers can better face the increasingly competitive environment in which they operate.

Relevance:

10.00%

Publisher:

Abstract:

Pragmatic construction professionals, accustomed to intense price competition and focused on the bottom line, have difficulty justifying investments in advanced technology. Researchers and industry professionals need improved tools to analyze how technology affects the performance of the firm. This paper reports the results of research that begins to answer the question, "Does technology matter?" The researchers developed a set of five dimensions of technology strategy, collected information on these dimensions along with four measures of competitive performance from five bridge construction firms, and analyzed the information to identify relationships between technology strategy and competitive performance. Three technology strategy dimensions (competitive positioning, depth of technology strategy, and organizational fit) showed particularly strong correlations with the competitive performance indicators of absolute growth in contract awards and contract award value per technical employee. These findings indicate that technology does matter. The research also provides managers with ways to analyze options for approaching technology and to relate technology to competitive performance, and it provides a valuable set of research measures for technology strategy.
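
The analysis described above amounts to correlating firm-level strategy scores with performance indicators. A minimal sketch of that kind of computation is shown below; all column names and numbers are fabricated for illustration only and are not the paper's data.

```python
import pandas as pd

# Hypothetical scores for five firms (illustrative values only): three
# technology-strategy dimensions and two competitive-performance indicators.
firms = pd.DataFrame({
    "competitive_positioning":  [3.2, 4.1, 2.5, 4.8, 3.9],
    "depth_of_strategy":        [2.8, 3.9, 2.2, 4.5, 3.6],
    "organizational_fit":       [3.0, 4.0, 2.7, 4.6, 3.4],
    "growth_in_awards":         [0.05, 0.18, -0.02, 0.25, 0.12],
    "awards_per_tech_employee": [1.1, 1.9, 0.8, 2.4, 1.5],
})

dims = ["competitive_positioning", "depth_of_strategy", "organizational_fit"]
perf = ["growth_in_awards", "awards_per_tech_employee"]

# Pearson correlations between each strategy dimension and each indicator.
print(firms[dims + perf].corr().loc[dims, perf].round(2))
```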

Relevance:

10.00%

Publisher:

Abstract:

During secondary fracture healing, various tissue types, including new bone, are formed. The local mechanical strains play an important role in tissue proliferation and differentiation. To further our mechanobiological understanding of fracture healing, a precise assessment of local strains is mandatory. Until now, static Finite Element (FE) analyses have assumed homogeneous material properties. With the recent quantification of both the spatial tissue patterns (Vetter et al., 2010) and the development of the elastic modulus of newly formed bone during healing (Manjubala et al., 2009), it is now possible to incorporate this heterogeneity. The aim of this study is therefore to investigate the effect of this heterogeneity on the strain patterns at six successive healing stages. The input data for the present work stemmed from a comprehensive cross-sectional study of sheep with a tibial osteotomy (Epari et al., 2006). In our FE model, each element containing bone was described by a bulk elastic modulus that depended on both the local area fraction and the local elastic modulus of the bone material. The obtained strains were compared with the results of hypothetical FE models assuming homogeneous material properties. The differences in the spatial distributions of the strains between the heterogeneous and homogeneous FE models were interpreted using a current mechanobiological theory (Isakson et al., 2006). This interpretation showed that considering the heterogeneity of the hard callus is most important at the intermediate stages of healing, when cartilage transforms into bone via endochondral ossification.
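
The abstract states that each bone-containing element receives a bulk elastic modulus derived from the local bone area fraction and the local modulus of the bone material. The sketch below shows one simple way to do that; the rule-of-mixtures form and the soft-tissue modulus value are assumptions for illustration, not necessarily the mixing rule used in the study.

```python
import numpy as np

def element_moduli(bone_area_fraction, bone_tissue_modulus,
                   soft_tissue_modulus=0.003):  # GPa, assumed soft-tissue value
    """Hedged sketch: assign each finite element a bulk elastic modulus from
    the local bone area fraction and the local bone-material modulus, using
    a simple rule of mixtures."""
    rho = np.clip(bone_area_fraction, 0.0, 1.0)
    return rho * bone_tissue_modulus + (1.0 - rho) * soft_tissue_modulus

# Example: three callus elements at an intermediate healing stage
# (area fractions and tissue moduli are illustrative).
print(element_moduli(np.array([0.1, 0.4, 0.8]),
                     np.array([5.0, 8.0, 12.0])))   # GPa
```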

Relevance:

10.00%

Publisher:

Abstract:

QUT Library's model of learning support brings together academic literacy (study skills) and information literacy (research skills). The blended portfolio enables holistic planning and development, seamless services, connected learning resources and more authentic curriculum-embedded education. The model reinforces the Library's strategic focus on learning service innovation and active engagement in teaching and learning.

The online learning strategy is a critical component of the broader literacies framework. This strategy unifies new and existing online resources (e.g. Pilot, QUT cite|write and IFN001|AIRS Online) to augment learner capability. Across the suite, prudent application of emerging technologies with visual communication and learning design delivers a wide range of adaptive study tools. Separately and together, these resources meet the learning needs and styles of a diverse cohort, providing positive and individual learning opportunities. Deliberate articulation with strategic directions regarding First Year Experience, assessment, retention and curriculum alignment ensures that the Library's initiatives move in step with institutional objectives relating to enhancing the student experience and flexible blended learning.

The release of Studywell in 2010 emphasises the continuing commitment to blended literacy education. Targeting undergraduate learners (particularly first-year/transition students), this online environment provides 24/7 access to practical study and research tools. Studywell's design and application of technology creates a "discovery infrastructure" [1] which facilitates greater self-directed learning and interaction with content.

This paper presents QUT Library's online learning strategy within the context of the parent "integrated literacies" framework. Highlighting the key online learning resources, the paper describes the inter-relationships between those resources in developing complementary literacies. The paper details broad aspects of the overarching learning and study support framework as well as the online strategy, including strategic positioning, quality and evaluation processes, maintenance, development, implementation, and client engagement and satisfaction with the learning resources.

Relevance:

10.00%

Publisher:

Abstract:

Earlier research developed theoretically based aggregate metrics for technology strategy and used them to analyze California bridge construction firms (Hampson, 1993). Determinants of firm performance, including the trend in contract awards, market share, and contract awards per employee, were used as indicators of competitive performance. The results of this research were a series of refined, theoretically based measures for technology strategy and a demonstrated positive relationship between technology strategy and competitive performance within the bridge construction sector. This research showed that three technology strategy dimensions (competitive positioning, depth of technology strategy, and organizational fit) have very strong correlations with the competitive performance indicators of absolute growth in contract awards and contract awards per employee. Both researchers and industry professionals need a better understanding of how technology affects results and how to better target investments to improve competitive performance in particular industry sectors. This paper builds on the previous findings by evaluating the strategic fit of firms' approaches to technology with industry segment characteristics. It begins with a brief overview of the background on technology strategy. The major sections of the paper describe niches and firms in an example infrastructure construction market, analyze appropriate technology strategies, and describe managerial actions to implement these strategies and support the business objectives of the firm.

Relevance:

10.00%

Publisher:

Abstract:

"How do you film a punch?" This question can be posed by actors, make-up artists, directors and cameramen. Though they can all ask the same question, they are not all seeking the same answer. Within a given domain, based on the roles they play, agents of the domain have different perspectives and they want the answers to their question from their perspective. In this example, an actor wants to know how to act when filming a scene involving a punch. A make-up artist is interested in how to do the make-up of the actor to show bruises that may result from the punch. Likewise, a director wants to know how to direct such a scene and a cameraman is seeking guidance on how best to film such a scene. This role-based difference in perspective is the underpinning of the Loculus framework for information management for the Motion Picture Industry. The Loculus framework exploits the perspective of agent for information extraction and classification within a given domain. The framework uses the positioning of the agent’s role within the domain ontology and its relatedness to other concepts in the ontology to determine the perspective of the agent. Domain ontology had to be developed for the motion picture industry as the domain lacked one. A rule-based relatedness score was developed to calculate the relative relatedness of concepts with the ontology, which were then used in the Loculus system for information exploitation and classification. The evaluation undertaken to date have yielded promising results and have indicated that exploiting perspective can lead to novel methods of information extraction and classifications.

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the tradeoff between energy consumption and localization performance in a mobile sensor network application. The focus is on augmenting GPS location with more energy-efficient location sensors to bound position estimate uncertainty in order to prolong node lifetime. We use empirical GPS and radio contact data from a large-scale animal tracking deployment to model node mobility, GPS performance and radio performance. These models are used to explore duty-cycling strategies for maintaining position uncertainty within specified bounds. We then explore the benefits of using short-range radio contact logging alongside GPS as an energy-inexpensive means of lowering uncertainty while the GPS is off, and we propose a versatile contact logging strategy that relies on RSSI ranging and GPS lock back-offs to reduce node energy consumption relative to GPS duty cycling. Results show that our strategy can cut node energy consumption by half while meeting application-specific positioning criteria.
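
The sketch below illustrates the general duty-cycling idea described above: let uncertainty grow while the GPS is off, cap it cheaply when a short-range radio contact is heard, and take an expensive GPS fix only when the application bound is threatened. All parameter values (energy costs, ranges, contact probability) are assumptions for illustration, not the deployment's measured figures.

```python
import random

def duty_cycle_simulation(steps=3600, bound_m=100.0, speed_m_s=0.5,
                          gps_cost_mj=50.0, radio_cost_mj=1.0,
                          contact_prob=0.05, rssi_range_m=30.0):
    """Illustrative sketch: keep position uncertainty under `bound_m` by
    switching the GPS on only when needed, and use opportunistic radio
    contacts (RSSI ranging) to cap uncertainty cheaply while the GPS is off."""
    uncertainty, energy_mj, fixes = 0.0, 0.0, 0
    for _ in range(steps):
        uncertainty += speed_m_s            # worst-case drift per second
        # Cheap contact logging: a heard neighbour bounds our position to
        # roughly the radio range around the contact point.
        if random.random() < contact_prob:
            energy_mj += radio_cost_mj
            uncertainty = min(uncertainty, rssi_range_m)
        # Expensive GPS fix only when the application bound is at risk.
        if uncertainty >= bound_m:
            energy_mj += gps_cost_mj
            uncertainty, fixes = 5.0, fixes + 1   # assumed GPS accuracy ~5 m
    return energy_mj, fixes

print(duty_cycle_simulation())
```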

Relevance:

10.00%

Publisher:

Abstract:

Increasing awareness of the benefits of stimulating entrepreneurial behaviour in small and medium enterprises has fostered strong interest in innovation programs. Recently, many Western countries have invested in design innovation to improve firm performance. This research presents some early findings from a study of companies that participated in a holistic approach to design innovation, where the outcomes include better business performance and better positioning in global markets. Preliminary findings from in-depth semi-structured interviews indicate the importance of firm openness to new ways of working and to developing new processes of strategic entrepreneurship. Implications for theory and practice are discussed.

Relevance:

10.00%

Publisher:

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence no "gold standard" test is currently available to assess tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and is the main motivation for the work described in this thesis.

In this study, changes in tear film surface quality (TFSQ) were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light occurs at the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for evaluating all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time series estimate of the TFSQ from the video recording. The routines extract a maximized area of analysis from each frame of the video recording, and a TFSQ metric is calculated within this area. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assessing the tear film and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ.

Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV. The DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique, identified during this clinical study, was its lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric ring pattern into an image of quasi-straight lines from which block statistics are extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves.

Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of the tear film dynamics; for instance, the model of the build-up phase provided insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to the considerations for selecting an appropriate model order so that the true derivative of the signal is accurately represented.

The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality.
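
The polar-transform block-processing metric described above can be sketched as follows: resample the annular Placido-ring region into polar coordinates so the rings become quasi-straight lines, then summarise local intensity variability per block. The function name, block sizes, and the use of per-block standard deviation are assumptions for illustration, not the thesis' exact routine.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_block_metric(frame, center, r_inner, r_outer,
                       n_r=64, n_theta=360, block=(8, 30)):
    """Hypothetical TFSQ metric: unwrap the Placido-ring region to polar
    coordinates and summarise local intensity variability per block."""
    cy, cx = center
    radii = np.linspace(r_inner, r_outer, n_r)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(radii, thetas, indexing="ij")
    ys = cy + R * np.sin(T)
    xs = cx + R * np.cos(T)
    # Resample the annular analysis area; rings become quasi-straight lines.
    polar = map_coordinates(frame.astype(float), [ys, xs], order=1)
    # Block statistics: standard deviation of each block, then averaged.
    br, bt = block
    nr, nt = (n_r // br) * br, (n_theta // bt) * bt
    blocks = polar[:nr, :nt].reshape(nr // br, br, nt // bt, bt)
    return blocks.std(axis=(1, 3)).mean()

# Synthetic frame with concentric rings centred at (128, 128).
yy, xx = np.mgrid[0:256, 0:256]
rings = 0.5 + 0.5 * np.cos(0.5 * np.hypot(yy - 128, xx - 128))
print(polar_block_metric(rings, center=(128, 128), r_inner=20, r_outer=100))
```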

Relevance:

10.00%

Publisher:

Abstract:

Analyzing security protocols has been an active area of research in recent years. Different types of tools have been developed to make the analysis process more precise, fast and easy. These tools, however, treat security protocols as black boxes that cannot easily be composed; it is difficult or impossible to perform a low-level analysis or to combine different tools with each other. This research uses Coloured Petri Nets (CPN) to analyze the OSAP trusted computing protocol. The OSAP protocol is modeled at different levels and analyzed using the state space method. The resulting model can be combined with models of other trusted computing protocols in future work.
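
For readers unfamiliar with the state space method, the sketch below shows the underlying idea in generic form: enumerate all reachable protocol states by breadth-first exploration of a transition relation. The toy session-establishment protocol and all names here are fabricated for illustration; this is not the CPN model of OSAP.

```python
from collections import deque

def explore_state_space(initial, transitions):
    """Generic state-space exploration (breadth-first reachability), the
    kind of analysis a CPN tool performs over a protocol model."""
    seen, frontier, edges = {initial}, deque([initial]), []
    while frontier:
        state = frontier.popleft()
        for label, nxt in transitions(state):
            edges.append((state, label, nxt))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen, edges

# Toy session-establishment protocol: states are (phase, session_open).
def toy_transitions(state):
    phase, session = state
    if phase == "idle":
        yield ("start_session", ("nonce_exchange", session))
    elif phase == "nonce_exchange":
        yield ("auth_ok", ("active", True))
        yield ("auth_fail", ("idle", False))
    elif phase == "active" and session:
        yield ("close", ("idle", False))

states, edges = explore_state_space(("idle", False), toy_transitions)
print(len(states), "reachable states,", len(edges), "transitions")
```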

Relevance:

10.00%

Publisher:

Abstract:

Suburbanisation has been a major international phenomenon in recent decades. Suburb-to-suburb routes are now the most widespread road journeys, and this has resulted in an increase in the distances travelled, particularly on faster suburban highways. The design of highways tends to over-simplify the driving task, which can result in decreased alertness. Driving behaviour is consequently impaired, and drivers are then more likely to be involved in road crashes. This is particularly dangerous on highways, where the speed limit is high. While effective countermeasures to this decrement in alertness do not currently exist, the development of in-vehicle sensors opens avenues for monitoring driving behaviour in real time. The aim of this study is to evaluate the driver's level of alertness in real time through surrogate measures that can be collected from in-vehicle sensors. Slow EEG activity is used as a reference to evaluate the driver's alertness. Data were collected in a driving simulator instrumented with an eye tracking system, a heart rate monitor and an electrodermal activity device (N=25 participants). Four different types of highways (driving scenarios of 40 minutes each) were implemented by varying the road design (amount of curves and hills) and the roadside environment (amount of buildings and traffic). We show with neural networks that reduced alertness can be detected in real time with an accuracy of 92% using lane positioning, steering wheel movement, head rotation, blink frequency, heart rate variability and skin conductance level. These results show that it is possible to assess a driver's alertness with surrogate measures. This methodology could be used to warn drivers of their alertness level through an in-vehicle device that monitors drivers' behaviour on highways in real time, and could therefore improve road safety.
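
The detection approach described above is essentially a supervised classifier over six surrogate measures with EEG-derived labels. The sketch below shows that pipeline shape with a small neural network; the data are synthetic stand-ins, and the resulting cross-validated accuracy is not the study's 92% figure.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the simulator data: six surrogate measures
# (lane positioning, steering wheel movement, head rotation, blink
# frequency, heart rate variability, skin conductance) with labels that
# would, in the study, come from slow EEG activity.
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                  random_state=0))
print(cross_val_score(clf, X, y, cv=5).mean())  # mean classification accuracy
```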

Relevance:

10.00%

Publisher:

Abstract:

Drivers' ability to react to unpredictable events deteriorates when they are exposed to highly predictable and uneventful driving tasks. Highway design reduces the driving task mainly to a lane-keeping manoeuvre. Such a task is monotonous, provides little stimulation, and contributes to crashes due to inattention. Research has shown that driver hypovigilance can be assessed with EEG measurements and that driving performance is impaired during prolonged monotonous driving tasks. This paper aims to show that two dimensions of monotony, namely road design and roadside variability, decrease vigilance and impair driving performance. This is the first study correlating hypovigilance and driver performance in varied monotonous conditions, particularly on a short time scale (a few seconds). We induced a vigilance decrement, as assessed with EEG, during a monotonous driving simulator experiment. Road monotony was varied through both road design and roadside variability. The driver's decrease in vigilance occurred due to both road design and road scenery monotony, and almost independently of the driver's sensation-seeking level. This impairment was also correlated with observable measurements from the driver, the car and the environment. During periods of hypovigilance, the driving performance impairment affected lane positioning, time to lane crossing, blink frequency, heart rate variability and non-specific electrodermal response rates. This work lays the foundation for the development of an in-vehicle device that prevents hypovigilance crashes on monotonous roads.
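
The short-time-scale correlation between EEG-assessed hypovigilance and observable driving measures could be examined along the lines of the sketch below, which computes a rolling correlation over windows of a few seconds. The signals, sampling rate, and window length are assumptions for illustration, not the paper's data or exact analysis.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in signals sampled at 1 Hz: an EEG vigilance index and
# the vehicle's lateral lane-position error over a 40-minute scenario.
rng = np.random.default_rng(1)
t = np.arange(2400)
vigilance = pd.Series(-0.0005 * t + rng.normal(scale=0.2, size=t.size))
lane_error = 0.3 - 0.8 * vigilance + rng.normal(scale=0.2, size=t.size)

# Correlate hypovigilance with lane-keeping error over short windows
# (a few seconds), mirroring the short-time-scale analysis described.
window = 10  # seconds
rolling_corr = vigilance.rolling(window).corr(lane_error)
print(rolling_corr.dropna().describe())
```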

Relevance:

10.00%

Publisher:

Abstract:

In the analysis of medical images for computer-aided diagnosis and therapy, segmentation is often required as a preliminary step. Medical image segmentation is a complex and challenging task due to the complexity of the images. The brain has a particularly complicated structure, and its precise segmentation is very important for detecting tumors, edema, and necrotic tissue in order to prescribe appropriate therapy. Magnetic Resonance Imaging (MRI) is an important diagnostic imaging technique for the early detection of abnormal changes in tissues and organs. It possesses good contrast resolution for different tissues and is therefore preferred over Computerized Tomography for brain studies; consequently, the majority of research in medical image segmentation concerns MR images. As the core of this research, a set of MR images was segmented using standard image segmentation techniques to isolate a brain tumor from the other regions of the brain. The resulting images from the different segmentation techniques were then compared with each other and analyzed by professional radiologists to determine which technique is the most accurate. Experimental results show that Otsu's thresholding method is the most suitable image segmentation method for segmenting a brain tumor from a Magnetic Resonance image.
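
A minimal sketch of Otsu-based segmentation on a single MR slice is shown below, assuming the tumor appears as a bright region (e.g. on a contrast-enhanced image); the function name, the morphological clean-up, and the synthetic test image are illustrative additions, and real pipelines add skull stripping and further post-processing.

```python
import numpy as np
from skimage import morphology
from skimage.filters import threshold_otsu

def segment_bright_region(mr_slice):
    """Sketch of Otsu's thresholding applied to one MR slice: pick a global
    threshold that separates the intensity histogram into two classes, then
    lightly clean up the resulting binary mask."""
    level = threshold_otsu(mr_slice)          # global Otsu threshold
    mask = mr_slice > level
    mask = morphology.remove_small_objects(mask, min_size=64)
    mask = morphology.remove_small_holes(mask, area_threshold=64)
    return mask

# Example with a synthetic slice: a bright blob on a darker, noisy background.
img = np.zeros((128, 128))
img[40:70, 50:85] = 1.0
img += np.random.default_rng(2).normal(scale=0.1, size=img.shape)
print(segment_bright_region(img).sum(), "pixels in the segmented region")
```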