969 results for level set method


Relevance:

30.00%

Publisher:

Summary:

The purpose of this paper is to advance our understanding of the contextual factors that influence the service bundling process in an organizational setting. Although previous literature contains insights into the mechanisms underlying bundling and the artefacts for performing the bundling task itself, the body of knowledge lacks a comprehensive framework for analysing the actual scenario in which the bundling process is performed. Such a framework is needed because the scenario influences both the bundling method and the IT support. We address this need by designing a morphological box for analysing bundling scenarios in different organizational settings. The factors featured in the box are systematised into four categories of bundling layers, which we identify from a review of the literature. The two core layers in the framework are service bundling on a type level and on an instance level (i.e. configuration). To demonstrate the applicability and utility of the proposed morphological box, we apply it to assess the underlying differences and commonalities of two different bundling scenarios from the B2B and G2C sectors, which stress the differences between bundling on the type and instance levels. In addition, we identify several prospects for future research that can benefit from the proposed morphological box.
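
As a rough illustration of the idea, a morphological box can be represented as factors grouped into layers, with a scenario characterised by picking one value per factor. The layer names, factor names and values below are invented for illustration and are not the paper's actual framework:

```python
# A morphological box: layers -> factors -> possible values
# (all names here are illustrative assumptions, not the paper's framework).
MORPHOLOGICAL_BOX = {
    "type-level bundling": {
        "bundling trigger": ["provider-driven", "customer-driven"],
        "bundle cardinality": ["fixed", "configurable"],
    },
    "instance-level bundling (configuration)": {
        "configuration actor": ["customer", "sales agent", "automated"],
        "constraint handling": ["none", "rule-based", "solver-based"],
    },
}

def describe_scenario(selections):
    """Characterise one bundling scenario by picking one value per factor;
    unknown values are rejected so scenarios stay comparable."""
    described = {}
    for layer, factors in MORPHOLOGICAL_BOX.items():
        for factor, options in factors.items():
            choice = selections[factor]
            if choice not in options:
                raise ValueError(f"{choice!r} is not a valid value for {factor!r}")
            described[(layer, factor)] = choice
    return described
```

Two scenarios (e.g. a B2B and a G2C one) can then be compared factor by factor to surface their differences and commonalities.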

This paper discusses human factors issues at low-cost railway level crossings in Australia. Several issues are discussed, including safety at passive railway level crossings, human factors considerations associated with the unavailability of a warning device, and a conceptual model of how safety could be compromised at railway level crossings following prolonged or frequent unavailability. The research plans to quantify the safety risk to motorists at level crossings using a Human Reliability Assessment (HRA) method, supported by data collected using an advanced driving simulator. This method aims to identify human error within the tasks and task units identified during the task analysis process. It is anticipated that, by modelling driver behaviour, the current study will be able to quantify meaningful task variability, including temporal parameters, both between and within participants. The performance of complex tasks such as driving through a level crossing is fundamentally context-bound. Therefore, this study also aims to quantify the performance-shaping factors that contribute to vehicle–train collisions by highlighting changes in the task units and in driver physiology. Finally, we will also consider a number of variables germane to ensuring the external validity of our results. Without this inclusion, such an analysis could seriously underestimate risk in the probabilistic risk assessment.
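
The HRA step described above can be sketched as combining per-task-unit human error probabilities (HEPs) with performance-shaping factor (PSF) multipliers. The task names and all numbers below are illustrative assumptions, not calibrated values from the study:

```python
def task_failure_probability(nominal_heps, psf_multipliers):
    """HRA-style sketch: each task unit has a nominal human error
    probability (HEP), scaled by a performance-shaping factor (PSF)
    multiplier; the crossing traversal fails if any unit fails
    (independence between units is assumed here for simplicity)."""
    p_success = 1.0
    for unit, hep in nominal_heps.items():
        adjusted = min(1.0, hep * psf_multipliers.get(unit, 1.0))
        p_success *= 1.0 - adjusted
    return 1.0 - p_success
```

Degrading a PSF (e.g. prolonged warning-device unavailability inflating the detection HEP) raises the overall failure probability, which is the mechanism the conceptual model describes.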

Pedestrians’ use of MP3 players or mobile phones can put them at risk of being hit by motor vehicles. We present an approach for detecting the crash risk level using the computing power and the microphone of mobile devices, which can be used to alert the user to an approaching vehicle in time to avoid a crash. A classifier built on a single feature extractor is usually unable to deal with the diversity of risky acoustic scenarios. In this paper, we address the problem of detecting vehicles approaching a pedestrian with a novel, simple, non-resource-intensive acoustic method. The method uses a set of existing statistical tools to mine signal features. Audio features are adaptively thresholded for relevance and classified with a three-component heuristic. The resulting Acoustic Hazard Detection (AHD) system has a very low false positive detection rate. The results of this study could help mobile device manufacturers embed the presented features into future portable devices and contribute to road safety.
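
A minimal sketch of the adaptive-thresholding idea: simple statistical features are mined from each audio frame and compared against thresholds that track the recent background, with a voting heuristic standing in for the paper's three-component classifier. The feature set, threshold rule and parameters are assumptions, not the AHD system's actual design:

```python
import numpy as np

def extract_features(frame):
    """Simple statistical features from one audio frame (an assumed
    feature set; the paper's exact features are not specified here)."""
    return np.array([
        np.sqrt(np.mean(frame ** 2)),                   # RMS energy
        np.mean(np.abs(np.diff(np.sign(frame)))) / 2,   # zero-crossing rate
        np.std(frame),                                  # amplitude spread
    ])

def adaptive_threshold(history, k=2.0):
    """Relevance threshold that adapts to the recent background level."""
    return history.mean(axis=0) + k * history.std(axis=0)

def detect_hazard(frames, warmup=10, votes_needed=2):
    """Heuristic detector: flag a frame as risky when at least
    `votes_needed` of the features exceed their adaptive thresholds."""
    history = [extract_features(f) for f in frames[:warmup]]
    alerts = []
    for frame in frames[warmup:]:
        feats = extract_features(frame)
        thresh = adaptive_threshold(np.array(history))
        alerts.append(int(np.sum(feats > thresh) >= votes_needed))
        history.append(feats)
    return alerts
```

Requiring multiple feature "votes" before alerting is one common way to keep the false positive rate low.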

Collaborative question answering (cQA) portals such as Yahoo! Answers allow users, as askers or answer authors, to communicate and exchange information through the asking and answering of questions in the network. In their current set-up, answers to a question are arranged in chronological order. For effective information retrieval, it would be advantageous to have users’ answers ranked according to their quality. This paper proposes a novel approach for evaluating and ranking users’ answers and recommending the top-n quality answers to information seekers. The proposed approach is based on a user-reputation method which assigns a score to an answer reflecting its author’s reputation level in the network. The approach is evaluated on a dataset collected from a live cQA, namely Yahoo! Answers. To compare against the results obtained by the non-content-based user-reputation method, experiments were also conducted with several content-based methods that assign a score to an answer reflecting its content quality. Various combinations of non-content-based and content-based scores were also compared. Empirical analysis shows that the proposed method is able to rank users’ answers and recommend the top-n answers with good accuracy. The proposed method outperforms the content-based methods, their various combinations, and the popular link analysis method HITS.
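
A toy sketch of reputation-based ranking: a hypothetical reputation score per user (here, the fraction of their past answers chosen as best) is used to order a question's answers and return the top n. This scoring rule is an assumption for illustration, not the paper's actual reputation method:

```python
from collections import defaultdict

def reputation_scores(answers, best_answer_ids):
    """Hypothetical reputation: fraction of a user's answers chosen as best.
    `answers` is a list of (answer_id, user_id); `best_answer_ids` is a set."""
    totals, bests = defaultdict(int), defaultdict(int)
    for aid, uid in answers:
        totals[uid] += 1
        bests[uid] += aid in best_answer_ids
    return {u: bests[u] / totals[u] for u in totals}

def rank_answers(question_answers, reputation, n=3):
    """Score each answer by its author's reputation; return the top-n IDs."""
    scored = sorted(question_answers,
                    key=lambda a: reputation.get(a[1], 0.0),
                    reverse=True)
    return [aid for aid, _ in scored[:n]]
```

The key property this illustrates is that the score is non-content-based: it depends only on who wrote the answer, not on the answer text.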

This paper presents a method for automatic terrain classification, using a cheap monocular camera in conjunction with a robot’s stall sensor. A first step is to have the robot generate a training set of labelled images. Several techniques are then evaluated for preprocessing the images, reducing their dimensionality, and building a classifier. Finally, the classifier is implemented and used online by an indoor robot. Results are presented, demonstrating an increased level of autonomy.
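
The pipeline can be sketched with a simple stand-in for each stage: PCA for dimensionality reduction and a nearest-centroid classifier, trained on flattened images the robot labels via its stall sensor. These particular techniques are assumptions; the paper evaluates several alternatives for each stage:

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA via SVD on centred data; return the mean and top-k components."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def pca_transform(X, mu, components):
    """Project data onto the learned low-dimensional subspace."""
    return (X - mu) @ components.T

class NearestCentroid:
    """Minimal classifier: predict the class whose mean feature is closest."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

Here the stall sensor supplies the labels automatically (stalled = untraversable terrain), which is what makes the self-supervised training set possible.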

The Texas Department of Transportation (TxDOT) is concerned about the widening gap between preservation needs and available funding. Funding levels are not adequate to meet the preservation needs of the roadway network; therefore, projects listed in the 4-Year Pavement Management Plan must be ranked to determine which should be funded now and which can be postponed to a later year. Currently, each district uses locally developed methods to prioritize projects. These ranking methods have relied on less formal, qualitative assessments based on engineers’ subjective judgment. It is important for TxDOT to have a 4-Year Pavement Management Plan that uses a transparent, rational project ranking process. The objective of this study is to develop a conceptual framework that describes the development of the 4-Year Pavement Management Plan. The framework can be divided into three steps: 1) a network-level project screening process, 2) a project-level project ranking process, and 3) an economic analysis. A rational pavement management procedure and a project ranking method accepted by the districts and the TxDOT administration will maximize efficiency in budget allocation and will potentially help improve pavement condition. As part of the implementation of the 4-Year Pavement Management Plan, the Network-Level Project Screening (NLPS) tool, including the candidate project identification algorithm and the preliminary project ranking matrix, was developed. The NLPS has been used by the Austin District Pavement Engineer (DPE) to evaluate PMIS (Pavement Management Information System) data and to prepare a preliminary list of candidate projects for further evaluation.
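
The first two steps, network-level screening followed by a preliminary ranking matrix, might look like the following sketch. The field names, threshold and weights are illustrative assumptions, not TxDOT's actual criteria:

```python
def screen_candidates(sections, score_threshold=70):
    """Network-level screening: keep sections whose overall PMIS condition
    score falls below a trigger value (field names are illustrative)."""
    return [s for s in sections if s["condition_score"] < score_threshold]

def rank_projects(candidates, weights=None):
    """Preliminary ranking matrix: weighted sum of normalised criteria,
    highest priority first. These weights are assumptions, not TxDOT's."""
    weights = weights or {"distress": 0.5, "ride": 0.3, "traffic": 0.2}
    def priority(section):
        return sum(w * section[k] for k, w in weights.items())
    return sorted(candidates, key=priority, reverse=True)
```

Making the weights explicit is what turns a subjective district-by-district judgment into the transparent, reproducible ranking process the study calls for.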

The accuracy of marker placement on palpable surface anatomical landmarks is an important consideration in biomechanics. Although marker placement reliability has been studied in some depth, it remains unclear whether or not the markers are accurately positioned over the intended landmark in order to define the static position and orientation of the segment. A novel method using commonly available X-ray imaging was developed to identify the accuracy of markers placed on the shoe surface by palpating landmarks through the shoe. An anterior–posterior and lateral–medial X-ray was taken on 24 participants with a newly developed marker set applied to both the skin and shoe. The vector magnitude of both skin- and shoe-mounted markers from the anatomical landmark was calculated, as well as the mean marker offset between skin- and shoe-mounted markers. The accuracy of placing markers on the shoe relative to the skin-mounted markers, accounting for shoe thickness, was less than 5 mm for all markers studied. Further, when using the developed guidelines provided in this study, the method was deemed reliable (intra-rater ICCs = 0.50–0.92). In conclusion, the method proposed here can reliably assess marker placement accuracy on the shoe surface relative to chosen anatomical landmarks beneath the skin.
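
The offset computation described above reduces to a vector magnitude between digitised marker and landmark coordinates. A simplified 2D sketch (the way the shoe-thickness correction is applied here is an assumption about the adjustment, not the paper's exact procedure):

```python
import numpy as np

def marker_offset(landmark_xy, marker_xy):
    """Vector magnitude (in mm) of a marker from the X-ray landmark (2D)."""
    return float(np.linalg.norm(np.asarray(marker_xy, dtype=float)
                                - np.asarray(landmark_xy, dtype=float)))

def shoe_marker_accuracy(landmark, skin_marker, shoe_marker, shoe_thickness):
    """Offset of the shoe-mounted marker relative to the skin-mounted one,
    with the shoe's material thickness subtracted (simplified treatment)."""
    d = marker_offset(skin_marker, shoe_marker)
    return max(0.0, d - shoe_thickness)
```

In practice the coordinates would be digitised from the anterior–posterior and lateral–medial radiographs, giving two planar offsets per marker.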

This thesis develops a detailed conceptual design method and a system software architecture, defined with a parametric and generative evolutionary design system, to support an integrated, interdisciplinary building design approach. The research recognises the need to shift design effort toward the earliest phases of the design process to support crucial design decisions that have a substantial cost implication for the overall project budget. The overall motivation of the research is to improve the quality of designs produced at the author's employer, the General Directorate of Major Works (GDMW) of the Saudi Arabian Armed Forces. GDMW produces many buildings that have standard requirements across a wide range of environmental and social circumstances. A rapid means of customising designs for local circumstances would have significant benefits. The research considers the use of evolutionary genetic algorithms in the design process and their ability to generate and assess a wider range of potential design solutions than a human could manage. This wider-ranging assessment, during the early stages of the design process, means that the generated solutions will be more appropriate for the defined design problem. The research proposes a design method and system that promote a collaborative relationship between human creativity and computer capability. A tectonic design approach is adopted as a process-oriented approach that values the process of design as much as the product. The aim is to connect the evolutionary systems to performance assessment applications, which are used as prioritised fitness functions. This will produce design solutions that respond to their environmental and functional requirements. This integrated, interdisciplinary approach will produce solutions through a design process that considers and balances the requirements of all aspects of the design.
Since this thesis covers a wide area of research material, a 'methodological pluralism' approach was used, incorporating both prescriptive and descriptive research methods. Multiple models of research were combined and the overall research was undertaken in three main stages: conceptualisation, development and evaluation. The first two stages lay the foundations for the specification of the proposed system, in which key aspects of the system that had not previously been proven in the literature were implemented to test its feasibility. As a result of combining the existing knowledge in the area with the newly verified key aspects of the proposed system, this research can form the basis for a future software development project. The evaluation stage, which includes building a prototype system to test and evaluate the system's performance against the criteria defined in the earlier stage, is not within the scope of this thesis. The research results in a conceptual design method and a proposed system software architecture. The proposed system is called the 'Hierarchical Evolutionary Algorithmic Design (HEAD) System'. The HEAD system has been shown to be feasible through an initial illustrative paper-based simulation. The HEAD system consists of two main components: the 'Design Schema' and the 'Synthesis Algorithms'. The HEAD system reflects the major research contribution in the way it is conceptualised, while secondary contributions are achieved within the system components. The design schema provides constraints on the generation of designs, enabling the designer to create a wide range of potential designs that can then be analysed for desirable characteristics. The design schema supports the digital representation of designers' creativity in a dynamic design framework that can be encoded and then executed through the use of evolutionary genetic algorithms.
The design schema incorporates 2D and 3D geometry and graph theory for space layout planning and building formation using the Lowest Common Design Denominator (LCDD) of a parameterised 2D module and a 3D structural module. This provides a bridge between the standard adjacency requirements and the evolutionary system. The use of graphs as an input to the evolutionary algorithm supports the introduction of constraints in a way that is not supported by standard evolutionary techniques. The process of design synthesis is guided as a higher level description of the building that supports geometrical constraints. The Synthesis Algorithms component analyses designs at four levels, 'Room', 'Layout', 'Building' and 'Optimisation'. At each level multiple fitness functions are embedded into the genetic algorithm to target the specific requirements of the relevant decomposed part of the design problem. Decomposing the design problem to allow for the design requirements of each level to be dealt with separately and then reassembling them in a bottom up approach reduces the generation of non-viable solutions through constraining the options available at the next higher level. The iterative approach, in exploring the range of design solutions through modification of the design schema as the understanding of the design problem improves, assists in identifying conflicts in the design requirements. Additionally, the hierarchical set-up allows the embedding of multiple fitness functions into the genetic algorithm, each relevant to a specific level. This supports an integrated multi-level, multi-disciplinary approach. The HEAD system promotes a collaborative relationship between human creativity and the computer capability. The design schema component, as the input to the procedural algorithms, enables the encoding of certain aspects of the designer's subjective creativity. 
By focusing on finding solutions to the relevant sub-problems at the appropriate levels of detail, the hierarchical nature of the system assists the design decision-making process.
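
A minimal sketch of the hierarchical evolutionary idea: a generic genetic algorithm is run at one level, and its best result constrains the fitness function of the next level up, so each level's search space is reduced before reassembly. The GA operators and the fitness functions are toy stand-ins, not the HEAD system's performance assessments:

```python
import random

def evolve(population, fitness, generations=50, mut_rate=0.1, rng=None):
    """Minimal genetic algorithm: truncation selection, one-point
    crossover, per-gene mutation. Genomes are lists of floats in [0, 1]."""
    rng = rng or random.Random(0)
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: len(scored) // 2]          # keep the fitter half
        children = []
        while len(children) < len(population):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(a))            # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([g if rng.random() > mut_rate else rng.random()
                             for g in child])
        population = children
    return max(population, key=fitness)

# Hierarchical use, as in the HEAD concept: the best room-level genome
# constrains the layout-level problem (both fitness functions are toy
# stand-ins for the system's performance-assessment applications).
def room_fitness(genome):
    return -abs(genome[0] - 0.6)      # e.g. target some room proportion

def layout_fitness_for(best_room):
    return lambda genome: -abs(sum(genome) - best_room[0])
```

Embedding a different fitness function at each level ('Room', 'Layout', 'Building', 'Optimisation') and passing results bottom-up is what prunes non-viable options before the next level searches.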

Calcium silicate (CaSiO3, CS) ceramics have received significant attention for application in bone regeneration due to their excellent in vitro apatite-mineralization ability; however, preparing porous CS scaffolds with a controllable pore structure for bone tissue engineering remains a challenge. Conventional methods cannot efficiently control the pore structure and mechanical strength of CS scaffolds, resulting in unstable in vivo osteogenesis. This study sets out to solve these problems by applying a modified 3D-printing method to prepare highly uniform CS scaffolds with a controllable pore structure and improved mechanical strength. The in vivo osteogenesis of the prepared 3D-printed CS scaffolds was further investigated by implanting them in femur defects in rats. The results show that the CS scaffolds prepared by the modified 3D-printing method have a uniform scaffold morphology. The pore size and pore structure of the CS scaffolds can be efficiently adjusted. The compressive strength of the 3D-printed CS scaffolds is around 120 times that of conventional polyurethane-templated CS scaffolds. The 3D-printed CS scaffolds possess excellent apatite-mineralization ability in simulated body fluids. Micro-CT analysis has shown that 3D-printed CS scaffolds play an important role in assisting the regeneration of bone defects in vivo. The healing of bone defects implanted with 3D-printed CS scaffolds is markedly better than that with 3D-printed β-tricalcium phosphate (β-TCP) scaffolds at both 4 and 8 weeks. Hematoxylin and eosin (H&E) staining shows that 3D-printed CS scaffolds induce higher-quality newly formed bone than 3D-printed β-TCP scaffolds. Immunohistochemical analyses have further shown stronger expression of human type I collagen (COL1) and alkaline phosphatase (ALP) in the bone matrix of the 3D-printed CS scaffolds than of the 3D-printed β-TCP scaffolds.
Considering these important advantages, such as controllable structural architecture, significantly improved mechanical strength, excellent in vivo osteogenesis and the absence of a second sintering step, the prepared 3D-printed CS scaffolds are a promising material for application in bone regeneration.

Background: When large-scale trials investigate the effects of interventions on appetite, it is paramount to monitor large amounts of human data efficiently. The original hand-held Electronic Appetite Ratings System (EARS) was designed to facilitate the administration and data management of visual analogue scales (VAS) of subjective appetite sensations. The purpose of this study was to validate a novel hand-held method (EARS II (HP® iPAQ)) against the standard pen-and-paper (P&P) method and the previously validated EARS. Methods: Twelve participants (5 male, 7 female, aged 18–40) took part in a fully repeated-measures design. Participants were randomly assigned, in a crossover design, to either high-fat (>48% fat) or low-fat (<28% fat) meal days, one week apart, and completed ratings using the three data capture methods ordered according to a Latin square. The first set of appetite sensations was completed in a fasted state, immediately before a fixed breakfast. Thereafter, appetite sensations were completed every thirty minutes for 4 h. An ad libitum lunch was provided immediately before completing a final set of appetite sensations. Results: Repeated-measures ANOVAs were conducted for ratings of hunger, fullness and desire to eat. There were no significant differences between P&P and either EARS or EARS II (p > 0.05). Correlation coefficients between P&P and EARS II, controlling for age and gender, were computed on area under the curve (AUC) ratings. R² values for hunger (0.89), fullness (0.96) and desire to eat (0.95) were statistically significant (p < 0.05). Conclusions: EARS II was sensitive to the impact of a meal and the recovery of appetite during the postprandial period and is therefore an effective device for monitoring appetite sensations. This study provides evidence and support for further validation of the novel EARS II method for monitoring appetite sensations during large-scale studies.
The added versatility means that future uses of the system provide the potential to monitor a range of other behavioural and physiological measures often important in clinical and free-living trials.
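
The area-under-the-curve summary used in the analysis can be computed with the trapezoidal rule over the half-hourly VAS ratings. A minimal sketch (assuming time points in minutes and ratings in mm, as is typical for 100 mm VAS):

```python
def auc_trapezoid(times_min, ratings_mm):
    """Area under the curve of VAS appetite ratings (trapezoidal rule),
    a standard way to summarise a postprandial rating profile."""
    area = 0.0
    for (t0, r0), (t1, r1) in zip(zip(times_min, ratings_mm),
                                  zip(times_min[1:], ratings_mm[1:])):
        area += (r0 + r1) / 2 * (t1 - t0)
    return area
```

Each participant's hunger, fullness and desire-to-eat profiles reduce to one AUC value per method, which is what the P&P-versus-EARS II correlations are computed on.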

The major limitation of current typing methods for Streptococcus pyogenes, such as emm sequence typing and T typing, is that these are based on regions subject to considerable selective pressure. Multilocus sequence typing (MLST) is a better indicator of the genetic backbone of a strain but is not widely used due to high costs. The objective of this study was to develop a robust and cost-effective alternative to S. pyogenes MLST. A 10-member single nucleotide polymorphism (SNP) set that provides a Simpson’s Index of Diversity (D) of 0.99 with respect to the S. pyogenes MLST database was derived. A typing format involving high-resolution melting (HRM) analysis of small fragments nucleated by each of the resolution-optimized SNPs was developed. The fragments were 59–119 bp in size and, based on differences in G+C content, were predicted to generate three to six resolvable HRM curves. The combination of curves across each of the 10 fragments can be used to generate a melt type (MelT) for each sequence type (ST). The 525 STs currently in the S. pyogenes MLST database are predicted to resolve into 298 distinct MelTs and the method is calculated to provide a D of 0.996 against the MLST database. The MelTs are concordant with the S. pyogenes population structure. To validate the method we examined clinical isolates of S. pyogenes of 70 STs. Curves were generated as predicted by G+C content discriminating the 70 STs into 65 distinct MelTs.
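
Simpson's Index of Diversity, used above to evaluate both the SNP set and the MelT scheme, is the probability that two isolates drawn at random (without replacement) belong to different types. It can be computed directly from type counts:

```python
from collections import Counter

def simpsons_diversity(type_assignments):
    """Simpson's Index of Diversity: D = 1 - sum(n_i(n_i-1)) / (N(N-1)),
    the probability that two randomly drawn isolates differ in type."""
    counts = Counter(type_assignments).values()
    n = sum(counts)
    return 1 - sum(c * (c - 1) for c in counts) / (n * (n - 1))
```

Applied to the type assignments of a strain collection, D approaches 1 as a typing scheme resolves more strains into distinct types, which is how values such as 0.99 and 0.996 above are obtained.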

The research team recognized the value of network-level Falling Weight Deflectometer (FWD) testing for evaluating the structural condition trends of flexible pavements. However, practical limitations, including the cost of testing, traffic control and safety concerns, and the difficulty of testing a large network, may discourage some agencies from conducting network-level FWD testing. For this reason, the surrogate measure of the Structural Condition Index (SCI) is suggested. The main purpose of the research presented in this paper is to investigate data mining strategies and to develop a method of predicting structural condition trends for network-level applications that does not require FWD testing. The research team first evaluated the existing and historical pavement condition, distress, ride, traffic and other data attributes in the Texas Department of Transportation (TxDOT) Pavement Management Information System (PMIS), applied data mining strategies to the data, discovered useful patterns and knowledge for SCI value prediction, and finally provided a reasonable measure of pavement structural condition that is correlated with the SCI. To evaluate the performance of the developed prediction approach, a case study was conducted using SCI data calculated from FWD data collected on flexible pavements over a 5-year period (2005–09) from 354 PMIS sections representing 37 pavement sections on the Texas highway system. The preliminary results showed that the proposed approach can be used as a supportive pavement structural index when FWD deflection data are not available, and can help pavement managers identify the timing and appropriate treatment level of preventive maintenance activities.
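
A simple stand-in for the prediction approach: fit a least-squares linear model from PMIS-style attributes to SCI on sections where FWD data exist, then predict SCI where they do not. The linear form and feature choice are assumptions; the paper applies broader data mining strategies:

```python
import numpy as np

def fit_sci_model(pmis_features, sci_values):
    """Least-squares linear model predicting SCI from PMIS attributes
    (e.g. distress score, ride score); an intercept column is prepended."""
    X = np.column_stack([np.ones(len(pmis_features)), pmis_features])
    coef, *_ = np.linalg.lstsq(X, sci_values, rcond=None)
    return coef

def predict_sci(coef, pmis_features):
    """Predict SCI for sections without FWD deflection data."""
    X = np.column_stack([np.ones(len(pmis_features)), pmis_features])
    return X @ coef
```

The fitted model plays the role of the surrogate: the predicted SCI fills in for FWD-derived structural condition in years or sections where deflection testing was skipped.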

Modern structural diagnosis relies on vibration characteristics to assess the serviceability level of a structure. This paper examines the potential of the change-in-flexibility method for use in damage detection, together with two main practical constraints associated with it. The first constraint addressed is the reduction in the number of data acquisition points due to a limited number of sensors. The results show that the accuracy of the change-in-flexibility method is influenced by the number of data acquisition points/sensor locations in real structures. Secondly, the effect of higher modes on the damage detection process is studied, addressing the difficulty of extracting higher-order modal data with the available sensors. Four damage indices are presented to assess their potential for damage detection with respect to different locations and severities of damage. A simply supported beam with two degrees of freedom at each node is considered, and only single-damage cases are treated throughout the paper.
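
The change-in-flexibility method builds the modal flexibility matrix F = Σᵢ φᵢφᵢᵀ/ωᵢ² from the first few mass-normalised mode shapes and natural frequencies, then compares undamaged and damaged states; because of the 1/ωᵢ² weighting, only a few low modes are needed, which is why sensor count and higher-mode availability matter. A minimal sketch (the column-maximum damage indicator below is one common choice, not necessarily one of the paper's four indices):

```python
import numpy as np

def flexibility_matrix(mode_shapes, frequencies_hz):
    """Modal flexibility F = sum_i phi_i phi_i^T / omega_i^2, from
    mass-normalised mode shapes (columns) and natural frequencies (Hz)."""
    F = np.zeros((mode_shapes.shape[0],) * 2)
    for phi, f in zip(mode_shapes.T, frequencies_hz):
        omega = 2 * np.pi * f
        F += np.outer(phi, phi) / omega ** 2
    return F

def change_in_flexibility(F_undamaged, F_damaged):
    """Per-DOF damage indicator: the largest absolute change in the
    corresponding column of the flexibility matrix; damage tends to
    raise flexibility most near the damaged location."""
    return np.abs(F_damaged - F_undamaged).max(axis=0)
```

With measured modal data from both states, the degree of freedom with the largest indicator points to the likely damage location.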