453 results for Iterative Methods


Relevance:

20.00%

Publisher:

Abstract:

Background: Whether suicide in China shows significant seasonal variation is unclear. The aim of this study was to examine the seasonality of suicide in Shandong, China, and to assess the associations of suicide seasonality with gender, residence, age and method of suicide. Methods: Three tests (the chi-square test, Edwards' T and Roger's log method) were used to detect seasonality in suicide data extracted from the official mortality data of the Shandong Disease Surveillance Point (DSP) system. Peak/low ratios (PLRs) and 95% confidence intervals (CIs) were calculated to indicate the magnitude of seasonality. Results: A statistically significant seasonality was observed, with a single peak in suicide rates in spring and early summer and a dip in winter, and this pattern remained relatively consistent over the years. Regardless of gender, suicide seasonality was more pronounced in rural areas, in younger age groups and for non-violent methods, in particular self-poisoning by pesticide. Conclusions: There are statistically significant seasonal variations in completed suicide for both men and women in Shandong, China. Differences exist across residence (urban/rural), age groups and suicide methods. The results appear to support a sociological explanation of suicide seasonality.
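To make the seasonality testing concrete: below is a minimal sketch of a chi-square goodness-of-fit test on monthly counts, adjusted for unequal month lengths, together with a peak/low ratio. The monthly counts are invented for illustration, not data from the study, and Edwards' T and Roger's log method are not reproduced here.

```python
# Minimal sketch: chi-square test for seasonality of monthly event counts,
# with expected counts adjusted for unequal month lengths. Counts below are
# made-up illustrative numbers, not data from the study.
import numpy as np
from scipy.stats import chi2

days = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
counts = np.array([80, 85, 100, 110, 120, 115, 95, 90, 85, 80, 75, 70])

expected = counts.sum() * days / days.sum()   # uniform-daily-rate hypothesis
stat = ((counts - expected) ** 2 / expected).sum()
p = chi2.sf(stat, df=11)                      # 12 months -> 11 degrees of freedom

# Peak/low ratio (PLR): highest to lowest monthly rate, length-adjusted.
rates = counts / days
plr = rates.max() / rates.min()
print(f"chi2 = {stat:.2f}, p = {p:.4f}, PLR = {plr:.2f}")
```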

Relevance:

20.00%

Publisher:

Abstract:

We present an iterative hierarchical algorithm for multi-view stereo. The algorithm attempts to utilise as much contextual information as is available to compute highly accurate and robust depth maps. There are three novel aspects to the approach: 1) we incrementally improve the depth fidelity as the algorithm progresses through the image pyramid; 2) we show how to incorporate visual hull information (when available) to constrain depth searches; and 3) we show how to enforce the consistency of each depth map by continual comparison with neighbouring depth maps. We show that this approach produces highly accurate depth maps and, since it is essentially a local method, is both extremely fast and simple to implement.
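As a flavour of the coarse-to-fine idea only (this is not the authors' multi-view method: the visual hull and neighbour-consistency constraints are omitted), here is a much-simplified two-view sketch that refines disparity through an image pyramid, assuming rectified 8-bit grayscale inputs and OpenCV.

```python
# Simplified coarse-to-fine disparity estimation for a rectified image pair.
# Illustrates the pyramid refinement idea only, not the paper's algorithm.
import numpy as np
import cv2

def pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr[::-1]  # coarsest level first

def coarse_to_fine_disparity(left, right, levels=4, max_disp=64):
    disp = None
    for lvl, (L, R) in enumerate(zip(pyramid(left, levels),
                                     pyramid(right, levels))):
        # halve the search range at each coarser level (multiple of 16)
        n_disp = max(16, (max_disp >> (levels - 1 - lvl)) // 16 * 16)
        matcher = cv2.StereoBM_create(numDisparities=n_disp, blockSize=9)
        d = matcher.compute(L, R).astype(np.float32) / 16.0
        if disp is not None:
            # upsample the previous estimate; keep it where matching failed
            prev = 2.0 * cv2.resize(disp, (d.shape[1], d.shape[0]))
            d = np.where(d > 0, d, prev)
        disp = d
    return disp
```

Here `left` and `right` would be 8-bit single-channel rectified images; each finer level only needs a reduced search range because the coarse estimate already localises the solution.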

Relevance:

20.00%

Publisher:

Abstract:

Under pressure from both the ever-increasing level of market competition and the global financial crisis, clients in the consumer electronics (CE) industry are keen to understand how to choose the most appropriate procurement method and hence improve their competitiveness. Four rounds of a Delphi questionnaire survey were conducted with 12 experts in order to identify the most appropriate procurement method in the Hong Kong CE industry. Five key selection criteria in the CE industry are highlighted: product quality, capability, price competition, flexibility and speed. The study also revealed that product quality was the most important criterion for “First type used commercially” and “Major functional improvements” projects. For “Minor functional improvements” projects, price competition was the most crucial factor to consider during procurement method selection. These research findings provide owners with useful insights for selecting procurement strategies.

Relevance:

20.00%

Publisher:

Abstract:

Research Interests: Are parents complying with the legislation? Is this the same for urban, regional and rural parents? Indigenous parents? What difficulties do parents experience in complying? Do parents understand why the legislation was put in place? Have there been negative consequences for other organisations or sectors of the community?

Relevance:

20.00%

Publisher:

Abstract:

Recent studies have started to explore context-awareness as a driver in the design of adaptable business processes. The emerging challenge of identifying and considering contextual drivers in the environment of a business process is well understood; however, typical business process modeling methods do not yet consider this additional contextual information in their process designs. In this chapter, we describe our research towards innovative and advanced process modeling methods that include mechanisms to incorporate relevant contextual drivers, and their impacts on business processes, in process design models. We report on our ongoing work with an Australian insurance provider and describe the design science approach we employed to develop these innovative and useful artifacts as part of a context-aware method framework. We discuss the utility of these artifacts in an application to the claims handling process at the case organization.

Relevance:

20.00%

Publisher:

Abstract:

This paper reviews the current state of the application of infrared methods, particularly mid-infrared (mid-IR) and near-infrared (NIR), for evaluating the structural and functional integrity of articular cartilage. While a considerable amount of research has been conducted on tissue characterization using mid-IR, it is almost certain that full-thickness cartilage assessment is not feasible with this method. In contrast, the considerably greater penetration capacity of NIR suggests that it is a suitable candidate for full-thickness cartilage evaluation. Nevertheless, significant research is still required to improve the specificity and clinical applicability of the method before it can be used to distinguish between functional and dysfunctional cartilage.

Relevance:

20.00%

Publisher:

Abstract:

Background: Integrating 3D virtual world technologies into educational subjects continues to draw the attention of educators and researchers alike. The focus of this study is the use of a virtual world, Second Life, in higher education teaching. In particular, it explores the potential of using a virtual world experience as a learning component situated within a curriculum delivered predominantly through face-to-face teaching methods. Purpose: This paper reports on a research study into the development of a virtual world learning experience designed for marketing students taking a Digital Promotions course. The experience was a field trip into Second Life to allow students to investigate how business branding practices were used for product promotion in this virtual world environment. The paper discusses the issues involved in developing and refining the virtual course component over four semesters. Methods: The study used a pedagogical action research approach, with iterative cycles of development, intervention and evaluation over four semesters. The data analysed were quantitative and qualitative student feedback collected after each field trip, as well as lecturer reflections on each cycle. Sample: Small-scale convenience samples of second- and third-year students studying in a Bachelor of Business degree, majoring in marketing and taking the Digital Promotions subject at a metropolitan university in Queensland, Australia, participated in the study. The samples included students who had and had not experienced the field trip. The numbers of students taking part in the field trip ranged from 22 to 48 across the four semesters. Findings and Implications: The findings from the four iterations of the action research plan helped identify key considerations for incorporating technologies into learning environments. Feedback and reflections from the students and lecturer suggested that an innovative learning opportunity had been developed. However, the pedagogical potential was limited, in part, by technological difficulties and by student perceptions of relevance.

Relevance:

20.00%

Publisher:

Abstract:

Design for Manufacturing (DFM) is a highly integrated methodology in product development, applied from the concept development phase onwards, with the aim of improving manufacturing productivity while maintaining product quality. While Design for Assembly (DFA) focuses on the elimination of parts or their combination with other components (Boothroyd, Dewhurst and Knight, 2002), which in most cases amounts to performing a function and a manufacturing operation in a simpler way, DFM follows a more holistic approach. In DFM, the considerable background work required in the conceptual phase is compensated for by a shortening of later development phases. Current DFM projects normally apply an iterative step-by-step approach and eventually transfer to the development team. Although DFM has been a well-established methodology for about 30 years, a 2009 Fraunhofer IAO study found that it was still one of the key challenges of the German manufacturing industry. A new, knowledge-based approach to DFM, which eliminates steps of DFM, was introduced in Paul and Al-Dirini (2009). The concept focuses on a concurrent engineering process between the manufacturing engineering and product development systems, whereas current product realization cycles depend on a rigorous back-and-forth examine-and-correct approach to ensure the compatibility of any proposed design with the DFM rules and guidelines adopted by the company. The key to achieving reductions is to incorporate DFM considerations into the early stages of the design process. A case study of DFM application in an automotive powertrain engineering environment is presented. It is argued that a DFM database needs to be interfaced to the CAD/CAM software, which will constrain designers to the DFM criteria; consequently, a notable reduction in development cycles can be achieved. The case study follows the hypothesis that current DFM methods do not improve product design in the manner claimed by the DFM method. The critical case was to identify DFA/DFM recommendations or program actions that appear repeatedly in different sources. Repetitive DFM measures are identified and analyzed, and it is shown how a modified DFM process can mitigate a non-fully integrated DFM approach.
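To illustrate the kind of rule checking a DFM database interfaced to CAD/CAM implies, here is a toy sketch; the rules, thresholds and feature names are invented for illustration and are not taken from the paper.

```python
# Toy illustration of a DFM rule database applied to part features.
# Rules and thresholds are invented examples, not the paper's database.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]   # returns True if the feature passes

rules = [
    Rule("min wall thickness >= 1.0 mm",
         lambda f: f.get("wall_mm", 1e9) >= 1.0),
    Rule("hole depth <= 4 x diameter",
         lambda f: f.get("hole_depth_mm", 0) <= 4 * f.get("hole_dia_mm", 1e9)),
]

def dfm_report(features: list[dict]) -> list[str]:
    """Return violations; a CAD plug-in could run this on every design change."""
    return [f"{feat['id']}: violates '{r.name}'"
            for feat in features for r in rules if not r.check(feat)]

print(dfm_report([{"id": "rib-1", "wall_mm": 0.6},
                  {"id": "hole-3", "hole_dia_mm": 2.0, "hole_depth_mm": 10.0}]))
```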

Relevance:

20.00%

Publisher:

Abstract:

Purpose: To compare the accuracies of different methods for calculating human lens power when lens thickness is not available. Methods: Lens power was calculated by four methods. Three methods were used with previously published biometry and refraction data of 184 emmetropic and myopic eyes of 184 subjects (age range [18, 63] years, spherical equivalent range [–12.38, +0.75] D). These three methods comprise the Bennett method, which uses lens thickness, and our modification of the Stenström method and the Bennett-Rabbetts method, neither of which requires knowledge of lens thickness. These methods include c constants, which represent distances from the lens surfaces to the principal planes. Lens powers calculated with these methods were compared with those calculated using phakometry data available for a subgroup of 66 emmetropic eyes (66 subjects). Results: Lens powers obtained from the Bennett method corresponded well with those obtained by phakometry for emmetropic eyes, although individual differences of up to 3.5D occurred. Lens powers obtained from the modified Stenström and Bennett-Rabbetts methods deviated significantly from those obtained with either the Bennett method or phakometry. Customizing the c constants improved this agreement, but applying these constants to the entire group gave mean lens power differences of 0.71 ± 0.56D compared with the Bennett method. By further optimizing the c constants, agreement with the Bennett method was within ±1D for 95% of the eyes. Conclusion: With appropriate constants, the modified Stenström and Bennett-Rabbetts methods provide a good approximation of the Bennett lens power in emmetropic and myopic eyes.
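For readers unfamiliar with these methods, a Bennett-style calculation reduces to paraxial vergence propagation between the cornea, the lens principal planes (located via the c constants) and the retina. Below is a hedged sketch; the default c constants are the Gullstrand-Emsley-derived values as commonly quoted, not values taken from this paper, so verify before reuse.

```python
# Hedged sketch of a Bennett-style crystalline lens power calculation by
# paraxial vergence propagation. Constants and conventions are illustrative,
# and the paper's customised c constants are not reproduced here.
N = 1.336  # assumed refractive index of aqueous and vitreous

def propagate(V, d_mm, n=N):
    """Propagate vergence V (D) over distance d_mm (mm) in a medium of index n."""
    return V / (1.0 - (d_mm / 1000.0 / n) * V)

def bennett_lens_power(S, K, ACD, T, AL, c1=0.571, c2=-0.378):
    """S: ocular refraction at the corneal vertex (D); K: corneal power (D);
    ACD: anterior chamber depth, T: lens thickness, AL: axial length (mm).
    c1*T and c2*T locate the first/second principal plane relative to the
    anterior/posterior lens surface."""
    V_in = propagate(S + K, ACD + c1 * T)            # vergence arriving at H
    V_out = 1000.0 * N / (AL - ACD - (1 + c2) * T)   # vergence needed at H'
    return V_out - V_in

# Illustrative (invented) biometry: refraction -2 D, K 43 D, ACD 3.2 mm,
# lens thickness 3.6 mm, axial length 24.5 mm -> roughly 21 D lens power.
print(round(bennett_lens_power(-2.0, 43.0, 3.2, 3.6, 24.5), 2))
```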

Relevance:

20.00%

Publisher:

Abstract:

During several natural disasters in recent years, Twitter has been found to play an important role as an additional medium for many-to-many crisis communication. Emergency services are successfully using Twitter to inform the public about current developments, and are increasingly also attempting to source first-hand situational information from Twitter feeds (such as relevant hashtags). However, the further study of the uses of Twitter during natural disasters relies on the development of flexible and reliable research infrastructure for tracking and analysing Twitter feeds at scale and in close to real time. This article outlines two approaches to the development of such infrastructure: one which builds on the readily available open source platform yourTwapperkeeper to provide a low-cost, simple and basic solution; and one which establishes a more powerful and flexible framework by drawing on highly scalable, state-of-the-art technology.
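Downstream of whichever capture layer is used (yourTwapperkeeper or a streaming-API client), such infrastructure reduces to filtering captured tweets and aggregating them over time. A minimal, infrastructure-agnostic sketch, assuming tweets arrive as dictionaries with a timestamp and text:

```python
# Illustrative downstream processing for tracked tweets: filter by hashtag
# and count tweets per minute. The capture layer is assumed to yield dicts;
# no particular Twitter API client is implied.
from collections import Counter
from datetime import datetime

def track(tweets, hashtag):
    """tweets: iterable of {'created_at': ISO-8601 str, 'text': str} dicts."""
    per_minute = Counter()
    tag = hashtag.lower()
    for t in tweets:
        if tag in t["text"].lower():
            ts = datetime.fromisoformat(t["created_at"])
            per_minute[ts.strftime("%Y-%m-%d %H:%M")] += 1
    return per_minute

sample = [{"created_at": "2013-01-28T03:04:05", "text": "Flooding #bigwet"},
          {"created_at": "2013-01-28T03:04:40", "text": "Stay safe #bigwet"}]
print(track(sample, "#bigwet"))
```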

Relevance:

20.00%

Publisher:

Abstract:

Background: Previous research identified that primary brain tumour patients have significant psychological morbidity and unmet needs, particularly the need for more information and support. However, the utility of strategies to improve information provision in this setting is unknown. This study involved the development and piloting of a brain tumour specific question prompt list (QPL). A QPL is a list of questions patients may find useful to ask their health professionals, and is designed to facilitate communication and information exchange. Methods: Thematic analysis of QPLs developed for other chronic diseases and of brain tumour specific patient resources informed a draft QPL. Subsequent refinement of the QPL involved an iterative process of interviews and review with 12 recently diagnosed patients and six caregivers. Final revisions were made following readability analyses and review by health professionals. Piloting of the QPL is underway using a non-randomised control group trial with patients undergoing treatment for a primary brain tumour in Brisbane, Queensland. Following baseline interviews, consenting participants are provided with the QPL or standard information materials. Follow-up interviews four to six weeks later allow assessment of the acceptability of the QPL, how it is used by patients, its impact on information needs, and the feasibility of recruitment, implementation and outcome assessment. Results: The final QPL was determined to be readable at the sixth-grade level. It contains seven sections: diagnosis, prognosis, symptoms and changes, the health professional team, support, treatment and management, and post-treatment concerns. At this time, fourteen participants have been recruited for the pilot, and data collection has been completed for eleven. Data collection and preliminary analysis are expected to be completed by, and presented at, the conference. Conclusions: If acceptable to participants, the QPL may encourage patients, doctors and nurses to communicate more effectively, reducing unmet information needs and ultimately improving psychological wellbeing.

Relevance:

20.00%

Publisher:

Abstract:

The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data are usually available from computed tomography (CT) scans of human cadavers, which generally represent the over-60 age group. Thus, despite the fact that half of the seriously injured population comes from the 30-and-under age group, virtually no data exist from these younger age groups to inform the design of implants that optimally fit such patients. Hence, relevant bone data from these age groups are required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required.

Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are typically not trained to perform; therefore, an accurate but relatively simple segmentation method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI; however, a quantification of the signal-to-noise ratio (SNR) gain at the bone-soft tissue interface should be performed, and this has not been reported in the literature. Because MRI scanning of long bones involves very long scanning times, the acquired images are prone to motion artefacts caused by random movements of the subject's limbs. One artefact observed is the step artefact, believed to arise from random movements of the volunteer during a scan; this needs to be corrected before the models can be used for implant design.

The first aim of this study was to investigate two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmenting MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for the reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve on the poor articular contrast and long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques.

The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated with the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated using these methods were compared with a reference standard generated from mechanical contact scans of the denuded bone. The second aim was addressed using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method.
A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare 1.5T images with 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer, and the images obtained using identical protocols were compared by means of the SNR and the contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and then corrected using an alignment method based on the iterative closest point (ICP) algorithm.

The present study demonstrated that the multilevel threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm, and the difference in accuracy between the two methods was statistically significant. In comparison, the Canny edge detection method produced an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm, compared with 0.18 mm for CT-based models; this difference was not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of the femora and tibiae, potentially reducing the inaccuracies caused by poor contrast in the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, yielding errors of 0.32 ± 0.02 mm when compared with the reference standard.

The study concludes that magnetic resonance imaging, together with simple multilevel threshold segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is therefore a potential alternative to the current gold standard, CT imaging.
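As an illustration of the multilevel-threshold idea described above (not the thesis's exact implementation), the following sketch applies separate thresholds to the proximal, diaphyseal and distal regions of a bone volume; the crude axial-thirds split and the threshold values are invented for illustration.

```python
# Minimal sketch of multilevel thresholding: segment a long-bone CT volume
# with separate thresholds per region. Region split and thresholds invented.
import numpy as np

def multilevel_threshold(volume, t_proximal, t_shaft, t_distal):
    """volume: 3D array with axis 0 along the bone. Returns a binary mask."""
    n = volume.shape[0]
    mask = np.zeros_like(volume, dtype=bool)
    regions = [(0, n // 3), (n // 3, 2 * n // 3), (2 * n // 3, n)]
    for (lo, hi), t in zip(regions, [t_proximal, t_shaft, t_distal]):
        mask[lo:hi] = volume[lo:hi] >= t
    return mask

# A surface mesh could then be extracted from the mask, e.g. with
# skimage.measure.marching_cubes, and compared against a reference surface.
```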

Relevance:

20.00%

Publisher:

Abstract:

This thesis develops a detailed conceptual design method and a system software architecture, defined with a parametric and generative evolutionary design system, to support an integrated interdisciplinary building design approach. The research recognises the need to shift design effort toward the earliest phases of the design process to support crucial design decisions that have a substantial cost implication for the overall project budget. The overall motivation of the research is to improve the quality of designs produced at the author's employer, the General Directorate of Major Works (GDMW) of the Saudi Arabian Armed Forces. GDMW produces many buildings that have standard requirements across a wide range of environmental and social circumstances, so a rapid means of customising designs for local circumstances would have significant benefits. The research considers the use of evolutionary genetic algorithms in the design process and their ability to generate and assess a wider range of potential design solutions than a human could manage. This wider-ranging assessment, during the early stages of the design process, means that the generated solutions will be more appropriate for the defined design problem. The research proposes a design method and system that promote a collaborative relationship between human creativity and computer capability. A tectonic design approach is adopted as a process-oriented design that values the process of design as much as the product. The aim is to connect the evolutionary system to performance assessment applications, which are used as prioritised fitness functions; this produces design solutions that respond to their environmental and functional requirements. This integrated, interdisciplinary approach to design produces solutions through a design process that considers and balances the requirements of all aspects of the design.

Since the thesis covers a wide area of research material, a 'methodological pluralism' approach was used, incorporating both prescriptive and descriptive research methods. Multiple models of research were combined, and the overall research was undertaken in three main stages: conceptualisation, development and evaluation. The first two stages lay the foundations for the specification of the proposed system, in which key aspects of the system that had not previously been proven in the literature were implemented to test the system's feasibility. By combining existing knowledge in the area with the newly verified key aspects of the proposed system, this research can form the basis for a future software development project. The evaluation stage, which includes building a prototype system to test and evaluate system performance against the criteria defined in the earlier stages, is not within the scope of this thesis.

The research results in a conceptual design method and a proposed system software architecture, called the 'Hierarchical Evolutionary Algorithmic Design (HEAD) System'. The HEAD system has been shown to be feasible through an initial illustrative paper-based simulation. It consists of two main components, the 'Design Schema' and the 'Synthesis Algorithms'. The HEAD system reflects the major research contribution in the way it is conceptualised, while secondary contributions are achieved within the system components.
The design schema provides constraints on the generation of designs, enabling the designer to create a wide range of potential designs that can then be analysed for desirable characteristics. It supports the digital representation of the designer's creativity within a dynamic design framework that can be encoded and then executed through evolutionary genetic algorithms. The design schema incorporates 2D and 3D geometry and graph theory for space layout planning and building formation, using the Lowest Common Design Denominator (LCDD) of a parameterised 2D module and a 3D structural module. This provides a bridge between the standard adjacency requirements and the evolutionary system. The use of graphs as input to the evolutionary algorithm supports the introduction of constraints in a way that standard evolutionary techniques do not, and the process of design synthesis is guided by a higher-level description of the building that supports geometrical constraints.

The Synthesis Algorithms component analyses designs at four levels: 'Room', 'Layout', 'Building' and 'Optimisation'. At each level, multiple fitness functions are embedded into the genetic algorithm to target the specific requirements of the relevant decomposed part of the design problem. Decomposing the design problem so that the design requirements of each level are dealt with separately, and then reassembling them in a bottom-up approach, reduces the generation of non-viable solutions by constraining the options available at the next higher level. The iterative approach of exploring the range of design solutions through modification of the design schema, as the understanding of the design problem improves, assists in identifying conflicts in the design requirements. Additionally, the hierarchical set-up allows multiple fitness functions to be embedded in the genetic algorithm, each relevant to a specific level, supporting an integrated multi-level, multi-disciplinary approach. The HEAD system thus promotes a collaborative relationship between human creativity and computer capability: the design schema, as the input to the procedural algorithms, encodes certain aspects of the designer's subjective creativity, while the hierarchical nature of the system assists the design decision-making process by focusing on solutions to the relevant sub-problems at the appropriate levels of detail.
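A toy sketch of the general idea of embedding multiple, weighted fitness functions in a genetic algorithm follows; this illustrates the technique only, not the HEAD system, and the objectives and weights are invented.

```python
# Toy genetic algorithm combining multiple fitness functions as prioritised
# weights, echoing (not reproducing) level-specific fitness in HEAD.
import random

def evolve(objectives, weights, genome_len=10, pop=40, gens=60):
    def fitness(g):
        return sum(w * f(g) for f, w in zip(objectives, weights))
    population = [[random.random() for _ in range(genome_len)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(genome_len)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(genome_len)      # point mutation
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# e.g. trade off two invented layout objectives with priorities 0.7 / 0.3
best = evolve([lambda g: sum(g) / len(g), lambda g: 1 - abs(g[0] - g[-1])],
              weights=[0.7, 0.3])
```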

Relevance:

20.00%

Publisher:

Abstract:

In this paper we extend the ideas of Brugnano, Iavernaro and Trigiante in their development of HBVM($s,r$) methods to construct symplectic Runge-Kutta methods for all values of $s$ and $r$ with $s\geq r$. However, these methods do not see the dramatic performance improvement that HBVMs can attain. Nevertheless, in the case of additive stochastic Hamiltonian problems an extension of these ideas, which requires the simulation of an independent Wiener process at each stage of a Runge-Kutta method, leads to methods that have very favourable properties. These ideas are illustrated by some simple numerical tests for the modified midpoint rule.
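To make the setting concrete, here is a hedged sketch of the implicit (modified) midpoint rule applied to an additive-noise stochastic Hamiltonian problem, using a fixed-point solve at each step. It illustrates the flavour of such numerical tests on a harmonic oscillator, not the authors' HBVM-based construction.

```python
# Hedged sketch: implicit midpoint rule for an additive-noise stochastic
# Hamiltonian system, dq = p dt, dp = -q dt + sigma dW (harmonic oscillator).
# Illustrative only; not the paper's Runge-Kutta/HBVM scheme.
import numpy as np

def stochastic_midpoint(y0, h, steps, sigma, rng):
    f = lambda y: np.array([y[1], -y[0]])     # Hamiltonian vector field
    y = np.array(y0, dtype=float)
    out = [y.copy()]
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(h))
        noise = np.array([0.0, sigma * dW])   # additive noise on momentum
        y_new = y.copy()
        for _ in range(50):                   # fixed-point solve, small h
            y_new = y + h * f(0.5 * (y + y_new)) + noise
        y = y_new
        out.append(y.copy())
    return np.array(out)

traj = stochastic_midpoint([1.0, 0.0], h=0.01, steps=1000, sigma=0.1,
                           rng=np.random.default_rng(0))
# With sigma = 0 the midpoint rule is symplectic and H = (q^2 + p^2)/2 is
# conserved up to iteration error; with additive noise E[H] grows linearly.
print(traj[-1])
```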