Abstract:
Virtual prototyping is emerging as a technology to replace physical prototypes for product evaluation, which are costly and time-consuming to manufacture. Virtualization technology allows engineers and ergonomists to perform virtual builds and different ergonomic analyses on a product. Digital Human Modelling (DHM) software packages such as Siemens Jack often integrate with CAD systems to provide a virtual environment that allows investigation of operator and product compatibility. Although the integration between DHM and CAD systems allows for the ergonomic analysis of anthropometric design, human musculoskeletal multi-body modelling software packages such as the AnyBody Modelling System (AMS) are required to support physiologic design: they provide muscular force analysis, estimate human musculoskeletal strain and help address human comfort assessment. However, the independent characteristics of the modelling systems Jack and AMS constrain engineers and ergonomists in conducting a complete ergonomic analysis. AMS is a stand-alone programming system without the capability to integrate into CAD environments; Jack provides CAD-integrated human-in-the-loop capability, but without considering musculoskeletal activity. Consequently, engineers and ergonomists must perform many redundant tasks during product and process design. Moreover, the existing biomechanical model in AMS uses a simplified estimation of body proportions, based on a scaling approach derived from segment mass ratios, which is insufficient to represent user populations anthropometrically correctly in AMS. In addition, sub-models are derived from different sources of morphologic data and are therefore anthropometrically inconsistent. Therefore, an interface between the biomechanical AMS and the virtual human model Jack was developed to integrate a musculoskeletal simulation with Jack posture modeling.
This interface provides direct data exchange between the two human models, based on a consistent data structure and a common body model. The study assesses the kinematic and biomechanical model characteristics of Jack and AMS, and defines an appropriate biomechanical model. The information content for interfacing the two systems is defined and a protocol is identified. The interface program is developed and implemented in Tcl and Jack-script (Python), and interacts with the AMS console application to operate AMS procedures.
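As a rough illustration of the kind of data exchange such an interface performs, the sketch below serializes Jack joint angles as parameter assignments for an AnyScript model driven through the AMS console. The function name, the `Main.HumanModel.Posture.*` paths and the degree-to-radian convention are all illustrative assumptions, not the actual interface developed in the study.

```python
import math

def jack_posture_to_anyscript(joint_angles):
    """Convert a dict of Jack joint angles (degrees) into AnyScript-style
    parameter assignments (radians) that an AMS console run could load.
    All identifiers here are hypothetical placeholders."""
    lines = []
    for joint, deg in sorted(joint_angles.items()):
        rad = math.radians(deg)
        lines.append(f"Main.HumanModel.Posture.{joint} = {rad:.6f};")
    return "\n".join(lines)

# Hypothetical posture exported from Jack
posture = {"ElbowFlexion": 90.0, "ShoulderAbduction": 30.0}
script = jack_posture_to_anyscript(posture)
print(script)
```

In practice the interface described above would feed such a parameter set to the AMS console application and read back the computed muscle forces, closing the loop with Jack's posture model.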
Abstract:
Purpose: To determine likely errors in estimating retinal shape using partial coherence interferometric instruments when no allowance is made for optical distortion. Method: Errors were estimated using Gullstrand’s No. 1 schematic eye and variants which included a 10 D axial myopic eye, an emmetropic eye with a gradient-index lens, and a 10.9 D accommodating eye with a gradient-index lens. Performance was simulated for two commercial instruments, the IOLMaster (Carl Zeiss Meditec) and the Lenstar LS 900 (Haag-Streit AG). The incident beam was directed towards either the centre of curvature of the anterior cornea (corneal-direction method) or the centre of the entrance pupil (pupil-direction method). Simple trigonometry was used with the corneal intercept and the incident beam angle to estimate retinal contour. Conics were fitted to the estimated contours. Results: The pupil-direction method gave estimates of retinal contour that were much too flat. The corneal-direction method gave similar results for the IOLMaster and Lenstar approaches. The steepness of the retinal contour was slightly overestimated, the exact effects varying with the refractive error, gradient index and accommodation. Conclusion: These theoretical results suggest that, for field angles ≤30°, partial coherence interferometric instruments are of use in estimating retinal shape by the corneal-direction method with the assumptions of a regular retinal shape and no optical distortion. It may be possible to improve on these estimates out to larger field angles by using optical modeling to correct for distortion.
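The "simple trigonometry" step can be sketched minimally: project the measured optical path length straight along the incident beam angle from the corneal apex, ignoring refraction entirely. This is exactly the simplification whose error the study quantifies; the numbers below are illustrative, not taken from the schematic-eye models.

```python
import math

def retinal_point(beam_angle_deg, path_length_mm):
    """Naive retinal-contour estimate: treat the measured path length as a
    straight ray at the incident beam angle, with no optical distortion."""
    theta = math.radians(beam_angle_deg)
    x = path_length_mm * math.sin(theta)   # transverse position (mm)
    z = path_length_mm * math.cos(theta)   # axial position (mm)
    return x, z

# Illustrative example: a 24 mm measured path at a 10 degree field angle
x, z = retinal_point(10.0, 24.0)
print(round(x, 2), round(z, 2))
```

Sampling such points over a range of field angles and fitting a conic to them mirrors, in skeleton form, the contour-fitting procedure the abstract describes.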
Abstract:
Road traffic noise affects the quality of life in areas adjoining a road. The effects of traffic noise on people are wide-ranging and may include sleep disturbance and a negative impact on work efficiency. To address the problem of traffic noise, it is necessary to estimate the noise level. For this, a number of noise estimation models have been developed that can estimate noise at receptor points based on simple configurations of buildings. In a real-world situation, however, multiple buildings form a built-up area, and it is almost impossible to consider all the diffractions and reflections in sound propagation from the source to the receptor point. An engineering solution to this real-world problem is needed to estimate noise levels in built-up areas.
Abstract:
Traffic management, which controls traffic flow and physical distribution, will be important for further noise reduction in the future. To implement traffic-management measures effectively, a model that predicts traffic flow across a citywide road network is needed. For this purpose, the existing model AVENUE was used as a macro-traffic-flow prediction model. The traffic flow model was integrated with a sound power model for road vehicles to establish a new road traffic noise prediction model, from which a noise map of an entire city can be produced. In this study, the change in traffic flow on the road network after the opening of new roads was first estimated, and the change in road traffic noise caused by the new roads was predicted. The results show that the prediction model can estimate how traffic management changes the noise map. In addition, the macro-traffic-flow model was combined with our conventional micro-traffic-flow model, expanding the coverage of the noise prediction model.
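The coupling of a flow model to a per-vehicle sound power model can be illustrated generically: given an hourly flow and speed, treat the stream as an incoherent line of point sources and energy-sum their contributions at a receiver. The reference sound power, the spherical-spreading constant and the source-line treatment are assumptions for illustration, not the AVENUE-based model of the study.

```python
import math

def leq_from_flow(flow_veh_per_h, speed_km_h, distance_m, lw_ref_db=100.0):
    """Rough hourly equivalent level (dB) from a steady traffic stream.
    lw_ref_db is an assumed A-weighted sound power per vehicle; a real
    model derives it per vehicle class and speed."""
    # Mean spacing between vehicles along the road (m): density = Q / v
    spacing = speed_km_h * 1000.0 / max(flow_veh_per_h, 1)
    # Energy-sum point sources at +/- 50 spacings along the road,
    # using free-field spherical spreading: Lp = Lw - 20*log10(r) - 11
    total = 0.0
    for k in range(-50, 51):
        r = math.hypot(distance_m, k * spacing)
        total += 10 ** ((lw_ref_db - 20 * math.log10(r) - 11) / 10)
    return 10 * math.log10(total)
```

A citywide noise map amounts to evaluating such a receiver-level calculation, with the predicted link flows as input, at every grid point of the map.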
Abstract:
Background: Depression is a major public health problem worldwide and is currently ranked second to heart disease for years lost due to disability. For many decades, international research has found that depressive symptoms occur more frequently among individuals of low socioeconomic status (SES) than among their more-advantaged peers. However, the reasons why those in low socioeconomic groups suffer more depressive symptoms are not well understood. Studies investigating the prevalence of depression and its association with SES emanate largely from developed countries, with little research in developing countries. In particular, there is a serious dearth of research on depression, and no investigation of its association with SES, in Vietnam. The aims of the research presented in this Thesis are to estimate the prevalence of depressive symptoms among Vietnamese adults, examine the nature and extent of the association between SES and depression, and elucidate causal pathways linking SES to depressive symptoms. Methods: The research was conducted between September 2008 and November 2009 in Hue city in central Vietnam and used a combination of qualitative (in-depth interviews) and quantitative (survey) data collection methods. The qualitative study contributed to the development of the theoretical model and to the refinement of culturally appropriate data collection instruments for the quantitative study. The main survey was a cross-sectional population-based survey with randomised cluster sampling. A sample of 1976 respondents aged 25-55 years from ten randomly selected residential zones (quarters) of Hue city completed the questionnaire (response rate 95.5%). Measures: SES was classified using three indicators: education, occupation and income. The Center for Epidemiologic Studies-Depression (CES-D) scale was used to measure depressive symptoms (range 0-51, mean = 11.0, SD = 8.5).
Three cut-off points for the CES-D scores were applied: ‘at risk for clinical depression’ (16 or above), ‘depressive symptoms’ (above 21) and ‘depression’ (above 25). Six psychosocial indicators (lifetime trauma, chronic stress, recent life events, social support, self-esteem and mastery) were hypothesized to mediate the association between SES and depressive symptoms. Analyses: The prevalence of depressive symptoms was analysed using bivariate analyses. The multivariable analytic phase comprised ordinary least squares regression, in accordance with Baron and Kenny’s three-step framework for mediation modeling. All analyses were adjusted for a range of confounders, including age, marital status, smoking, drinking and chronic diseases, and the mediation models were stratified by gender. Results: Among these Vietnamese adults, 24.3% were at or above the cut-off for being ‘at risk for clinical depression’, 11.9% were classified as having depressive symptoms and 6.8% were categorised as having depression. SES was inversely related to depressive symptoms: the least educated, those with low occupational status and those with the lowest incomes reported more depressive symptoms. Socioeconomically disadvantaged individuals were more likely to report experiencing stress (lifetime trauma, chronic stress or recent life events), perceived less social support and reported fewer personal resources (self-esteem and mastery) than their more-advantaged counterparts. These psychosocial resources were all significantly associated with depressive symptoms independent of SES. Each psychosocial factor showed a significant mediating effect on the association between SES and depressive symptoms. This was found for all measures of SES, and for both males and females. In particular, personal resources (mastery, self-esteem) and chronic stress accounted for a substantial proportion of the variation in depressive symptoms between socioeconomic groups.
Social support and recent life events contributed modestly to socioeconomic differences in depressive symptoms, whereas lifetime trauma contributed the least to these inequalities. Conclusion: This is the first known study in Vietnam or any developing country to systematically examine the extent to which psychosocial factors mediate the relationship between SES and depression. The study contributes new evidence regarding the burden of depression in Vietnam. The findings have practical relevance for advocacy, for mental health promotion and for health-care services, and point to the need for programs that focus on building a sense of personal mastery and self-esteem. More broadly, the work presented in this Thesis contributes to the international scientific literature on the social determinants of depression.
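Baron and Kenny's three-step mediation framework, as used in the analyses above, can be sketched on synthetic data: (1) regress the outcome on SES for the total effect, (2) regress the mediator on SES, (3) regress the outcome on SES plus the mediator; attenuation of the SES coefficient in step 3 indicates mediation. The data, coefficients and variable names below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
ses = rng.normal(size=n)                   # socioeconomic status (standardised)
mastery = 0.5 * ses + rng.normal(size=n)   # hypothetical mediator
cesd = -0.6 * mastery - 0.1 * ses + rng.normal(size=n)  # depressive symptoms

def slope(y, *xs):
    """OLS coefficient on the first predictor, with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

c = slope(cesd, ses)             # step 1: total effect of SES on outcome
a = slope(mastery, ses)          # step 2: SES -> mediator path
cp = slope(cesd, ses, mastery)   # step 3: direct effect, mediator controlled

print(abs(cp) < abs(c))
```

In the real analyses the confounder adjustments (age, marital status, smoking, drinking, chronic disease) would enter as additional columns of the design matrix, and the models would be fitted separately by gender.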
Abstract:
Objective: To (1) search the English-language literature for original research addressing the effect of cryotherapy on joint position sense (JPS) and (2) make recommendations regarding how soon healthy athletes can safely return to participation after cryotherapy. Data Sources: We performed an exhaustive search for original research using the AMED, CINAHL, MEDLINE, and SportDiscus databases from 1973 to 2009 to gather information on cryotherapy and JPS. Key words used were cryotherapy and proprioception, cryotherapy and joint position sense, cryotherapy, and proprioception. Study Selection: The inclusion criteria were (1) the literature was written in English, (2) participants were human, (3) an outcome measure included JPS, (4) participants were healthy, and (5) participants were tested immediately after a cryotherapy application to a joint. Data Extraction: The means and SDs of the JPS outcome measures were extracted and used to estimate the effect size (Cohen d) and associated 95% confidence intervals for comparisons of JPS before and after a cryotherapy treatment. The numbers, ages, and sexes of participants in all 7 selected studies were also extracted. Data Synthesis: The JPS was assessed in 3 joints: ankle (n = 2), knee (n = 3), and shoulder (n = 2). The average effect size for the 7 included studies was modest, with effect sizes ranging from −0.08 to 1.17, with a positive number representing an increase in JPS error. The average methodologic score of the included studies was 5.4/10 (range, 5–6) on the Physiotherapy Evidence Database scale. Conclusions: Limited and equivocal evidence is available to address the effect of cryotherapy on proprioception in the form of JPS. Until further evidence is provided, clinicians should be cautious when returning individuals to tasks requiring components of proprioceptive input immediately after a cryotherapy treatment.
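The effect-size computation described in Data Extraction can be sketched with the standard pooled-SD Cohen d and its usual normal-approximation confidence interval. The pre/post means, SDs and sample sizes below are made up for illustration; they are not values from the seven included studies.

```python
import math

def cohens_d(mean_pre, sd_pre, n_pre, mean_post, sd_post, n_post):
    """Cohen d for pre- vs post-cryotherapy JPS error, with an
    approximate 95% CI (normal approximation for the SE of d).
    Positive d = more JPS error after treatment."""
    pooled = math.sqrt(((n_pre - 1) * sd_pre**2 + (n_post - 1) * sd_post**2)
                       / (n_pre + n_post - 2))
    d = (mean_post - mean_pre) / pooled
    se = math.sqrt((n_pre + n_post) / (n_pre * n_post)
                   + d**2 / (2 * (n_pre + n_post)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Illustrative values: JPS error of 2.0 deg before, 2.5 deg after, 15 per group
d, ci = cohens_d(2.0, 1.0, 15, 2.5, 1.0, 15)
print(round(d, 2))
```

A CI straddling zero, as in this toy example, is precisely the "limited and equivocal" pattern the review reports.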
Abstract:
Individual-based models describing the migration and proliferation of a population of cells frequently restrict the cells to a predefined lattice. An implicit assumption of this type of lattice-based model is that a proliferative population will always eventually fill the lattice. Here we develop a new lattice-free individual-based model that incorporates cell-to-cell crowding effects. We also derive approximate mean-field descriptions for the lattice-free model in two special cases motivated by commonly used experimental setups. Lattice-free simulation results are compared to these mean-field descriptions and to a corresponding lattice-based model. Data from a proliferation experiment are used to estimate the parameters for the new model, including the cell proliferation rate, showing that the model fits the data well. An important aspect of the lattice-free model is that the confluent cell density is not predefined, as with lattice-based models, but is an emergent model property. As a consequence of the more realistic, irregular configuration of cells in the lattice-free model, the population growth rate is much slower at high cell densities and the population cannot reach the same confluent density as an equivalent lattice-based model.
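A minimal sketch of a lattice-free proliferation rule with crowding: a dividing cell places its daughter one cell diameter away at a random angle, and the attempt is rejected if the daughter would overlap any existing cell. All parameter values are illustrative, and this skeleton omits the migration and mean-field components of the actual model.

```python
import math
import random

def simulate(n0=5, steps=100, radius=0.5, domain=10.0, p_prolif=0.2, seed=1):
    """Lattice-free proliferation with crowding: daughters that would
    overlap an existing cell (centres closer than one diameter) are
    rejected, so the confluent density emerges rather than being set."""
    random.seed(seed)
    cells = [(random.uniform(0, domain), random.uniform(0, domain))
             for _ in range(n0)]
    for _ in range(steps):
        for i in range(len(cells)):            # cells present at step start
            if random.random() < p_prolif:
                x, y = cells[i]
                a = random.uniform(0, 2 * math.pi)
                nx = x + 2 * radius * math.cos(a)
                ny = y + 2 * radius * math.sin(a)
                if (0 <= nx <= domain and 0 <= ny <= domain and
                        all(math.hypot(nx - cx, ny - cy) >= 2 * radius
                            for cx, cy in cells)):
                    cells.append((nx, ny))
    return len(cells)

final_count = simulate()
```

Because placement attempts fail more often as the domain crowds, growth slows at high density and the final count saturates below the regular-packing limit a lattice would impose, mirroring the behaviour the abstract describes.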
Abstract:
Background: In-depth investigations of crash risks inform prevention and safety promotion programmes. Traditionally, they are conducted using exposure-controlled or case-control methodology. However, these studies need either observational data for control cases or exogenous exposure data, such as vehicle-kilometres travelled, entry flow or the product of conflicting flows for a particular traffic location or site. These data are not readily available and often require extensive data collection effort on a system-wide basis. Aim: The objective of this research is to propose an alternative methodology for investigating the crash risks of a road user group in different circumstances using readily available traffic police crash data. Methods: This study employs a combination of a log-linear model and the quasi-induced exposure technique to estimate the crash risks of a road user group. While the log-linear model reveals the significant interactions, and thus the prevalence of crashes of a road user group under various sets of traffic, environmental and roadway factors, the quasi-induced exposure technique estimates the relative exposure of that road user group for the same set of explanatory variables. The combination of these two techniques therefore provides relative measures of crash risk under various influences of roadway, environmental and traffic conditions. The proposed methodology is illustrated using five years of Brisbane motorcycle crash data. Results: Interpretation of the results for different combinations of interactive factors shows that the poor conspicuity of motorcycles is a predominant cause of motorcycle crashes. The inability of other drivers to correctly judge the speed and distance of an oncoming motorcyclist is also evident in right-of-way-violation motorcycle crashes at intersections. Discussion and Conclusions: The combination of a log-linear model and the induced exposure technique is a promising methodology and can be applied to better estimate the crash risks of other road users.
This study also highlights the importance of considering interaction effects to better understand hazardous situations. A further study comparing the proposed methodology with the case-control method would be useful.
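The quasi-induced exposure idea can be sketched in a few lines: in two-unit crashes, the distribution of not-at-fault (passive) parties proxies each group's exposure, so a group's relative crash risk is its at-fault share divided by its not-at-fault share. The counts below are invented for illustration and are not the Brisbane data; the full methodology additionally crosses these ratios with the log-linear model's interaction factors.

```python
def relative_crash_risk(at_fault, not_at_fault):
    """Quasi-induced exposure sketch: relative risk for each road user
    group = its share of at-fault involvements / its share of
    not-at-fault involvements (the exposure proxy)."""
    total_af = sum(at_fault.values())
    total_naf = sum(not_at_fault.values())
    return {g: (at_fault[g] / total_af) / (not_at_fault[g] / total_naf)
            for g in at_fault}

# Hypothetical two-unit crash counts by road user group
risks = relative_crash_risk(
    {"motorcycle": 120, "car": 880},
    {"motorcycle": 60, "car": 940},
)
print(round(risks["motorcycle"], 2))
```

A ratio above 1 marks over-involvement relative to exposure; stratifying the counts by roadway, environmental and traffic conditions yields the condition-specific relative risks the abstract describes.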
Abstract:
For over half a century, it has been known that the rate of morphological evolution appears to vary with the time frame of measurement. Rates of microevolutionary change, measured between successive generations, were found to be far higher than rates of macroevolutionary change inferred from the fossil record. More recently, it has been suggested that rates of molecular evolution are also time dependent, with the estimated rate depending on the timescale of measurement. This followed surprising observations that estimates of mutation rates, obtained in studies of pedigrees and laboratory mutation-accumulation lines, exceeded long-term substitution rates by an order of magnitude or more. Although a range of studies have provided evidence for such a pattern, the hypothesis remains relatively contentious. Furthermore, there is ongoing discussion about the factors that can cause molecular rate estimates to be dependent on time. Here we present an overview of our current understanding of time-dependent rates. We provide a summary of the evidence for time-dependent rates in animals, bacteria and viruses. We review the various biological and methodological factors that can cause rates to be time dependent, including the effects of natural selection, calibration errors, model misspecification and other artefacts. We also describe the challenges in calibrating estimates of molecular rates, particularly on the intermediate timescales that are critical for an accurate characterization of time-dependent rates. This has important consequences for the use of molecular-clock methods to estimate timescales of recent evolutionary events.
Abstract:
In phylogenetics, the unrooted model of phylogeny and the strict molecular clock model are two extremes of a continuum. Despite their dominance in phylogenetic inference, it is evident that both are biologically unrealistic and that the real evolutionary process lies between these two extremes. Fortunately, intermediate models employing relaxed molecular clocks have been described. These models open the gate to a new field of “relaxed phylogenetics.” Here we introduce a new approach to performing relaxed phylogenetic analysis. We describe how it can be used to estimate phylogenies and divergence times in the face of uncertainty in evolutionary rates and calibration times. Our approach also provides a means for measuring the clocklikeness of datasets and comparing this measure between different genes and phylogenies. We find no significant rate autocorrelation among branches in three large datasets, suggesting that autocorrelated models are not necessarily suitable for these data. In addition, we place these datasets on the continuum of clocklikeness between a strict molecular clock and the alternative unrooted extreme. Finally, we present analyses of 102 bacterial, 106 yeast, 61 plant, 99 metazoan, and 500 primate alignments. From these we conclude that our method is phylogenetically more accurate and precise than the traditional unrooted model while adding the ability to infer a timescale to evolution.
Abstract:
Despite recent methodological advances in inferring the time-scale of biological evolution from molecular data, the fundamental question of whether our substitution models are sufficiently well specified to accurately estimate branch-lengths has received little attention. I examine this implicit assumption of all molecular dating methods, on a vertebrate mitochondrial protein-coding dataset. Comparison with analyses in which the data are RY-coded (AG → R; CT → Y) suggests that even rates-across-sites maximum likelihood greatly under-compensates for multiple substitutions among the standard (ACGT) NT-coded data, which has been subject to greater phylogenetic signal erosion. Accordingly, the fossil record indicates that branch-lengths inferred from the NT-coded data translate into divergence time overestimates when calibrated from deeper in the tree. Intriguingly, RY-coding led to the opposite result. The underlying NT and RY substitution model misspecifications likely relate respectively to “hidden” rate heterogeneity and changes in substitution processes across the tree, for which I provide simulated examples. Given the magnitude of the inferred molecular dating errors, branch-length estimation biases may partly explain current conflicts with some palaeontological dating estimates.
Abstract:
Cockatoos comprise the distinctive family Cacatuidae, a major lineage of the parrot order (Psittaciformes) distributed throughout the Australasian region. However, the evolutionary history of cockatoos is not well understood. We investigated the phylogeny of cockatoos based on three mitochondrial and three nuclear DNA genes obtained from 16 of the 21 species of Cacatuidae. In addition, five novel mitochondrial genomes were used to estimate times of divergence; our estimates indicate Cacatuidae diverged from Psittacidae approximately 40.7 million years ago (95% CI 51.6–30.3 Ma) during the Eocene. Our data show Cacatuidae began to diversify approximately 27.9 Ma (95% CI 38.1–18.3 Ma) during the Oligocene. The early to middle Miocene (20–10 Ma) was a significant period in the evolution of modern Australian environments and vegetation, in which a transformation from mainly mesic to xeric habitats (e.g., fire-adapted sclerophyll vegetation and grasslands) occurred. We hypothesize that this environmental transformation was a driving force behind the diversification of cockatoos. A detailed multi-locus molecular phylogeny enabled us to resolve the phylogenetic placements of the Palm Cockatoo (Probosciger aterrimus), Galah (Eolophus roseicapillus), Gang-gang Cockatoo (Callocephalon fimbriatum) and Cockatiel (Nymphicus hollandicus), which have historically been difficult to place within Cacatuidae. When the molecular evidence is analysed in concert with morphology, it is clear that many of the cockatoo species’ diagnostic phenotypic traits, such as plumage colour, body size, wing shape and bill morphology, have evolved in parallel or convergently across lineages.
Abstract:
The opening phrase of the title is from Charles Darwin’s notebooks (Schweber 1977). It is a double reminder: firstly, that mainstream evolutionary theory is not just about describing nature but is particularly looking for mechanisms or ‘causes’; and secondly, that there will usually be several causes affecting any particular outcome. The second part of the title reflects our concern at the almost universal rejection of the idea that biological mechanisms are sufficient for macroevolutionary changes, thus rejecting a cornerstone of Darwinian evolutionary theory. Our primary aim here is to consider ways of making it easier to develop and to test hypotheses about evolution. Formalizing hypotheses can help generate tests. In an absolute sense, some of the discussion by scientists about evolution is little better than the lack of reasoning used by those advocating intelligent design. Our discussion here is in a Popperian framework, where science is defined by that area of study where it is possible, in principle, to find evidence against hypotheses – they are in principle falsifiable. However, with time, the boundaries of science keep expanding. In the past, some aspects of evolution were outside the current boundaries of falsifiable science, but new techniques and ideas are increasingly expanding those boundaries and it is appropriate to re-examine some topics. It often appears that over the last few decades there has been an increasingly strong assumption to look first (and only) for a physical cause. This decision is virtually never formally discussed; an assumption is simply made that some physical factor ‘drives’ evolution. It is necessary to examine our assumptions much more carefully: what is meant by physical factors ‘driving’ evolution, or by an ‘explosive radiation’? Our discussion focuses on two of the six mass extinctions, the fifth being events in the Late Cretaceous, and the sixth starting at least 50,000 years ago (and ongoing).
Cretaceous/Tertiary boundary: the rise of birds and mammals. We have had a long-term interest (Cooper and Penny 1997) in designing tests to help evaluate whether the processes of microevolution are sufficient to explain macroevolution. The real challenge is to formulate hypotheses in a testable way. For example, the number of lineages of birds and mammals that survive from the Cretaceous to the present is one test. Our first estimate was 22 for birds, and current work is tending to increase this value. This still does not consider lineages that survived into the Tertiary and then went extinct later. Our initial suggestion was probably too narrow in that it lumped four models from Penny and Phillips (2004) into one model. This reduction is too simplistic, in that we need to know about survival and about ecological and morphological divergences during the Late Cretaceous, and whether crown groups of avian or mammalian orders may have existed back into the Cretaceous. More recently (Penny and Phillips 2004) we have formalized hypotheses about dinosaurs and pterosaurs, with the prediction that interactions between mammals (and ground-feeding birds) and dinosaurs would be most likely to affect the smallest dinosaurs, and similarly that interactions between birds and pterosaurs would particularly affect the smaller pterosaurs. There is now evidence for both classes of interactions, with the smallest dinosaurs and pterosaurs declining first, as predicted. Thus, testable models are now possible. Mass extinction number six: human impacts. On a broad scale, there is a good correlation between the time of human arrival and increased extinctions (Hurles et al. 2003; Martin 2005; Figure 1). However, it is necessary to distinguish different time scales (Penny 2005), and on a finer scale there are still large numbers of possibilities. In Hurles et al.
(2003) we mentioned habitat modification (including the use of fire) and introduced plants and animals (including kiore), in addition to direct predation (the ‘overkill’ hypothesis). We also need to consider the prey switching that occurs in early human societies, as evidenced by the results of Wragg (1995) on middens of different ages on Henderson Island in the Pitcairn group. In addition, the presence of human-wary or human-adapted animals will affect the distribution in the subfossil record. A better understanding of human impacts world-wide, in conjunction with pre-scientific knowledge, will make it easier to discuss the issues by removing ‘blame’. While continued spontaneous generation was accepted universally, there was the expectation that animals continued to reappear. New Zealand is one of the very best locations in the world to study many of these issues. Apart from the marine fossil record, some human impact events are extremely recent and the remains are less disrupted by time.
Abstract:
In this work we present an optimized fuzzy visual servoing system for obstacle avoidance using an unmanned aerial vehicle. Cross-entropy theory is used to optimise the gains of our controllers. The optimization process was carried out in a ROS-Gazebo 3D simulation, with extensions developed specifically for our experiments. Visual servoing is achieved through an image processing front-end that uses the CamShift algorithm to detect and track objects in the scene. Experimental flight trials using a small quadrotor were performed to validate the parameters estimated in simulation. The integration of cross-entropy methods is a straightforward way to estimate optimal gains, achieving excellent results when tested in real flights.
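The cross-entropy optimisation loop is simple enough to sketch generically: sample candidate gains from a Gaussian, evaluate a cost for each, refit the Gaussian to the elite (lowest-cost) fraction, and repeat. The toy quadratic cost stands in for the simulated flight-performance cost the study evaluates in ROS-Gazebo; the hyperparameters are illustrative.

```python
import random
import statistics

def cross_entropy_optimise(cost, mu=0.0, sigma=2.0,
                           n=50, elite=10, iters=30, seed=0):
    """Generic 1-D cross-entropy method: iteratively refit a Gaussian
    sampling distribution to the elite (lowest-cost) candidates."""
    random.seed(seed)
    for _ in range(iters):
        samples = [random.gauss(mu, sigma) for _ in range(n)]
        best = sorted(samples, key=cost)[:elite]
        mu = statistics.mean(best)
        sigma = statistics.stdev(best) + 1e-6  # floor avoids collapse
    return mu

# Toy cost: squared distance of a controller gain from an assumed optimum 1.5
gain = cross_entropy_optimise(lambda k: (k - 1.5) ** 2)
```

For a fuzzy controller with several gains, the same loop runs with a multivariate Gaussian, and each cost evaluation is one simulated servoing run.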
Abstract:
Affine covariant local image features are a powerful tool for many applications, including matching and calibrating wide-baseline images. Local feature extractors that use a saliency map to locate features require adaptation processes in order to extract affine covariant features. The most effective extractors make use of the second moment matrix (SMM) to iteratively estimate the affine shape of local image regions. This paper shows that the Hessian matrix can be used to estimate local affine shape in a similar fashion to the SMM. The Hessian matrix requires significantly less computational effort than the SMM, allowing more efficient affine adaptation. Experimental results indicate that using the Hessian matrix in conjunction with a feature extractor that selects features in regions with high second-order gradients delivers equivalent-quality correspondences in less than 17% of the processing time, compared to the same extractor using the SMM.
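The core idea can be illustrated with a one-shot finite-difference Hessian at a single point: its eigenvectors and eigenvalues give the orientation and anisotropy of the local shape ellipse. This is a sketch of the principle only, not the paper's iterative adaptation loop (which would re-estimate the Hessian after each shape normalisation), and the synthetic blob is an assumed test image.

```python
import numpy as np

def hessian_shape(img, y, x):
    """Local affine shape from the image Hessian at (y, x) via central
    finite differences; returns H and the eigenvalue anisotropy ratio."""
    dyy = img[y + 1, x] - 2 * img[y, x] + img[y - 1, x]
    dxx = img[y, x + 1] - 2 * img[y, x] + img[y, x - 1]
    dxy = (img[y + 1, x + 1] - img[y + 1, x - 1]
           - img[y - 1, x + 1] + img[y - 1, x - 1]) / 4
    H = np.array([[dxx, dxy], [dxy, dyy]])
    evals = np.linalg.eigvalsh(H)            # ascending order
    return H, abs(evals[0] / evals[1])       # anisotropy of the ellipse

# Synthetic anisotropic blob: twice as extended in x as in y
yy, xx = np.mgrid[-10:11, -10:11]
img = np.exp(-(xx**2 / 8.0 + yy**2 / 2.0))
H, ratio = hessian_shape(img, 10, 10)        # evaluate at the peak
```

A ratio well above 1 signals an elongated region; affine adaptation would normalise the patch by (an estimate of) H^(-1/2) so that the ratio approaches 1, analogously to the SMM-based procedure.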