237 results for non linear pedagogy
Abstract:
Introduction: Understanding the mechanical properties of tendon is an important step towards guiding the process of improving athletic performance, predicting injury and treating tendinopathies. The speed of sound in a medium is governed by the bulk modulus and density for fluids and isotropic materials. However, for tendon, which is a structural composite of fluid and collagen, there is some anisotropy requiring an adjustment for Poisson’s ratio. In this paper, these relationships are explored and modelled using data collected, in vivo, on human Achilles tendon. Estimates for elastic modulus and hysteresis based on speed of sound data are then compared against published values from in vitro mechanical tests. Methods: Measurements using clinical ultrasound imaging, inverse dynamics and acoustic transmission techniques were used to determine dimensions, loading conditions and longitudinal speed of sound for the Achilles tendon during a series of isometric plantar flexion exercises against body weight. Upper and lower bounds for speed of sound versus tensile stress in the tendon were then modelled and estimates derived for elastic modulus and hysteresis. Results: Axial speed of sound varied between 1850 and 2090 m·s−1, with a non-linear, asymptotic dependency on the level of tensile stress in the tendon (5–35 MPa). Estimates derived for the elastic modulus ranged between 1 and 2 GPa. Hysteresis, derived from models of the stress–strain relationship, ranged from 3 to 11%. These values agree closely with those previously reported from direct measurements obtained via in vitro mechanical tensile tests on major weight-bearing tendons. Discussion: There is sufficiently good agreement between these indirect (speed of sound derived) and direct (mechanical tensile test derived) measures of tendon mechanical properties to validate the use of this non-invasive acoustic transmission technique. This non-invasive method is suitable for monitoring changes in tendon properties as predictors of athletic performance, injury or therapeutic progression.
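As a rough illustration of how an elastic modulus follows from such measurements: for an axial wave, the wave speed satisfies c = √(M/ρ), where M is the constrained (P-wave) modulus, and one standard Poisson correction recovers Young's modulus E from M. The sketch below is a back-of-envelope calculation, not the paper's actual model; the density (~1100 kg/m³) and Poisson's ratio (0.4) are hypothetical round figures.

```python
rho = 1100.0   # assumed tendon density, kg/m^3 (hypothetical)
nu = 0.4       # assumed Poisson's ratio (hypothetical)

for c in (1850.0, 2090.0):                      # reported axial speeds, m/s
    M = rho * c**2                              # constrained (P-wave) modulus, Pa
    E = M * (1 + nu) * (1 - 2 * nu) / (1 - nu)  # Poisson-corrected Young's modulus
    print(f"c = {c:.0f} m/s  ->  E ~= {E / 1e9:.2f} GPa")
```

With these assumed values the output spans roughly 1.8 to 2.2 GPa, comparable to the 1–2 GPa range the abstract reports.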
Abstract:
This study investigates whether and how a firm’s ownership and corporate governance affect its timeliness of price discovery, which refers to the speed with which value-relevant information is incorporated into the stock price. Using panel data of 1,138 Australian firm-year observations from 2001 to 2008, we predict and find a non-linear relationship between ownership concentration and the timeliness of price discovery. We test the identity of the largest shareholder and find that only firms with family as the largest shareholder exhibit faster price discovery. There is no evidence that the presence of a second largest shareholder materially affects the timeliness of price discovery. Although we find a positive association between corporate governance quality and the timeliness of price discovery, as expected, there is no interaction effect between the largest shareholding and corporate governance in relation to the timeliness of price discovery. Further tests show no evidence of severe endogeneity problems in our study.
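A non-linear (for example, inverted-U) relationship of this kind is typically tested by adding a squared ownership term to the regression. A minimal sketch with synthetic data, purely to illustrate the specification; the variable names and values are hypothetical, not from the study:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in: 'own' is the largest shareholder's ownership fraction,
# 'timeliness' the price-discovery measure. All values are made up.
rng = np.random.default_rng(0)
own = rng.uniform(0.0, 0.6, 1138)
timeliness = 0.8 * own - 0.9 * own**2 + rng.normal(0, 0.05, 1138)

# The squared term captures the predicted non-linear (inverted-U) shape;
# a significant negative coefficient on own**2 supports it.
X = sm.add_constant(np.column_stack([own, own**2]))
print(sm.OLS(timeliness, X).fit().params)
```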
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
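The feature extraction → randomization → quantization → encoding pipeline described above can be illustrated with a deliberately simple toy: block-mean features, a keyed random projection, and a median threshold. None of this is the dissertation's HOS/Radon method; it is a minimal sketch of the generic pipeline.

```python
import numpy as np

def robust_hash(image, key, n_bits=64):
    """Toy robust image hash: coarse block-mean features -> keyed random
    projection -> median-threshold quantization -> binary encoding."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    bh, bw = h // 8, w // 8
    # Feature extraction: 8x8 block means survive small perturbations.
    feats = img[:bh * 8, :bw * 8].reshape(8, bh, 8, bw).mean(axis=(1, 3)).ravel()
    # Keyed randomization: a secret *linear* projection (linearity is exactly
    # the security weakness discussed above).
    rng = np.random.default_rng(key)
    proj = rng.standard_normal((n_bits, feats.size)) @ feats
    # Quantization/encoding: threshold at the median (a learned threshold;
    # its mere existence leaks information, as the dissertation argues).
    return (proj > np.median(proj)).astype(np.uint8)

def hamming(h1, h2):
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (128, 128))
noisy = img + rng.normal(0, 2, img.shape)  # minor, meaning-preserving change
print(hamming(robust_hash(img, key=7), robust_hash(noisy, key=7)))  # small distance
```

Unlike a cryptographic hash, the two hashes differ in only a few bits despite the pixel-level changes, which is the robustness property being studied.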
Abstract:
Despite its potential contributions to multiple sustainable policy objectives, urban transit generally holds only a small market share compared with automobiles, particularly in affluent societies with low-density urban forms like Australia. Transit service providers need to attract more people to transit by improving transit quality of service. The key to cost-effective transit service improvements lies in accurate evaluation of policy proposals, taking into account their impacts on transit users. If transit providers knew what is more or less important to their customers, they could focus their efforts on optimising customer-oriented service. Policy interventions could also be specified to influence transit users’ travel decisions, targeting customer satisfaction and broader community welfare. This motivates research into the relationship between urban transit quality of service and users’ perceptions and behaviour. This research focused on two dimensions of transit users’ travel behaviour: route choice and access arrival time choice. The study area chosen was a busy urban transit corridor linking the Brisbane central business district (CBD) and the St. Lucia campus of The University of Queensland (UQ). This multi-system corridor provided a ‘natural experiment’ for transit users between the CBD and UQ, who can choose between busway route 109 (with grade-separated exclusive right-of-way), ordinary on-street bus route 412, and the CityCat fast ferry on the Brisbane River. The population of interest was UQ attendees travelling from the CBD, or from a suburb via the CBD. Two waves of internet-based self-completion questionnaire surveys were conducted to collect data on sampled passengers’ perceptions of transit service quality and their use of public transit in the study area. The first wave collected behaviour and attitude data on respondents’ daily transit usage, together with their direct importance ratings of route-level transit service quality factors. A series of statistical analyses was conducted to examine the relationships between transit users’ travel and personal characteristics and their transit usage characteristics. A factor-cluster segmentation procedure was applied to respondents’ importance ratings of service quality variables regarding transit route preference, to explore users’ varying perspectives on transit quality of service. Based on the perceptions of service quality collected in the second wave survey, a series of quality criteria for the transit routes under study was quantitatively measured, particularly travel time reliability in terms of schedule adherence. It was shown that mixed traffic conditions and peak-period effects can affect transit service reliability. Multinomial logit models of transit users’ route choice were estimated using route-level service quality perceptions collected in the second wave survey. The relative importance of service quality factors, such as access and egress times, seat availability, and the busway system, was derived from the choice models’ significant parameter estimates. The parameter estimates were interpreted, particularly the in-vehicle time equivalents of access and egress times and of busway in-vehicle time. Market segmentation by trip origin was applied to investigate the difference in magnitude between the parameter estimates of access and egress times. The significant costs of transfers in transit trips were highlighted.
These importance ratios were applied back to the quality perceptions collected as RP data to compare satisfaction levels across the service attributes and to generate an action relevance matrix prioritising attributes for quality improvement. An empirical study of the relationship between average passenger waiting time and transit service characteristics was performed using the perceived service quality data. Passenger arrivals for services with long headways (over 15 minutes) were found to be clearly coordinated with the scheduled departure times of transit vehicles in order to reduce waiting time. This drove further investigation and modelling innovation in passengers’ access arrival time choice and its relationships with transit service characteristics and average passenger waiting time. Specifically, original contributions were made in the formulation of expected waiting time; in the analysis of risk-averse attitudes towards missing a desired service run in passengers’ access arrival time choice; and in extensions of the utility function specification for modelling the passenger access arrival distribution, using expected utility forms and non-linear probability weighting to explicitly accommodate the risk of missing an intended service and passengers’ risk aversion. Discussion of this research’s contributions to knowledge, its limitations, and recommendations for future research is provided in the concluding section of this thesis.
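To make the "equivalent in-vehicle time" interpretation concrete: dividing an attribute's utility coefficient by the in-vehicle time coefficient expresses that attribute in in-vehicle minutes. The coefficients below are hypothetical illustrative values, not the thesis's estimates:

```python
# Hypothetical MNL utility coefficients (illustrative only):
beta_ivt = -0.040       # utility per in-vehicle minute
beta_access = -0.085    # utility per access/egress minute
beta_transfer = -0.60   # utility per transfer

print(f"1 access minute ~ {beta_access / beta_ivt:.1f} in-vehicle minutes")
print(f"1 transfer      ~ {beta_transfer / beta_ivt:.0f} in-vehicle minutes")
```

With these made-up numbers, a minute of access time "costs" about 2.1 in-vehicle minutes and a transfer about 15, which is the style of result the abstract refers to when highlighting transfer costs.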
Abstract:
Reliability analysis is crucial to reducing unexpected downtime, severe failures and ever-tightening maintenance budgets for engineering assets. Hazard-based reliability methods are of particular interest, as hazard reflects the current health status of engineering assets and their imminent failure risks. Most existing hazard models were constructed using statistical methods. However, these methods rest largely on two assumptions: that the baseline failure distribution accurately represents the population concerned, and that the effects of covariates on hazards take an assumed form. These two assumptions may be difficult to satisfy and can therefore compromise the effectiveness of hazard models in application. To address this issue, a non-linear hazard modelling approach is developed in this research using neural networks (NNs), resulting in neural network hazard models (NNHMs), to deal with the limitations imposed by the two assumptions of statistical models. As failure prevention efforts succeed, less failure history becomes available for reliability analysis. Involving condition data or covariates is a natural solution to this challenge. A critical issue in involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in practice, owing to inconsistent measuring frequencies of multiple covariates, sensor failure, and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications. This research therefore investigates the incomplete covariates problem in reliability analysis. Typical approaches to handling incomplete covariates were studied to investigate their performance and effects on reliability analysis results. Since these existing approaches can underestimate the variance in regressions and introduce extra uncertainties into reliability analysis, the developed NNHMs are extended to handle incomplete covariates as an integral part of the analysis. The extended versions of the NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump. The results demonstrate that the new approach outperforms the typical incomplete-covariates handling approaches. Another problem in reliability analysis is that future covariates of engineering assets are generally unavailable. In existing practice for multi-step reliability analysis, historical covariates are used to estimate future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation due to the influence of both engineering degradation and changes in environmental settings. The commonly used covariate extrapolation methods are thus unsuitable because of error accumulation and uncertainty propagation. To overcome this difficulty, instead of directly extrapolating covariate values, projection of covariate states is conducted in this research. The estimated covariate states and the unknown covariate values in future running steps of assets constitute an incomplete covariate set, which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate the risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill demonstrates that this new multi-step reliability analysis procedure is able to generate more accurate analysis results.
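A discrete-time formulation makes the idea concrete: expand asset histories into one row per inspection interval, and let a neural network estimate the probability of failure within the interval given age and condition covariates; that probability is the discrete-time hazard. The sketch below is a generic illustration on synthetic data, not the thesis's NNHM architecture:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic "person-period" data: one row per asset per inspection interval,
# with operating age, a condition covariate, and a binary failed-this-interval
# label. All values are made up for illustration.
rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(0, 10, n)                                 # operating time
vib = rng.normal(2 + 0.3 * age, 0.5)                        # condition covariate
true_h = 1 / (1 + np.exp(-(-5 + 0.25 * age + 0.8 * vib)))   # true hazard
failed = rng.uniform(size=n) < true_h

# The NN learns the hazard directly from (age, covariates), replacing both an
# assumed baseline distribution and an assumed covariate-effect form.
nnhm = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
nnhm.fit(np.column_stack([age, vib]), failed)
print(nnhm.predict_proba([[8.0, 4.5]])[0, 1])               # estimated hazard
```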
Abstract:
Application of "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A research project has been conducted with the aim of developing concentrated plasticity methods suitable for practical advanced analysis of steel frame structures comprising non-compact sections. This paper contains a comprehensive set of analytical benchmark solutions for steel frames comprising non-compact sections, which can be used to verify the accuracy of simplified concentrated plasticity methods of advanced analysis. The analytical benchmark solutions were obtained using a distributed plasticity shell finite element model that explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. A brief description and verification of the shell finite element model is provided in this paper.
Extreme temperatures and emergency department admissions for childhood asthma in Brisbane, Australia
Abstract:
Objectives To examine the effect of extreme temperatures on emergency department admissions (EDAs) for childhood asthma. Methods An ecological design was used in this study. A Poisson linear regression model combined with a distributed lag non-linear model was used to quantify the effect of temperature on EDAs for asthma among children aged 0–14 years in Brisbane, Australia, during January 2003–December 2009, while controlling for air pollution, relative humidity, day of the week, season and long-term trends. The model residuals were checked to identify whether there was an added effect due to heat waves or cold spells. Results There were 13 324 EDAs for childhood asthma during the study period. Both hot and cold temperatures were associated with increases in EDAs for childhood asthma, and their effects both appeared to be acute. An added effect of heat waves on EDAs for childhood asthma was observed, but no added effect of cold spells was found. Male children and children aged 0–4 years were most vulnerable to heat effects, while children aged 10–14 years were most vulnerable to cold effects. Conclusions Both hot and cold temperatures seemed to affect EDAs for childhood asthma. As climate change continues, children aged 0–4 years are at particular risk for asthma.
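The distributed lag non-linear model pairs a non-linear exposure-response function with a lag structure. Below is a deliberately simplified Python sketch of the idea: a temperature spline per lag stratum, fitted in a Poisson GLM on synthetic data. Real analyses use a smooth tensor-product cross-basis (for example, the R dlnm package) and adjust for the confounders listed above:

```python
import numpy as np
import statsmodels.api as sm
from patsy import dmatrix

# Synthetic daily series standing in for admissions and temperature.
rng = np.random.default_rng(0)
n, max_lag = 1500, 7
temp = 21 + 7 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
y = rng.poisson(5, n)

# Lag matrix: column l holds the temperature l days before day t.
lagged = np.column_stack([np.roll(temp, l) for l in range(max_lag + 1)])
lagged, y = lagged[max_lag:], y[max_lag:]       # drop wrapped-around rows

# Simplified cross-basis: a temperature spline within each lag stratum.
blocks = []
for stratum in ([0], [1, 2, 3], [4, 5, 6, 7]):
    t_s = lagged[:, stratum].mean(axis=1)
    blocks.append(dmatrix("bs(x, df=4) - 1", {"x": t_s},
                          return_type="dataframe").values)
X = sm.add_constant(np.hstack(blocks))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.aic)
```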
Abstract:
Osteocytes are the most abundant cells in human bone tissue. Due to their unique morphology and location, osteocytes are thought to act as regulators in the bone remodelling process, and are believed to play an important role in astronauts’ bone mass loss after long-term space missions. There is increasing evidence that an osteocyte’s functions are highly affected by its morphology. However, changes in osteocyte morphology under altered gravity environments are still not well documented. Several in vitro studies have recently been conducted to investigate the morphological response of osteocytes to the microgravity environment, in which osteocytes were cultured on a two-dimensional flat surface for at least 24 hours before the microgravity experiments. Morphology changes of osteocytes in microgravity were then studied by comparing cell area with that of 1g control cells. However, osteocytes found in vivo have a more three-dimensional morphology, and both the cell body and the dendritic processes are sensitive to mechanical loading. Round osteocytes have a less stiff cytoskeleton and are more sensitive to mechanical stimulation than cells with a flat morphology. Thus, the relatively flat and spread shape of isolated osteocytes in 2D culture may greatly hamper their sensitivity to mechanical stimuli, and the lack of knowledge of osteocytes’ morphological characteristics in culture may lead to subjective and incomplete conclusions about how altered gravity affects osteocyte morphology. Through this work, empirical models were developed to quantitatively predict the changes in morphology of an osteocyte cell line (MLO-Y4) in culture, and the response of relatively round osteocytes to hyper-gravity stimulation was also investigated. The morphology changes of MLO-Y4 cells in culture were quantified by measuring cell area and three dimensionless shape features, aspect ratio, circularity and solidity, using widely accepted image analysis software (ImageJ). MLO-Y4 cells were cultured at low density (5×10³ per well) and the changes in morphology were recorded over 10 hours. Based on the data obtained from the image analysis, empirical models were developed using non-linear regression. The developed empirical models accurately predict the morphology of MLO-Y4 cells for different culture times and can therefore be used as a reference model for analysing MLO-Y4 cell morphology changes in various biological/mechanical studies, as necessary. The morphological response of MLO-Y4 cells with a relatively round morphology to a hyper-gravity environment was investigated using a centrifuge. After 2 hours of culture, MLO-Y4 cells were exposed to 20g for 30 minutes. Changes in the morphology of MLO-Y4 cells were quantitatively analysed by measuring the average cell area and the dimensionless shape factors aspect ratio, solidity and circularity. In this study, no significant morphology changes were detected in MLO-Y4 cells under the hyper-gravity environment (20g for 30 minutes) compared with 1g control cells.
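The three dimensionless shape features have standard definitions: aspect ratio = major/minor axis length, circularity = 4πA/P², and solidity = A divided by the convex hull area. A small sketch computing them for a synthetic cell mask, using scikit-image rather than ImageJ (the elliptical mask and the library choice are illustrative assumptions):

```python
import numpy as np
from skimage.draw import ellipse
from skimage.measure import regionprops

# Hypothetical binary mask of one segmented cell (an ellipse stands in for a
# thresholded MLO-Y4 image).
mask = np.zeros((200, 200), dtype=int)
rr, cc = ellipse(100, 100, 40, 25)
mask[rr, cc] = 1

p = regionprops(mask)[0]
aspect_ratio = p.major_axis_length / p.minor_axis_length
circularity = 4 * np.pi * p.area / p.perimeter**2   # 1.0 for a perfect circle
print(p.area, aspect_ratio, circularity, p.solidity)
```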
Abstract:
This thesis developed semi-parametric regression models for estimating the spatio-temporal distribution of outdoor airborne ultrafine particle number concentration (PNC). The models developed incorporate multivariate penalised splines, random walks, and autoregressive errors in order to estimate non-linear functions of space, time and other covariates. The models were applied to data from the "Ultrafine Particles from Traffic Emissions and Child" project in Brisbane, Australia, and to longitudinal measurements of air quality in Helsinki, Finland. The spline and random walk aspects of the models reveal how the daily trend in PNC changes over the year in Helsinki, and the similarities and differences in the daily and weekly trends across multiple primary schools in Brisbane. Midday peaks in PNC at Brisbane locations are attributed to new particle formation events at the Port of Brisbane and Brisbane Airport.
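As a pointer to how a penalised-spline component of such a model can be fitted, here is a minimal sketch using statsmodels' GAM interface on synthetic data. The data, smoothing parameter and single-covariate setup are illustrative assumptions; the thesis models additionally include random-walk and autoregressive-error components:

```python
import numpy as np
import pandas as pd
from statsmodels.gam.api import GLMGam, BSplines

# Synthetic stand-in for a daily PNC trend with a midday peak.
rng = np.random.default_rng(0)
hour = np.sort(rng.uniform(0, 24, 500))
pnc = 10 + 4 * np.exp(-((hour - 12) ** 2) / 8) + rng.normal(0, 0.8, 500)
df = pd.DataFrame({"pnc": pnc, "hour": hour})

# Penalised B-spline smooth of hour-of-day; alpha is the penalty weight.
bs = BSplines(df[["hour"]], df=[10], degree=[3])
fit = GLMGam.from_formula("pnc ~ 1", data=df, smoother=bs, alpha=[1.0]).fit()
print(fit.summary())
```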
Abstract:
As Earth's climate is rapidly changing, the impact of ambient temperature on health outcomes has attracted increasing attention in recent times. A considerable number of excess deaths have been reported as a result of exposure to hot and cold ambient temperatures. However, relatively little research has been conducted on the relation between temperature and morbidity. The aim of this study was to characterize the relationship between both hot and cold temperatures and emergency hospital admissions in Brisbane, Australia, and to examine whether the relation varied by age and socioeconomic factors. It also aimed to explore the lag structure of the temperature–morbidity association for respiratory causes, and to estimate the magnitude of emergency hospital admissions for cardiovascular diseases attributable to hot and cold temperatures, given the large contribution of these diseases to total emergency hospital admissions. A time series study design was applied using routinely collected data on daily emergency hospital admissions, weather and air pollution variables in Brisbane during 1996–2005. A Poisson regression model with a distributed lag non-linear structure was adopted to assess the impact of temperature on emergency hospital admissions after adjustment for confounding factors. Both hot and cold effects were found, with a higher risk from hot temperatures than from cold temperatures. Increases in mean temperature above 24.2°C were associated with increased morbidity, with the largest effect among the elderly (≥75 years old). The magnitude of the risk estimates for hot temperature varied by age and socioeconomic factors. High population density, low household income, and unemployment appeared to modify the temperature–morbidity relation. There were different lag structures for hot and cold temperatures, with an acute hot effect within 3 days of exposure and a cold effect on respiratory diseases lagged by about 2 weeks. A strong harvesting effect after 3 days was evident for respiratory diseases. People suffering from cardiovascular diseases were found to be more vulnerable to hot temperatures than to cold temperatures. However, more cardiovascular admissions were attributable to cold temperatures in Brisbane than to hot temperatures. This study contributes to the knowledge base on the association between temperature and morbidity, which is vitally important in the context of ongoing climate change. The findings may provide useful information for the development and implementation of public health policy and strategic initiatives designed to reduce and prevent the burden of disease due to the impact of climate change.
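The seemingly paradoxical conclusion, that hot effects are stronger yet more admissions are attributable to cold, follows from how attributable numbers are computed: cool days are far more numerous than extreme hot days. In the standard formulation (stated here as an assumption about the exact method used), the attributable fraction and number are

$$\mathrm{AF}_t = \frac{\mathrm{RR}_t - 1}{\mathrm{RR}_t}, \qquad \mathrm{AN} = \sum_t \mathrm{AF}_t \, n_t,$$

where $n_t$ is the number of admissions on day $t$; a modest RR applied over many cold days can outweigh a larger RR applied over a few hot days.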
Abstract:
Monte Carlo simulations were used to investigate the relationship between the morphological characteristics and the diffusion tensor (DT) of partially aligned networks of cylindrical fibres. The orientation distributions of the fibres in each network were approximately uniform within a cone of a given semi-angle (θ0), which was used to control the degree of alignment of the fibres. The networks studied ranged from perfectly aligned (θ0 = 0°) to completely disordered (θ0 = 90°). Our results are qualitatively consistent with previous numerical models in the overall behaviour of the DT. However, we report a non-linear relationship between the fractional anisotropy (FA) of the DT and collagen volume fraction, which differs from the findings of previous work. We discuss our results in the context of diffusion tensor imaging of articular cartilage. We also demonstrate how appropriate diffusion models have the potential to enable quantitative interpretation of the experimentally measured diffusion-tensor FA in terms of collagen fibre alignment distributions.
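For readers unfamiliar with the quantities involved: the DT can be estimated from simulated walker displacements, and FA from its eigenvalues via FA = √(3/2) · √(Σᵢ(λᵢ − λ̄)²) / √(Σᵢλᵢ²). The sketch below replaces fibre collisions with simple anisotropic Gaussian displacements, so it illustrates only the tensor and FA computation, not the authors' network simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
t = 1.0                                       # diffusion time (arbitrary units)
# Displacements hindered more across than along the mean fibre axis (z).
disp = rng.normal(scale=[0.8, 0.8, 1.4], size=(200_000, 3))

D = disp.T @ disp / (2 * t * len(disp))       # diffusion tensor estimate
lam = np.linalg.eigvalsh(D)                   # eigenvalues
fa = np.sqrt(1.5 * np.sum((lam - lam.mean()) ** 2) / np.sum(lam ** 2))
print(np.round(D, 3))
print(f"FA = {fa:.3f}")                       # 0 = isotropic, toward 1 = aligned
```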
Abstract:
Background The association between temperature and mortality has been examined mainly in North America and Europe. However, less evidence is available for developing countries, especially Thailand. In this study, we examined the relationship between temperature and mortality in Chiang Mai city, Thailand, during 1999–2008. Method A time series model was used to examine the effects of temperature on cause-specific mortality (non-external, cardiopulmonary, cardiovascular, and respiratory) and age-specific non-external mortality (≤64, 65–74, 75–84, and ≥85 years), while controlling for relative humidity, air pollution, day of the week, season and long-term trend. We used a distributed lag non-linear model to examine the delayed effects of temperature on mortality up to 21 days. Results We found non-linear effects of temperature on all mortality types and age groups. Both hot and cold temperatures resulted in immediate increases in all mortality types and age groups. Generally, the hot effects on all mortality types and age groups were short-term, while the cold effects lasted longer. The relative risk of non-external mortality associated with cold temperature (19.35°C, 1st percentile of temperature) relative to 24.7°C (25th percentile of temperature) was 1.29 (95% confidence interval (CI): 1.16, 1.44) for lags 0–21. The relative risk of non-external mortality associated with high temperature (31.7°C, 99th percentile of temperature) relative to 28°C (75th percentile of temperature) was 1.11 (95% CI: 1.00, 1.24) for lags 0–21. Conclusion This study indicates that exposure to both hot and cold temperatures was related to increased mortality. Both cold and hot effects occurred immediately, but cold effects lasted longer than hot effects. This study provides useful data for policy makers to better prepare local responses to manage the impact of hot and cold temperatures on population health.
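In DLNM terms, such overall cumulative relative risks compare two temperatures by summing the fitted log-relative-risk surface f(T, ℓ) over the lag window (the notation below is the standard formulation, assumed rather than quoted from the paper):

$$\mathrm{RR}(T \mid T_{\mathrm{ref}}) = \exp\!\left\{ \sum_{\ell=0}^{21} \big[ f(T,\ell) - f(T_{\mathrm{ref}},\ell) \big] \right\}$$

So RR(19.35°C | 24.7°C) = 1.29 corresponds to a 29% increase in non-external mortality cumulated over the 21 lag days.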
Abstract:
The hollow flange beam (HFB) is a unique cold-formed steel section developed in Australia for use as a flexural member. Research has identified that the HFB section's flexural capacity for intermediate span members is limited by lateral distortional buckling, which is characterized by simultaneous lateral deflection, twist, and web distortion. This buckling behaviour is mainly due to the unique geometry of the section, comprising two torsionally stiff triangular flanges connected by a slender web. This paper presents a finite element analytical model suitable for non-linear analysis of HFB flexural members. The model includes all significant effects that may influence the ultimate capacity of such members, including material inelasticity, local buckling, member instability, web distortion, residual stresses, and geometric imperfections. It was found to accurately predict both the elastic lateral distortional buckling moments and the ultimate capacities of HFB flexural members, and was therefore used in the development of design curves and suitable design procedures.
Abstract:
The continuous growth of XML data poses a great concern in the area of XML data management. The need to process large amounts of XML data brings complications to many applications, such as information retrieval, data integration and many others. One way of simplifying this problem is to break the massive amount of data into smaller groups by applying clustering techniques. However, XML clustering is an intricate task that may involve processing both the structure and the content of XML data in order to identify similar XML data. This research presents four clustering methods: two utilizing the structure of XML documents and two utilizing both the structure and the content. The two structural clustering methods have different data models: one is based on a path model and the other on a tree model. These methods employ rigid similarity measures which aim to identify corresponding elements between documents with different or similar underlying structure. The two clustering methods that utilize both structural and content information vary in terms of how structure and content similarity are combined. One clustering method calculates document similarity using a linear weighting combination of structure and content similarities, with the content similarity based on a semantic kernel. The other method calculates the distance between documents by a non-linear combination of the structure and content of XML documents using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the structure-only clustering method based on the path model, as the tree similarity measure does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of content information on most test document collections. To further the research, the structural clustering method based on the tree model was extended and employed in XML transformation. The results from the experiments show that the proposed transformation process is faster than a traditional transformation system that translates and converts the source XML documents sequentially. Also, the schema matching process of XML transformation produces a better matching result in a shorter time.
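The linear versus non-linear combination distinction can be shown on toy pairwise similarity matrices. The sketch below uses a weighted sum for the linear case and an element-wise kernel product for the non-linear case (a valid construction by the Schur product theorem); the matrices are hypothetical and this is not the thesis's semantic kernel:

```python
import numpy as np

def linear_combined(s_struct, s_content, alpha=0.5):
    """Linear weighting combination of structure and content similarity."""
    return alpha * s_struct + (1 - alpha) * s_content

def kernel_combined(k_struct, k_content):
    """Non-linear combination: the element-wise product of two valid kernels
    is itself a valid kernel, coupling structure and content multiplicatively."""
    return k_struct * k_content

# Toy 3x3 pairwise similarities for three documents (hypothetical values).
Ks = np.array([[1.0, 0.8, 0.2], [0.8, 1.0, 0.3], [0.2, 0.3, 1.0]])
Kc = np.array([[1.0, 0.5, 0.6], [0.5, 1.0, 0.4], [0.6, 0.4, 1.0]])
print(linear_combined(Ks, Kc))
print(kernel_combined(Ks, Kc))
```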
Abstract:
Non-linear feedback shift register (NLFSR) ciphers are cryptographic tools of choice in industry, especially for mobile communication. Their attractive feature is high efficiency when implemented in hardware or software. However, the main problem with NLFSR ciphers is that their security is still not well investigated. This paper makes progress in the study of the security of NLFSR ciphers. In particular, we show a distinguishing attack on linearly filtered NLFSR (LF-NLFSR) ciphers. We extend the attack to a linear combination of LF-NLFSRs. We investigate the security of a modified version of the Grain stream cipher and show its vulnerability to both key recovery and distinguishing attacks.
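To fix the structure being attacked: an LF-NLFSR updates its state with a non-linear feedback function but emits keystream through a purely linear filter of the state bits. A toy Python sketch with made-up tap positions (not Grain's actual feedback and filter functions):

```python
def lf_nlfsr(seed, n_out, width=16):
    """Toy linearly filtered NLFSR (illustrative tap positions only)."""
    state = [(seed >> i) & 1 for i in range(width)]
    out = []
    for _ in range(n_out):
        # Linear filter: keystream bit is a XOR (linear function) of state bits.
        out.append(state[0] ^ state[5] ^ state[11])
        # Non-linear feedback: the AND term makes the state update non-linear.
        fb = state[0] ^ state[3] ^ (state[7] & state[12])
        state = state[1:] + [fb]
    return out

print(lf_nlfsr(0xACE1, 16))
```

Roughly speaking, because the output is a linear function of the state, an attacker can combine keystream bits into expressions whose statistical bias distinguishes the keystream from random, which is the kind of weakness the paper exploits.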