969 results for technology standard
Abstract:
Traditional technology adoption models identified ‘ease of use’ and ‘usefulness’ as the dominant factors for technology adoption. However, recent studies in healthcare have established that these two factors are not always reliable on their own and that other factors may influence technology adoption. To establish the identity of these additional factors, a mixed-method approach was used and data were collected through interviews and a survey. The survey instrument was developed specifically for this study so that it would be relevant to the Indian healthcare setting. We identified clinical management and technological barriers as the dominant factors influencing wireless handheld technology adoption in the Indian healthcare environment. The results of this study showed that new technology models will benefit from considering the clinical influences of wireless handheld technology, in addition to the known factors. The scope of this study is restricted to wireless handheld devices such as PDAs, smartphones, and handheld PCs. (Gururajan, Raj; Hafeez-Baig, Abdul; Gururajan, Vijaya)
Abstract:
In the multi-view approach to semi-supervised learning, we choose one predictor from each of multiple hypothesis classes, and we co-regularize our choices by penalizing disagreement among the predictors on the unlabeled data. We examine the co-regularization method used in the co-regularized least squares (CoRLS) algorithm, in which the views are reproducing kernel Hilbert spaces (RKHSs) and the disagreement penalty is the average squared difference in predictions. The final predictor is the pointwise average of the predictors from each view. We call the set of predictors that can result from this procedure the co-regularized hypothesis class. Our main result is a tight bound on the Rademacher complexity of the co-regularized hypothesis class in terms of the kernel matrices of each RKHS. We find that co-regularization reduces the Rademacher complexity by an amount that depends on the distance between the two views, as measured by a data-dependent metric. We then use standard techniques to bound the gap between training error and test error for the CoRLS algorithm. Experimentally, we find that the amount of reduction in complexity introduced by co-regularization correlates with the amount of improvement that co-regularization gives in the CoRLS algorithm.
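To make the setup concrete, here is a minimal sketch of a CoRLS-style fit in Python, assuming squared loss on the averaged predictor over the labeled points, an RBF kernel for each view, and a squared-difference disagreement penalty on the unlabeled points. The objective form, kernel choice, and all names (`corls_fit`, `lam1`, `mu`, etc.) are illustrative assumptions, not the paper's exact formulation; the coefficients solve the block linear system obtained from the objective's stationarity conditions.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def corls_fit(X1, X2, y, labeled, lam1=1e-2, lam2=1e-2, mu=1.0):
    """Co-regularized least squares over two views (illustrative form).

    X1, X2  : (n, d1), (n, d2) feature matrices, one per view.
    y       : labels for the labeled points only.
    labeled : (n,) boolean mask of labeled points.
    Returns dual coefficients (a, b); the final predictor on the
    training points is the pointwise average (K1 @ a + K2 @ b) / 2.
    """
    n = X1.shape[0]
    K1, K2 = rbf_kernel(X1, X1), rbf_kernel(X2, X2)
    L, U = np.flatnonzero(labeled), np.flatnonzero(~labeled)
    A, B = K1[L], K2[L]   # labeled rows (loss term)
    P, Q = K1[U], K2[U]   # unlabeled rows (disagreement penalty)
    # Stationarity of
    #   ||y - (A a + B b)/2||^2 + lam1 a'K1 a + lam2 b'K2 b + mu ||P a - Q b||^2
    # gives a 2x2 block linear system in (a, b).
    M = np.block([
        [A.T @ A / 2 + 2 * lam1 * K1 + 2 * mu * P.T @ P,
         A.T @ B / 2 - 2 * mu * P.T @ Q],
        [B.T @ A / 2 - 2 * mu * Q.T @ P,
         B.T @ B / 2 + 2 * lam2 * K2 + 2 * mu * Q.T @ Q],
    ])
    rhs = np.concatenate([A.T @ y, B.T @ y])
    ab = np.linalg.solve(M + 1e-8 * np.eye(2 * n), rhs)  # small jitter
    return ab[:n], ab[n:]

# Tiny demo: two noisy views of the same 1-D signal, few labels.
rng = np.random.default_rng(0)
n, l = 60, 10
t = rng.uniform(-3, 3, n)
X1 = np.c_[t + 0.1 * rng.standard_normal(n)]
X2 = np.c_[t + 0.1 * rng.standard_normal(n)]
mask = np.zeros(n, dtype=bool); mask[:l] = True
a, b = corls_fit(X1, X2, np.sin(t[:l]), mask)
```

Increasing `mu` tightens the coupling between the two views, which is exactly the mechanism the abstract's Rademacher-complexity bound quantifies.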
Abstract:
An experimental laboratory investigation was carried out to assess the structural adequacy of a disused PHO Class Flat Bottom Rail Wagon (FRW) for a single-lane low-volume road bridge application as per the design provisions of the Australian Bridge Design Standard AS 5100 (2004). The investigation also encompassed a review of the risk associated with the pre-existing damage incurred by wagons during their service life on rail. The main objective of the laboratory testing of the FRW was to physically measure its performance under the same applied traffic loading it would be required to resist as a road bridge deck. Achieving this would have required constructing and testing a full-width (5.2 m), single-lane, single-span (approximately 10 m), simply supported bridge in a structural laboratory. However, the clear spacing available between the columns of the laboratory's loading portal frame was insufficient to accommodate the 5.2 m wide bridge deck, let alone the clearance normally considered necessary in structural testing. Therefore, only half of the full-scale bridge deck (a single FRW of width 2.6 m) could be accommodated and tested, with the continuity of the bridge deck in the lateral direction applied as boundary constraints along the full length of the FRW at six selected locations. To the best of the author's knowledge, this represents a novel approach to bridge deck testing not yet reported in the literature. The test was carried out under two loadings provided in AS 5100 (2004): a stationary W80 wheel load and a moving M1600 axle load. As the bridge investigated in the study is a single-lane, single-span, low-volume road bridge, the risk of pre-existing damage and the expected high-cycle fatigue failure potential were assessed as minimal, and hence the bridge deck was not tested structurally for fatigue/fracture. The investigation instead focussed on the serviceability and ultimate limit state requirements arising from the high axle loads. The testing regime nevertheless involved extensive recording of strains and deflections at several critical locations of the FRW. Three locations of the W80 point load and two locations of the M1600 axle load were considered for the serviceability testing; the FRW was also tested under the ultimate load dictated by the M1600. The outcomes of the experimental investigation demonstrated that the FRW is structurally adequate to resist the prescribed traffic loadings set out in AS 5100 (2004). As the loading was applied directly onto the FRW, the laboratory testing is assessed as being significantly conservative. In the field, the FRW bridge deck would only resist the load transferred by the running platform, where, depending on the design, composite action might exist; the share of the loading to be resisted by the FRW would therefore be smaller than in the system tested in the laboratory. On this basis, a demonstration bridge is under construction at the time of writing this thesis, and future research will involve field testing to assess its performance.
Abstract:
All organisations, irrespective of size and type, need effective information security management (ISM) practices to protect vital organisational information assets. However, little is known about the information security management practices of nonprofit organisations. Australian nonprofit organisations (NPOs) employed 889,900 people, managed 4.6 million volunteers and contributed $40,959 million to the economy during 2006-2007 (Australian Bureau of Statistics, 2009). This thesis describes the perceptions of information security management in two Australian NPOs and examines the appropriateness of the ISO 27002 information security management standard in an NPO context. The overall approach to the research is interpretive. A collective case study was performed, consisting of two instrumental case studies with the researcher embedded within the two NPOs for extended periods of time. Data gathering and analysis were informed by grounded theory and action research, and the Technology Acceptance Model was utilised as a lens to explore the findings and provide limited generalisability to other contexts. The major findings include a distinct lack of information security management best practice in both organisations. ISM governance and risk management were lacking, and ISM policy was either outdated or non-existent. While some user-focused ISM practices were evident, references to standards such as ISO 27002 were absent. The main factor that negatively impacted ISM practices was the lack of resources available for ISM in the NPOs studied. Two novel aspects of information security discovered in this research were the importance of accuracy and consistency of information. The contribution of this research is a preliminary understanding of ISM practices and perceptions in NPOs. Recommendations for a new approach to managing information security in nonprofit organisations are proposed.
Abstract:
As the service-oriented architecture (SOA) paradigm has become ever more popular, different standardization efforts have been proposed by various consortia to enable interaction among heterogeneous environments through this paradigm. This chapter will overview the most prevalent of these SOA efforts. It will first show how technical services can be described, how they can interact with each other, and how they can be discovered by users. Next, the chapter will present different standards to facilitate service composition and to design service-oriented environments in light of a universal understanding of service orientation. The chapter will conclude with a summary and a discussion of the limitations of the reviewed standards regarding their ability to describe service properties. This paves the way to the next chapters, where the USDL standard, which aims to lift such limitations, will be presented.
Abstract:
The pervasiveness of technology in the 21st century has meant that adults and children live in a society where digital devices are integral to everyday life and participation in society. How we communicate, learn, work, entertain ourselves, and even shop is influenced by technology. Therefore, before children begin school they are potentially exposed to a range of learning opportunities mediated by digital devices. These devices include microwaves, mobile phones, computers, and console games such as Playstations® and iPods®. In Queensland preparatory classrooms and in the homes of these children, teachers and parents support and scaffold young children’s experiences, providing them with access to a range of tools that promote learning and provide entertainment. This paper examines teachers’ and parents’ perspectives and considers whether they are techno-optimists, who advocate for and promote the inclusion of digital technology, or techno-pessimists, who prefer to exclude digital devices from young children’s everyday experiences. An exploratory, single case study design was utilised to gather data from three teachers and ten parents of children in the preparatory year. Teacher data were collected through interviews and email correspondence. Parent data were collected from questionnaires and focus groups. All parents who responded to the research invitation were mothers. Analysis of the data identified a misalignment between adults’ perspectives: teachers were identified as techno-optimists and parents as techno-pessimists, with further emergent themes particular to each category being established. This is concerning because both teachers and mothers influence young children’s experiences and numeracy knowledge; thus, a shared understanding and a common commitment to supporting young children’s use of technology would be beneficial. Further research must investigate fathers’ perspectives on digital devices and the beneficial and detrimental roles that a range of digital devices, tools, and entertainment gadgets play in 21st-century children’s lives.
Abstract:
Some initial EUVL patterning results for polycarbonate-based non-chemically amplified resists are presented. Without full optimization of the developer, a resolution of 60 nm line/space patterns could be obtained. With slight overexposure (1.4 × E0), 43.5 nm lines at a half pitch of 50 nm could be printed. At 2 × E0, 28.6 nm lines at a half pitch of 50 nm could be obtained, with a line-edge roughness (LER) just above that expected from mask roughness. Upon being irradiated with EUV photons, these polymers undergo chain scission with the loss of carbon dioxide and carbon monoxide. The remaining photoproducts appear to be non-volatile under standard EUV irradiation conditions, but exhibit increased solubility in developer compared to the unirradiated polymer. The sensitivity of the polymers to EUV light is related to their oxygen content, and ways to increase the sensitivity of the polymers to 10 mJ cm−2 are discussed.
Abstract:
The mineral crandallite CaAl3(PO4)2(OH)5·(H2O) has been identified in deposits found in the Jenolan Caves, New South Wales, Australia, using a combination of X-ray diffraction and Raman spectroscopic techniques. A comparison is made between the vibrational spectra of crandallite found in the Jenolan Caves and a standard crandallite. Raman and infrared bands are assigned to PO4^3- and HPO4^2- stretching and bending modes. The predominant features are the internal vibrations of the PO4^3- and HPO4^2- groups. A mechanism for the formation of crandallite is presented and the conditions for its formation are elucidated.
Abstract:
This thesis investigates profiling and differentiating customers through the use of statistical data mining techniques. The business application of our work centres on examining individuals’ seldom-studied yet critical consumption behaviour over an extensive time period within the context of the wireless telecommunication industry; consumption behaviour (as opposed to purchasing behaviour) is behaviour that has been performed so frequently that it has become habitual and involves minimal intention or decision making. The key variables investigated are the activity-initiation timestamp and cell tower location, as well as the activity type and usage quantity (e.g., a voice call with its duration in seconds); the research focuses on customers’ spatial and temporal usage behaviour. The main methodological emphasis is on the development of clustering models based on Gaussian mixture models (GMMs), which are fitted using the recently developed variational Bayesian (VB) method. VB is an efficient deterministic alternative to the popular but computationally demanding Markov chain Monte Carlo (MCMC) methods. The standard VB-GMM algorithm is extended by allowing component splitting, so that it is robust to initial parameter choices and can automatically and efficiently determine the number of components. The new algorithm we propose allows more effective modelling of individuals’ highly heterogeneous and spiky spatial usage behaviour, or more generally human mobility patterns; the term spiky describes data patterns with large areas of low probability mixed with small areas of high probability. Customers are then characterised and segmented based on the fitted GMM, which captures how each of them uses the products/services spatially in their daily lives; this essentially reflects their likely lifestyle and occupational traits. Other significant research contributions include fitting GMMs using VB to circular data, i.e., the temporal usage behaviour, and developing clustering algorithms suitable for high-dimensional data based on the use of VB-GMM.
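The thesis's split-based VB-GMM algorithm is not reproduced here, but the core behaviour it extends — fitting a GMM by variational Bayes so that the effective number of components is determined automatically — can be sketched with scikit-learn's `BayesianGaussianMixture`, whose Dirichlet-process prior drives the weights of superfluous components toward zero. The synthetic "spiky" usage data and all settings below are illustrative assumptions only.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for one customer's "spiky" spatial usage:
# a few tight clusters (habitual locations) plus diffuse background movement.
home = rng.normal([0.0, 0.0], 0.05, size=(300, 2))
work = rng.normal([3.0, 1.0], 0.05, size=(200, 2))
background = rng.normal([1.5, 0.5], 1.5, size=(50, 2))
X = np.vstack([home, work, background])

# Variational Bayesian GMM: start with more components than needed and
# let the Dirichlet-process prior shrink the weights of unneeded ones.
vbgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(X)

active = vbgmm.weights_ > 0.01  # effectively non-empty components
print("effective components:", active.sum())
print("their weights:", np.round(vbgmm.weights_[active], 3))
```

In the customer-profiling setting described above, `X` would hold, for example, cell-tower coordinates of a single customer's activity, and the surviving components would correspond to that customer's habitual locations.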
Abstract:
As family history has been established as a risk factor for prostate cancer, attempts have been made to isolate predisposing genetic variants related to hereditary prostate cancer. With many genetic variants still to be identified and investigated, it is not yet possible to fully understand the impact of genetic variants on prostate cancer development. The high survival rates among men with prostate cancer have meant that other issues, such as quality of life (QoL), have also become important. Through their effect on a person’s health, a range of inherited genetic variants may potentially influence QoL in men with prostate cancer, even prior to treatment. Until now, limited research has been conducted on the relationship between genetics and QoL. Thus, this study contributes to an emerging field by aiming to identify genetic variants related to QoL in men with prostate cancer. It is hoped that this study may lead to future research that will identify men who have an increased risk of poor QoL following prostate cancer treatment, which would aid in developing treatments that are individually tailored to support them. Previous studies have established that genetic variants of Vascular Endothelial Growth Factor (VEGF) and Insulin-like Growth Factor 1 (IGF-1) may play a role in prostate cancer development. VEGF and IGF-1 have also been reported to be associated with QoL in people with ovarian cancer and colorectal cancer, respectively. This study completed a series of secondary analyses using two major data-sets (from 850 men newly diagnosed with prostate cancer, and approximately 550 men from the general Queensland population), in which genetic variants of VEGF and IGF-1 were investigated for associations with prostate cancer susceptibility and QoL. The first aim of this research was to investigate genetic variants in the VEGF and IGF-1 genes for an association with the risk of prostate cancer. It was found that one IGF-1 genetic variant (rs35765) had a statistically significant association with prostate cancer (p = 0.04), and one VEGF genetic variant (rs2146323) had a statistically significant association with advanced prostate cancer (p = 0.02). The estimates suggest that carriers of the CA and AA genotypes for rs35765 may have a reduced risk of developing prostate cancer (Odds Ratio (OR) = 0.72, 95% Confidence Interval (CI) = 0.55–0.95; OR = 0.60, 95% CI = 0.26–1.39, respectively). Meanwhile, carriers of the CA and AA genotypes for rs2146323 may be at increased risk of advanced prostate cancer, defined by a Gleason score above 7 (OR = 1.72, 95% CI = 1.12–2.63; OR = 1.90, 95% CI = 1.08–3.34, respectively). Utilising the widely used short-form health survey, the SF-36v2, the second aim of this study was to investigate the relationship between prostate cancer and QoL prior to treatment. Assessing QoL at this time-point was important, as little research has been conducted to evaluate whether prostate cancer affects QoL regardless of treatment. The analyses found that mean SF-36v2 scale scores related to physical health were higher by at least 0.3 standard deviations (SD) among men with prostate cancer than in the general-population comparison group. This difference was considered clinically significant (defined by group differences in mean SF-36v2 scores of at least 0.3 SD). These differences were also statistically significant (p < 0.05).
Mean QoL scale scores related to mental health were similar between men with prostate cancer and those from the general-population comparison group. The third aim of this study was to investigate genetic variants in the VEGF and IGF-1 genes for an association with QoL in prostate cancer patients prior to their treatment. It was essential to evaluate these relationships prior to treatment, before the involvement of these genes was potentially interrupted by treatment. The analyses found that some genetic variants had a small clinically significant association (0.3 SD) with some of the QoL domains experienced by these men. However, most relationships were not statistically significant (p > 0.05). Most of the associations found identified that a small sub-group of men with prostate cancer (approximately 2%) reported, on average, a slightly better QoL than the majority of the prostate cancer patients. The fourth aim of this research was to investigate whether associations between genetic variants in VEGF and IGF-1 and QoL were specific to men with prostate cancer, or were also applicable to the general male population. It was found that twenty of the one hundred relationships examined between the genetic variants of VEGF and IGF-1 and the QoL health measures and scales differed between these groups. In the majority of the relationships involving VEGF SNPs that differed, a clinically significant difference (0.3 SD or more) between mean scores among the genotype groups was found in prostate cancer patients, while mean scores among men from the general-population comparison group were similar. For example, prostate cancer participants who carried at least one T allele (CT or TT genotype) for rs3024994 had a clinically significantly higher (0.3 SD) mean QoL score on the role-physical scale than participants who carried the CC genotype. This was not seen among men from the general-population sample, where the mean score was similar between genotype groups. The opposite was seen for the IGF-1 SNPs examined. Overall, these relationships were not considered to directly impact the clinical options for men with prostate cancer. As this study utilised secondary data from two separate studies, there are a number of important limitations that should be acknowledged, including issues of multiple comparisons, power, and missing or unavailable data. It is recommended that this study be replicated as a better-designed study that takes greater consideration of the many factors involved in prostate cancer and QoL. Investigation into other genetic variants of VEGF or IGF-1 is also warranted, as is consideration of other genes and their relationship with QoL. Through identifying certain genetic variants that have a modest association with prostate cancer, this project adds to the knowledge surrounding VEGF and IGF-1 and their role in prostate cancer susceptibility. Importantly, this project has also introduced the potential role genetics plays in QoL, through investigating the relationships between genetic variants of VEGF and IGF-1 and QoL.
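For readers less familiar with the statistics reported above, the snippet below shows how an odds ratio and its Wald 95% confidence interval are conventionally computed from a 2×2 genotype-by-case-status table. The counts are invented for illustration and are not the study's data.

```python
import math

# Hypothetical 2x2 table (NOT the study's data): carriers of a genotype
# vs. non-carriers, cross-tabulated against case/control status.
a, b = 120, 180   # cases:    carriers, non-carriers
c, d = 100, 108   # controls: carriers, non-carriers

odds_ratio = (a * d) / (b * c)

# Wald 95% CI, computed on the log-odds scale and exponentiated back.
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

An OR below 1 with a CI excluding 1 indicates a statistically significant reduced odds of disease among carriers; a CI spanning 1 (as for the rs35765 AA genotype above) means the direction of effect is uncertain.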
Abstract:
Diabetes is an increasingly prevalent disease worldwide. Early management of its complications can prevent morbidity and mortality in this population. Peripheral neuropathy, a significant complication of diabetes, is the major cause of foot ulceration and amputation in diabetes. Delay in attending to complications of the disease contributes to significant medical expenses for diabetic patients and the community. Early structural changes to the neural components of the retina have been demonstrated to occur prior to the clinically visible retinal vasculature complication of diabetic retinopathy. Additionally, visual function loss has been shown to exist before the ophthalmoscopic manifestations of vasculature damage. The purpose of this thesis was to evaluate the relationship between diabetic peripheral neuropathy and both retinal structure and visual function. The key question was whether diabetic peripheral neuropathy is the potential underlying factor responsible for retinal anatomical change and visual function loss in people with diabetes. This study was conducted on a cohort with type 2 diabetes. Retinal nerve fibre layer (RNFL) thickness was assessed by means of Optical Coherence Tomography (OCT). Visual function was assessed using two different methods: Standard Automated Perimetry (SAP) and flicker perimetry, both performed within the central 30 degrees of fixation. The level of diabetic peripheral neuropathy (DPN) was assessed using two techniques, Quantitative Sensory Testing (QST) and the Neuropathy Disability Score (NDS). These techniques are known to be capable of detecting DPN at very early stages, and NDS has also been shown to be a gold standard for detecting ‘risk of foot ulceration’. Findings reported in this thesis showed that RNFL thickness, particularly in the inferior quadrant, has a significant association with the severity of DPN when the condition is assessed using NDS. More specifically, it was observed that inferior RNFL thickness can differentiate individuals who are at higher risk of foot ulceration from those at lower risk, indicating that RNFL thickness can predict late-stage DPN. Investigating the association between RNFL and QST did not show any meaningful interaction, which indicates that RNFL thickness was not as predictive of neuropathy status for this cohort as NDS. In both of these studies, control participants did not have different results from the type 2 cohort without DPN, suggesting that RNFL thickness is not a marker for diagnosing DPN at early stages. The latter finding also indicated that diabetes per se is unlikely to affect RNFL thickness. Visual function as measured by SAP and flicker perimetry was found to be associated with the severity of peripheral neuropathy as measured by NDS. These findings were also capable of differentiating individuals at higher risk of foot ulceration; however, visual function also proved not to be a marker for early diagnosis of DPN. It was found that neither SAP nor flicker sensitivity has a meaningful association with DPN when neuropathy status was measured using QST. Importantly, diabetic retinopathy did not explain any of the findings in these experiments. The work described here is valuable, as no other research to date has investigated the association between diabetic peripheral neuropathy and either retinal structure or visual function.
Abstract:
Background: Bioimpedance techniques provide a reliable method of assessing unilateral lymphedema in a clinical setting. Bioimpedance devices are traditionally used to assess body composition at a current frequency of 50 kHz. However, these devices are not directly transferable to the assessment of lymphedema, as the sensitivity of measuring the impedance of extracellular fluid is frequency dependent. It has previously been shown that the best frequency at which to detect extracellular fluid is 0 kHz (i.e., DC). However, measurement at this frequency is not possible in practice due to the high skin impedance at DC, and an estimate is usually determined from low-frequency measurements. This study investigated the efficacy of various low-frequency ranges for the detection of lymphedema. Methods and Results: Limb impedance was measured at 256 frequencies between 3 kHz and 1000 kHz for sample control, arm lymphedema, and leg lymphedema populations. Limb impedance was measured using the ImpediMed SFB7 and ImpediMed L-Dex® U400, with equipotential electrode placement on the wrists and ankles. The contralateral limb impedance ratio for arms and legs was used to calculate a lymphedema index (L-Dex) at each measurement frequency. The standard deviation of the limb impedance ratio in a healthy control population has been shown to increase with frequency for both the arm and the leg. Box-and-whisker plots of the spread of the control and lymphedema populations show good differentiation between the arm and leg L-Dex measured for lymphedema subjects and that measured for control subjects up to a frequency of about 30 kHz. Conclusions: It can be concluded that impedance measurements above a frequency of 30 kHz have decreased sensitivity to extracellular fluid and are not reliable for the early detection of lymphedema.
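The L-Dex computation implemented in the device is proprietary, but the frequency argument in this abstract can be illustrated with a simple stand-in index (an assumption, not the actual formula): the contralateral impedance ratio standardised by the control population's mean and SD at each frequency. Because the control SD grows with frequency, the standardised separation between an affected limb and the control spread shrinks as frequency rises. All data below are synthetic.

```python
import numpy as np

def interlimb_index(z_unaffected, z_affected, control_ratios):
    """Standardised inter-limb impedance ratio, one value per frequency.

    z_unaffected, z_affected : (n_freq,) impedance spectra of each limb.
    control_ratios           : (n_controls, n_freq) contralateral ratios
                               from a healthy reference population.
    Larger values indicate relatively more extracellular fluid in the
    affected limb (extra fluid lowers its impedance). This is an
    illustrative stand-in for the device's proprietary L-Dex formula.
    """
    ratio = z_unaffected / z_affected
    mu = control_ratios.mean(axis=0)
    sd = control_ratios.std(axis=0, ddof=1)  # SD grows with frequency
    return (ratio - mu) / sd

rng = np.random.default_rng(1)
n_controls, n_freq = 50, 256
# Synthetic controls: ratio ~ N(1, sd(f)) with sd increasing in frequency,
# mimicking the abstract's observation about the control population.
sd_f = np.linspace(0.02, 0.08, n_freq)
controls = 1.0 + rng.normal(0.0, 1.0, (n_controls, n_freq)) * sd_f
subject_unaff = np.full(n_freq, 100.0)
subject_aff = np.full(n_freq, 88.0)  # lower impedance: more fluid
idx = interlimb_index(subject_unaff, subject_aff, controls)
print("index at lowest vs highest frequency:", idx[0].round(1), idx[-1].round(1))
```

With these synthetic numbers the affected subject sits several SD from the controls at 3 kHz but well under 2 SD at 1 MHz, mirroring the loss of differentiation at higher frequencies reported in the abstract.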
Abstract:
The knowledge base is one of the emerging concepts in the Knowledge Management area. As there exists no agreed-upon standard definition of a knowledge base, this paper defines a knowledge base in terms of our research on Enterprise Systems (ES). The knowledge base is defined with reference to Learning Network Theory. Using this theoretical framework, we investigate the roles of management and operational staff in organisations and how their interactions can create a better ES knowledge base that contributes to ES success. We focus on the post-implementation phase of ES as part of the ES lifecycle. Our findings will facilitate future research directions and contribute to a better understanding of how the knowledge base can be integrated and how this integration leads to Enterprise System success.
Abstract:
We consider time-space fractional reaction-diffusion equations in two dimensions. This equation is obtained from the standard reaction-diffusion equation by replacing the first-order time derivative with the Caputo fractional derivative, and the second-order space derivatives with the fractional Laplacian. Using the matrix transfer technique proposed by Ilic, Liu, Turner and Anh [Fract. Calc. Appl. Anal., 9:333--349, 2006] and the numerical solution strategy used by Yang, Turner, Liu, and Ilic [SIAM J. Scientific Computing, 33:1159--1180, 2011], the solution of the time-space fractional reaction-diffusion equations in two dimensions can be written in terms of a matrix function vector product $f(A)b$ at each time step, where $A$ is an approximate matrix representation of the standard Laplacian. We use the finite volume method over unstructured triangular meshes to generate the matrix $A$, which is therefore non-symmetric. However, the standard Lanczos method for approximating $f(A)b$ requires that $A$ be symmetric. We propose a simple and novel transformation in which the standard Lanczos method is still applicable to find $f(A)b$, despite the loss of symmetry. Numerical results are presented to verify the accuracy and efficiency of our newly proposed numerical solution strategy.
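The paper's transformation is not spelled out in the abstract, but a common trick of this kind (assumed here purely for illustration) applies when the nonsymmetric $A$ is diagonally similar to a symmetric matrix — e.g. $A = D^{-1}S$ with $S$ symmetric and $D$ a positive diagonal, a structure finite-volume discretisations often have. Then $B = D^{1/2} A D^{-1/2}$ is symmetric, standard Lanczos applies to $B$, and $f(A)b = D^{-1/2} f(B) D^{1/2} b$. The sketch below uses $f = \exp$ as a stand-in for the actual time-stepping function.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_f_times_b(B, v, f, m=40):
    """Approximate f(B) @ v for symmetric B with m-step Lanczos:
    f(B) v ~ ||v|| * V_m f(T_m) e_1 (no reorthogonalization, for brevity)."""
    n = v.size
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = B @ V[:, j]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    e1 = np.zeros(m)
    e1[0] = 1.0
    return np.linalg.norm(v) * (V @ (f(T) @ e1))

# A nonsymmetric A with the assumed structure A = D^{-1} S,
# S symmetric negative semi-definite, D a positive diagonal.
rng = np.random.default_rng(2)
n = 200
G = rng.standard_normal((n, n))
S = -(G @ G.T) / n
d = rng.uniform(0.5, 2.0, n)        # diagonal of D
A = S / d[:, None]                  # D^{-1} S, non-symmetric

ds = np.sqrt(d)
B = A * ds[:, None] / ds[None, :]   # D^{1/2} A D^{-1/2}, symmetric
b = rng.standard_normal(n)

# f(A) b = D^{-1/2} f(B) D^{1/2} b, here with f = exp as a stand-in.
y = (1 / ds) * lanczos_f_times_b(B, ds * b, expm)
print("relative error vs dense expm:",
      np.linalg.norm(y - expm(A) @ b) / np.linalg.norm(expm(A) @ b))
```

For the fractional problem itself, $f$ would typically be a Mittag-Leffler-type function arising from the Caputo time discretisation rather than the exponential used here.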
Abstract:
17.1 Up until the 1990s, the methods used to teach law had evolved little since the first law schools were established in Australia. As Keyes and Johnstone observed: ‘In the traditional model, most teachers uncritically replicate the learning experiences that they had when students, which usually means that the dominant mode of instruction is reading lecture notes to large classes in which students are largely passive.’ Traditional legal education has been described in the following terms: ‘Traditionally law is taught through a series of lectures, with little or no student involvement, and a tutorial programme. Sometimes tutorials are referred to as seminars but the terminology used is often insignificant: both terms refer to probably the only form of student participation that takes place throughout these students’ academic legal education. The tutorial consists of analysing the answers, prepared in advanced (sic), to artificial Janet and John Doe problems or esoteric essay questions.’ The primary focus of traditional legal education is the transmission of content knowledge, more particularly the teaching of legal rules, especially those drawn from case law. This approach has a long pedigree: writing in 1883, Dicey proposed that ‘nothing can be taught to students of greater value, either intellectually or for the purposes of legal practice, than the habit of looking on the law as a series of rules’.