Abstract:
Education in the 21st century demands a model for understanding a new culture of learning in the face of rapid change, open-access data and geographical diversity. Teachers no longer need to provide the latest information because students themselves are taking an active role in peer collectives that help create it. This paper examines, through an Australian case study entitled ‘Design Minds’, the development of an online design education platform as a key initiative to enact a government priority for state-wide cultural change through a design-based curriculum. Utilising digital technology to create a supportive community, ‘Design Minds’ recognises that interdisciplinary learning fostered through engagement will empower future citizens to think, innovate, and discover. This paper details the participatory design process undertaken with multiple stakeholders to create the platform. It also outlines a proposed research agenda for future measurement of its value in creating a new learning culture, supporting regional and remote communities, and revitalising frontline services. It is anticipated that this research will inform ongoing development of the online platform, as well as future design education and research programs in K-12 schools in Australia.
Abstract:
Background: Ultraviolet radiation exposure during an individual's lifetime is a known risk factor for the development of skin cancer. However, less evidence is available on the relationship between lifetime sun exposure and skin damage and skin aging. Objectives: This study aims to assess the relationship between lifetime sun exposure and skin damage and skin aging using a non-invasive measure of exposure. Methods: We recruited 180 participants (73 males, 107 females) aged 18-83 years. Skin hyper-pigmentation (skin damage) and skin wrinkling (skin aging) in the facial region were measured by digital imaging. Lifetime sun exposure (in hours) was calculated from the participant's age multiplied by the estimated annual time outdoors for each year of life. We analyzed the effects of lifetime sun exposure on skin damage and skin aging, adjusting for the influence of age, sex, occupation, history of skin cancer, eye color, hair color, and skin color. Results: There were non-linear relationships between lifetime sun exposure and both skin damage and skin aging. Younger participants' skin was much more sensitive to sun exposure than that of participants over 50 years of age; accordingly, there were negative interactions between lifetime sun exposure and age. Age had linear effects on skin damage and skin aging. Conclusion: The data presented show that self-reported lifetime sun exposure was positively associated with skin damage and skin aging, particularly among younger people. Future health promotion on sun exposure should target this group with skin cancer prevention messaging. (C) 2012 Elsevier B.V. All rights reserved.
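The exposure measure described above is a simple cumulative quantity. A minimal sketch of how it could be computed, assuming a constant annual estimate of hours outdoors (the function name and inputs are illustrative, not the study's instrument):

```python
def lifetime_sun_exposure_hours(age_years, annual_hours_outdoors):
    """Lifetime sun exposure in hours: age multiplied by the estimated
    annual time outdoors. A per-year list of estimates could be summed
    instead if outdoor time varied over the life course."""
    return age_years * annual_hours_outdoors

# Example: roughly 700 h/year outdoors over 40 years -> 28000 hours
exposure = lifetime_sun_exposure_hours(40, 700)
```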
Abstract:
In recent years considerable attention has been paid to the numerical solution of stochastic ordinary differential equations (SODEs), as SODEs are often more appropriate than their deterministic counterparts in many modelling situations. However, unlike the deterministic case, numerical methods for SODEs are considerably less sophisticated due to the difficulty in representing the (possibly large number of) random variable approximations to the stochastic integrals. Although Burrage and Burrage [High strong order explicit Runge-Kutta methods for stochastic ordinary differential equations, Applied Numerical Mathematics 22 (1996) 81-101] were able to construct strong local order 1.5 stochastic Runge-Kutta methods for certain cases, it is known that all extant stochastic Runge-Kutta methods suffer an order reduction down to strong order 0.5 if there is non-commutativity between the functions associated with the multiple Wiener processes. This order reduction down to that of the Euler-Maruyama method imposes severe difficulties in obtaining meaningful solutions in a reasonable time frame, and this paper attempts to circumvent these difficulties with some new techniques. An additional difficulty in solving SODEs arises even in the linear case, since it is not possible to write the solution analytically in terms of matrix exponentials unless there is a commutativity property between the functions associated with the multiple Wiener processes. Thus in the present paper, first, the work of Magnus [On the exponential solution of differential equations for a linear operator, Communications on Pure and Applied Mathematics 7 (1954) 649-673] (applied to deterministic non-commutative linear problems) will be applied to non-commutative linear SODEs, and methods of strong order 1.5 for arbitrary, linear, non-commutative SODE systems will be constructed, hence giving an accurate approximation to the general linear problem. Secondly, for general nonlinear non-commutative systems with an arbitrary number (d) of Wiener processes, it is shown that strong local order 1 Runge-Kutta methods with d + 1 stages can be constructed by evaluating a set of Lie brackets as well as the standard function evaluations. A method is then constructed which can be efficiently implemented in a parallel environment for this arbitrary number of Wiener processes. Finally, some numerical results are presented which illustrate the efficacy of these approaches. (C) 1999 Elsevier Science B.V. All rights reserved.
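For reference, the Euler-Maruyama scheme to which the order reduction descends is easy to state. A minimal sketch for a system dX = f(X) dt + Σ_j g_j(X) dW_j with d Wiener processes (function and variable names are illustrative):

```python
import numpy as np

def euler_maruyama(f, g, x0, t_end, n_steps, seed=0):
    """Strong order 0.5 baseline for dX = f(X) dt + sum_j g[j](X) dW_j,
    where g is a list of d diffusion functions."""
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=len(g))  # Wiener increments
        x = x + f(x) * dt + sum(gj(x) * dWj for gj, dWj in zip(g, dW))
    return x
```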
Abstract:
In many modeling situations in which parameter values can only be estimated or are subject to noise, the appropriate mathematical representation is a stochastic ordinary differential equation (SODE). However, unlike the deterministic case, for which there are suites of sophisticated numerical methods, numerical methods for SODEs are much less developed. Until a recent paper by K. Burrage and P.M. Burrage (1996), the highest strong order of a stochastic Runge-Kutta method was one. But K. Burrage and P.M. Burrage (1996) showed that by including additional random variable terms representing approximations to the higher order Stratonovich (or Ito) integrals, higher order methods could be constructed. However, this analysis applied only to the one Wiener process case. In this paper, it will be shown that in the multiple Wiener process case all known stochastic Runge-Kutta methods can suffer a severe order reduction if there is non-commutativity between the functions associated with the Wiener processes. Importantly, however, it is also suggested how this order can be repaired if certain commutator operators are included in the Runge-Kutta formulation. (C) 1998 Elsevier Science B.V. and IMACS. All rights reserved.
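The commutativity condition at issue is that the diffusion vector fields commute, i.e. their Lie brackets vanish. A minimal numerical check under that reading, using finite-difference Jacobians (names are illustrative):

```python
import numpy as np

def jacobian(g, x, eps=1e-6):
    """Central finite-difference Jacobian of a vector field g at x."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((len(g(x)), len(x)))
    for i in range(len(x)):
        e = np.zeros(len(x))
        e[i] = eps
        J[:, i] = (g(x + e) - g(x - e)) / (2 * eps)
    return J

def lie_bracket(gj, gk, x):
    """[g_j, g_k](x) = J_{g_k}(x) g_j(x) - J_{g_j}(x) g_k(x).
    If this vanishes for all pairs j, k, the SODE is commutative
    and the order reduction from non-commutativity does not occur."""
    return jacobian(gk, x) @ gj(x) - jacobian(gj, x) @ gk(x)
```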
Abstract:
Series reactors are used in distribution grids to reduce the short-circuit fault level. Among the disadvantages of these devices are the voltage drop produced across the reactor and the steep front rise of the transient recovery voltage (TRV), which generally exceeds the rating of the associated circuit breaker. Simulations were performed to compare the characteristics of a saturated core High-Temperature Superconducting Fault Current Limiter (HTS FCL) and a series reactor. The design of the HTS FCL was optimized using an evolutionary algorithm, and the resulting Pareto frontier of optimal solutions is presented in this paper. The results show that the steady-state impedance of an HTS FCL is significantly lower than that of a series reactor for the same level of fault current limiting. Tests performed on a prototype 11 kV HTS FCL confirm the theoretical results. The respective TRVs of the HTS FCL and an air core reactor of comparable fault current limiting capability are also determined. The results show that the saturated core HTS FCL has a significantly lower effect on the rate of rise of the circuit breaker TRV as compared to the air core reactor. The simulation results are validated against short-circuit test results.
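The Pareto frontier mentioned above is the set of non-dominated designs under competing objectives (for an FCL, e.g., minimising steady-state impedance and fault let-through current; these objectives are illustrative, not necessarily those used in the paper). A minimal sketch of extracting that set:

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated rows of `costs`, where each row is one
    candidate design and each column an objective to be minimised."""
    costs = np.asarray(costs)
    front = []
    for i, c in enumerate(costs):
        # c is dominated if some other row is <= in every objective
        # and strictly < in at least one
        dominated = np.any(np.all(costs <= c, axis=1) & np.any(costs < c, axis=1))
        if not dominated:
            front.append(i)
    return front
```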
Abstract:
Internet services are an important part of daily activities for most of us. These services come with sophisticated authentication requirements which may not be handled well by average Internet users. The management of secure passwords, for example, creates extra overhead which is often neglected for usability reasons. Furthermore, password-based approaches are applicable only for initial logins and do not protect against unlocked workstation attacks. In this paper, we provide a non-intrusive identity verification scheme based on behavioral biometrics, where keystroke dynamics based on free text are used continuously to verify the identity of a user in real time. We improve existing keystroke dynamics based verification schemes in four aspects. First, we improve scalability by using a constant number of users, instead of the whole user space, to verify the identity of the target user. Second, we provide an adaptive user model which enables our solution to take changes in user behavior into consideration in the verification decision. Third, we identify a new distance measure which enables us to verify the identity of a user with shorter text. Fourth, we decrease the number of false results. Our solution is evaluated on a data set collected from users while they interacted with their mailboxes during their daily activities.
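The paper's new distance measure is not spelled out in the abstract, but a common baseline in free-text keystroke dynamics compares digraph (key-pair) latency profiles between a reference sample and a test sample. A minimal sketch under that assumption (all names illustrative):

```python
def digraph_latencies(keys, times_ms):
    """Group flight times (ms between consecutive key presses) by digraph."""
    profile = {}
    for i in range(len(keys) - 1):
        profile.setdefault(keys[i] + keys[i + 1], []).append(
            times_ms[i + 1] - times_ms[i])
    return profile

def profile_distance(ref, test):
    """Mean absolute difference of average latencies over shared digraphs;
    smaller values suggest the same typist."""
    shared = set(ref) & set(test)
    if not shared:
        return float("inf")
    mean = lambda v: sum(v) / len(v)
    return sum(abs(mean(ref[d]) - mean(test[d])) for d in shared) / len(shared)
```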
Abstract:
There are different ways to authenticate humans, which is an essential prerequisite for access control. The authentication process can be subdivided into three categories that rely on something someone i) knows (e.g. a password), and/or ii) has (e.g. a smart card), and/or iii) is (biometric features). Besides classical attacks on password solutions and the risk that identity-related objects can be stolen, traditional biometric solutions have their own disadvantages, such as the requirement for expensive devices, the risk of stolen bio-templates, etc. Moreover, existing approaches usually perform the authentication process only once, at the start of a session. Non-intrusive and continuous monitoring of user activities emerges as a promising solution for hardening the authentication process, extending the third category: iii-2) how someone behaves. In recent years various keystroke dynamics behavior-based approaches have been published that are able to authenticate humans based on their typing behavior. The majority focus on so-called static text approaches, where users are requested to type a previously defined text. Relatively few techniques are based on free text approaches that allow transparent monitoring of user activities and provide continuous verification. Unfortunately, only a few solutions are deployable in application environments under realistic conditions; unsolved problems include scalability, high response times and high error rates. The aim of this work is the development of behavior-based verification solutions. Our main requirement is to deploy these solutions under realistic conditions within existing environments in order to enable transparent, free text based continuous verification of active users with low error rates and response times.
Abstract:
This paper examines the relationship between financial performance and ethical screening intensity of a special class of ethical funds that is rooted in Islamic values – Islamic equity funds (IEFs). These faith-based ethical funds screen investments for compliance with Islamic values, under which conventional interest expense (riba), gambling (maysir), excessive uncertainty (gharar), and non-ethical (non-halal) products are prohibited. We test whether these extra screens affect the financial performance of IEFs relative to non-Islamic funds. Based on a large survivorship-free international sample of 387 Islamic funds, our results show that IEFs on average underperform conventional funds by 40 basis points per month, or 4.8% per year (supporting the underperformance hypothesis). While Islamic funds do not generally perform better during crisis periods, they outperformed conventional funds during the recent sub-prime crisis (supporting the outperformance hypothesis). Using holdings-based measures of ethical screening intensity, we find that IEFs that apply more intensive screening perform worse, suggesting that there is a cost to being ethical.
Abstract:
Past work has clearly demonstrated that numerous commonly used metallic materials will support burning in oxygen, especially at higher pressures. An approach to rectify this significant safety problem has been successfully developed and implemented by applying the concept of Situational Non-Flammability. This approach essentially removes or breaks one leg of the conceptual fire triangle, a tool commonly used to define the three things required to support burning: a fuel, an ignition source and an oxidiser. Since an oxidiser is always present in an oxygen system, as are ignition sources, the concept of Situational Non-Flammability essentially removes the fuel leg of the fire triangle by utilising only materials that will not burn at the maximum pressure at which, for example, the control valve is to be used. The utilisation of this approach has led to the development of a range of oxygen components that are practically unable to burn while in service at their design pressure, thus providing an unparalleled level of fire safety while not compromising the performance or endurance required in the function of these components. This paper describes the concept of Situational Non-Flammability, how it was used to theoretically evaluate designs of components for oxygen service, and the outcomes of the actual development, fabrication and, finally, utilisation of these components in real oxygen systems in a range of flow control devices.
Abstract:
An analytical method for the detection of carbonaceous gases by a non-dispersive infrared (NDIR) sensor has been developed. Calibration plots for six carbonaceous gases (CO2, CH4, CO, C2H2, C2H4 and C2H6) were obtained and the reproducibility was determined to verify the feasibility of this gas monitoring method. The results show that the squared correlation coefficients for the six gas measurements are greater than 0.999. The reproducibility is excellent, indicating that this analytical method is useful for determining the concentrations of carbonaceous gases.
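The squared correlation coefficient quoted above measures how well a linear calibration curve fits the measured responses. A minimal sketch of fitting such a curve and computing R² (variable names illustrative):

```python
import numpy as np

def calibration_fit(concentration, response):
    """Fit response = slope * concentration + intercept and return
    (slope, intercept, R^2) for the calibration plot."""
    concentration = np.asarray(concentration, dtype=float)
    response = np.asarray(response, dtype=float)
    slope, intercept = np.polyfit(concentration, response, 1)
    predicted = slope * concentration + intercept
    ss_res = np.sum((response - predicted) ** 2)
    ss_tot = np.sum((response - response.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```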
Abstract:
The decision of the New South Wales Supreme Court in Boyce v McIntyre [2008] NSWSC 1218 involved the determination of a number of issues relating to an assessment of costs under the Legal Profession Act 2004 (NSW). The issue of broad significance was whether a non-associated third party payer must pay the fixed fee that was agreed between the law practice and the client. The court found that the client agreement did not form the basis of assessing costs for the non-associated third party payer.
Abstract:
The conventional mechanical properties of articular cartilage, such as compressive stiffness, have been demonstrated to be limited in their capacity to distinguish intact (visually normal) from degraded cartilage samples. In this paper, we explore the correlation between a new mechanical parameter, namely the reswelling of articular cartilage following unloading from a given compressive load, and the near infrared (NIR) spectrum. The capacity to distinguish mechanically intact from proteoglycan-depleted tissue via the "reswelling" characteristic was first established, and the result was subsequently correlated with the NIR spectral data of the respective tissue samples. To achieve this, normal intact and enzymatically degraded samples were subjected to both NIR probing and mechanical compression based on a load-unload-reswelling protocol. The parameter δ(r), characteristic of the osmotic "reswelling" of the matrix after unloading to a constant small load of the order of the osmotic pressure of cartilage, was obtained for the different sample types. Multivariate statistical analysis was employed to determine the degree of correlation between δ(r) and the NIR absorption spectrum of the relevant specimens using Partial Least Squares (PLS) regression. The results show a strong relationship (R² = 95.89%, p < 0.0001) between the spectral data and δ(r). This correlation of δ(r) with NIR spectral data suggests the potential for determining the reswelling characteristics non-destructively. It was also observed that δ(r) values bear a significant relationship to cartilage matrix integrity, indicated by proteoglycan content, and can therefore differentiate between normal and artificially degraded proteoglycan-depleted cartilage samples. It is therefore argued that the reswelling of cartilage, which is both biochemical (osmotic) and mechanical (hydrostatic pressure) in origin, could be a strong candidate for characterizing the tissue, especially in regions surrounding focal cartilage defects in joints.
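PLS regression relates a high-dimensional spectral matrix to a scalar response through a small number of latent components. A minimal sketch of this kind of analysis, using scikit-learn and synthetic stand-in data (shapes and names are illustrative, not the study's data):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: NIR absorbance spectra (samples x wavelengths); y: reswelling delta_r
rng = np.random.default_rng(0)
X = rng.random((40, 300))
y = rng.random(40)

pls = PLSRegression(n_components=5)  # latent variables, tuned by validation
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
r2 = 1.0 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"cross-validated R^2: {r2:.3f}")
```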
Abstract:
In Deppro Pty Ltd v Hannah [2008] QSC 193 one of the matters considered by the court related to the requirement in r 243 of the Uniform Civil Procedure Rules 1999 (Qld) that a notice of non-party disclosure must "state the allegation in issue in the pleadings about which the document sought is directly relevant." The approach adopted by the issuing party in this case, of asserting that documents sought by a notice of non-party disclosure are relevant to allegations in numbered paragraphs in pleadings and serving copies of the pleadings with the notice, is not uncommon in practice. This decision makes it clear that the practice is fraught with danger. In circumstances where it is not apparent that the non-party has been fully apprised of the relevant issues, the decision suggests that an applicant for non-party disclosure who has not complied with the requirements of r 243 might be required to issue a fresh, fully compliant notice, and to suffer the associated costs consequences.
Abstract:
Trajectory basis Non-Rigid Structure From Motion (NRSFM) currently faces two problems: the limit of reconstructability and the need to tune the basis size for different sequences. This paper provides a novel theoretical bound on 3D reconstruction error, arguing that the existing definition of reconstructability is fundamentally flawed in that it fails to consider the conditioning of the system. This insight motivates a novel strategy whereby the trajectory's response to a set of high-pass filters is minimised. The new approach eliminates the need to tune the basis size and is more efficient for long sequences. Additionally, the truncated DCT basis is shown to have a dual interpretation as a high-pass filter. The success of trajectory filter reconstruction is demonstrated quantitatively on synthetic projections of real motion capture sequences and qualitatively on real image sequences.
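To make the truncated DCT basis concrete: each coordinate of a point's trajectory over F frames is modelled as a combination of the first k low-frequency DCT vectors, so truncation discards high-frequency components (the complement of the high-pass view mentioned above). A minimal sketch, with illustrative names:

```python
import numpy as np
from scipy.fft import idct

def truncated_dct_basis(n_frames, k):
    """First k orthonormal DCT-II basis vectors as columns (n_frames x k)."""
    basis = np.zeros((n_frames, k))
    for j in range(k):
        e = np.zeros(n_frames)
        e[j] = 1.0
        basis[:, j] = idct(e, norm="ortho")
    return basis

def smooth_trajectory(traj, k):
    """Project a per-coordinate trajectory onto the truncated basis; the
    reconstruction keeps only the k lowest frequencies."""
    B = truncated_dct_basis(len(traj), k)
    return B @ (B.T @ traj)  # orthonormal columns: B.T @ traj = coefficients
```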