556 results for Repeated Calls
Abstract:
Production of recycled concrete aggregates (RCA) from construction and demolition (C&D) waste has become popular all over the world, since the land space available for disposal is limited. It is therefore important to seek alternative applications for RCA. The use of RCA in base and sub-base layers of granular pavements is a viable solution. In mechanistic pavement design, rutting (permanent deformation) is considered the major failure mechanism of the pavement. Rutting is the accumulation of permanent deformation in the pavement layers caused by repetitive vehicle loading. In Queensland, Australia, a maximum of 20% of reclaimed asphalt pavement (RAP) is accepted in RCA, and it is therefore important to investigate the effect of RAP on the permanent deformation properties of RCA. In this study, a series of repeated load triaxial (RLT) tests was conducted on RCA blended with different percentages of RAP to investigate the permanent deformation and resilient modulus properties of RCA. The vertical deformation and resilient modulus values were used to determine the response of RCA to cyclic loading under standard pressure and loading conditions.
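The two quantities reported from the RLT tests follow from a simple per-cycle calculation. The Python sketch below is a minimal illustration only, not the authors' analysis code: it assumes per-cycle records of applied deviator stress and of peak and residual axial strain, and computes the resilient modulus (deviator stress divided by the recoverable strain of the cycle) and the accumulated permanent strain (the residual strain remaining after unloading).

```python
import numpy as np

def rlt_summary(deviator_stress_kpa, peak_strain, residual_strain):
    """Per-cycle resilient modulus and cumulative permanent strain from repeated
    load triaxial (RLT) records.  All inputs are per-cycle arrays; strains are
    dimensionless and measured from the original specimen height."""
    sigma_d = np.asarray(deviator_stress_kpa, dtype=float)
    eps_peak = np.asarray(peak_strain, dtype=float)
    eps_residual = np.asarray(residual_strain, dtype=float)

    resilient_strain = eps_peak - eps_residual                    # recoverable part of each cycle
    resilient_modulus_mpa = sigma_d / resilient_strain / 1000.0   # kPa / strain -> MPa
    permanent_strain = eps_residual                               # accumulates cycle by cycle
    return resilient_modulus_mpa, permanent_strain
```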
Abstract:
Background The epidemiology of dengue in the South Pacific has been characterized by transmission of a single dominant serotype for 3–5 years, with subsequent replacement by another serotype. From 2001 to 2008 only DENV-1 was reported in the Pacific. In 2008, DENV-4 emerged and quickly displaced DENV-1 in the Pacific, except in New Caledonia (NC), where DENV-1 and DENV-4 co-circulated in 2008–2009. During 2012–2013, another DENV-1 outbreak occurred in NC, the third DENV-1 outbreak in a decade. Given that dengue is a serotype-specific immunizing infection, the recurrent outbreaks of a single serotype within a 10-year period were unexpected. Findings This study aimed to shed light on this phenomenon by examining the phylogenetic characteristics of the DENV-1 viruses in NC and other Pacific islands between 2001 and 2013. As a result, we have demonstrated that NC experienced introductions of viruses from both the Pacific (genotype IV) and South-east Asia (genotype I). Moreover, whereas genotypes IV and I were co-circulating at the beginning of 2012, we observed that from the second half of 2012, i.e. during the major DENV-1 outbreak, all analyzed viruses were genotype I, suggesting that a genotype switch occurred. Conclusions Repeated outbreaks of the same dengue serotype, as observed in NC, are uncommon in the Pacific islands. Why the earlier DENV-1 outbreaks did not induce sufficient herd immunity is unclear, and likely multifactorial, but the robust vector control program may have played a role by limiting transmission and thus maintaining a large susceptible pool in the population. Keywords: Dengue; Phylogeny; Genotype; Epidemics; New Caledonia
Abstract:
In the current market, extensive software development is taking place and the software industry is thriving. Major software companies have identified source code theft as a major threat to revenue. By inserting an identity-establishing watermark in the source code, a company can prove its ownership of the source code. In this paper, we propose a watermarking scheme for C/C++ source code that exploits the language's restrictions: if a function calls another function, the latter must be defined in the code before the former, unless function pre-declarations are used. We embed the watermark by imposing an ordering on the mutually independent functions through the introduction of bogus dependencies. Removing these dependencies to erase the watermark requires extensive manual intervention by the attacker, making the attack infeasible. The scheme is also secure against subtractive and additive attacks. Using our watermarking scheme, an n-bit watermark can be embedded in a program having n independent functions. The scheme is implemented on several sample codes and the resulting performance changes are analyzed.
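To make the ordering idea concrete, here is a small Python sketch. It is an illustration only, not the authors' exact encoding: the watermark value is mapped to a permutation of the mutually independent functions (via the Lehmer code), and the listed bogus call edges are the dependencies that would pin that definition order in C, since a caller must appear after its callee unless a pre-declaration is added. All function names are hypothetical.

```python
from math import factorial

def int_to_permutation(value, items):
    """Decode an integer watermark into a permutation of `items` (Lehmer code)."""
    items = list(items)
    assert value < factorial(len(items)), "watermark too large for this many functions"
    perm = []
    for i in range(len(items), 0, -1):
        idx, value = divmod(value, factorial(i - 1))
        perm.append(items.pop(idx))
    return perm

def permutation_to_int(perm, canonical):
    """Recover the watermark from the observed definition order of the functions."""
    canonical, value = list(canonical), 0
    for i, name in enumerate(perm):
        idx = canonical.index(name)
        value += idx * factorial(len(perm) - 1 - i)
        canonical.pop(idx)
    return value

def bogus_call_edges(perm):
    """(caller, callee) pairs that enforce the chosen order: each function makes a
    never-executed call to the one defined immediately before it."""
    return [(perm[i + 1], perm[i]) for i in range(len(perm) - 1)]

names = sorted(["alpha", "beta", "delta", "gamma"])   # canonical order of independent functions
order = int_to_permutation(13, names)                 # definition order carrying watermark value 13
edges = bogus_call_edges(order)                       # bogus calls to insert into the C source
assert permutation_to_int(order, names) == 13         # extraction recovers the embedded value
```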
Abstract:
Since the late 1980s there have been increasing calls around the world for embedding sustainability content throughout engineering curricula, particularly over the past decade. However, in general there has been little by way of strategic or systematic integration within programs offered by higher education institutions (HEIs). Responding to a growing awareness of the issues surrounding sustainability, a number of professional engineering institutions (PEIs) internationally have placed increasing emphasis on policies and initiatives relating to the role of engineering in addressing 21st-century challenges. This has resulted in some consideration of integrating sustainable development into engineering curricula, as envisaged by accreditation guidelines. This paper provides a global overview of such accreditation developments, highlights emerging sustainability competencies (or ‘graduate attributes’), and places these in the context of relevant PEI declarations, initiatives, policies, codes of ethics and guideline publications. The paper concludes by calling for urgent action by PEIs, including strategic accreditation initiatives that promote timely curriculum renewal towards EESD.
Abstract:
There is growing recognition of the interests and rights of individuals conceived through assisted reproductive technology using donated gametes to access information about their biological parentage. In Australia these rights vary between jurisdictions according to differing statutory provisions. In February 2011 the Senate's Legal and Constitutional Affairs References Committee published its report on Donor Conception Practices in Australia. The report recommended the development of a nationally consistent approach to donor conception and the enactment of legislation in those Australian jurisdictions that do not yet regulate donor conception by statute. This editorial reviews the Senate Committee report and its recommendations and supports calls for a nationally harmonised approach to donor conception in Australia.
Abstract:
Hunter argues that cognitive science models of human thinking explain how analogical reasoning and precedential reasoning operate in law. He offers an explanation of why various legal theories are so limited and calls for greater attention to what is actually happening when lawyers and judges reason, by analogy, with precedent.
Abstract:
The ineffectiveness of current design processes has been well studied and has resulted in widespread calls for the evolution and development of new management processes. Perhaps one problem is that with the advent of BIM we are moving from one stage to another without necessarily having resolved all the issues. CAD design technology, if well handled, could have significantly raised the level of quality and efficiency of current processes, but in practice this was not fully realized. Therefore, technology alone cannot solve all the problems, and the advent of BIM could result in a similar bottleneck. For a precise definition of the problem to be solved, we should start by understanding what the main current bottlenecks are that have yet to be overcome by either new technologies or management processes, and the impact of human-behavior-related issues that persist despite the advent of new technologies. The fragmented and dispersed nature of the AEC sector and the huge number of small organizations that comprise it are probably a major limiting factor. Several authors have addressed this issue, and more recently IDDS has been defined as the highest level of achievement. However, what has been written on IDDS describes a highly idealized target state; it is a holistic, utopian proposition intended to set the research agenda for moving towards that state. Key to IDDS is the framing of a new management model that should address the problems associated with key aspects: technology, processes, policies and people. One of the primary areas to be further studied is the process of collaborative work and understanding, together with the development of proposals to overcome the many cultural barriers that currently exist and impede the advance of new management methods. The purpose of this paper is to define and delimit the problems to be solved so that it is possible to implement a new management model for a collaborative design process.
Abstract:
The ineffectiveness of current design processes has been well studied and has resulted in widespread calls for the evolution and development of new management processes. Even following the advent of BIM, we continue to move from one stage to another without necessarily having resolved all the issues. CAD design technology, if well handled, could have significantly raised the level of quality and efficiency of current processes, but in practice this was not fully realized. Therefore, technology alone cannot solve all the problems, and the advent of BIM could result in a similar bottleneck. For a precise definition of the problem to be solved, we should start by understanding what the main current bottlenecks are that have yet to be overcome by either new technologies or management processes, and the impact of human-behaviour-related issues on the adoption and utilization of new technologies. The fragmented and dispersed nature of the AEC sector, and the huge number of small organizations that comprise it, are a major limiting factor. Several authors have addressed this issue, and more recently IDDS has been defined as the highest level of achievement. However, what has been written on IDDS describes a highly idealized target state; it is a holistic, utopian proposition intended to set the research agenda for moving towards that state. Key to IDDS is the framing of a new management model that should address the problems associated with key aspects: technology, processes, policies and people. One of the primary areas to be further studied is the process of collaborative work and understanding, together with the development of proposals to overcome the many cultural barriers that currently exist and impede the advance of new management methods. The purpose of this paper is to define and delimit the problems to be solved so that it is possible to implement a new management model for a collaborative design process.
Abstract:
Introduction Epithelial-to-mesenchymal transition (EMT) promotes cell migration and is important in metastasis. Cellular proliferation is often downregulated during EMT, and the reverse transition (MET) in metastases appears to be required for restoration of proliferation in secondary tumors. We studied the interplay between EMT and proliferation control by MYB in breast cancer cells. Methods MYB, ZEB1, and CDH1 expression levels were manipulated by lentiviral small-hairpin RNA (shRNA)-mediated knockdown/overexpression, and verified with Western blotting, immunocytochemistry, and qRT-PCR. Proliferation was assessed with bromodeoxyuridine pulse labeling and flow cytometry, and sulforhodamine B assays. EMT was induced with epidermal growth factor for 9 days or by exposure to hypoxia (1% oxygen) for up to 5 days, and assessed with qRT-PCR, cell morphology, and colony morphology. Protein expression in human breast cancers was assessed with immunohistochemistry. ZEB1-MYB promoter binding and repression were determined with a chromatin immunoprecipitation assay and a luciferase reporter assay, respectively. Student's paired t tests, Mann–Whitney tests, and repeated-measures two-way ANOVA determined statistical significance (P < 0.05). Results Parental PMC42-ET cells displayed higher expression of ZEB1 and lower expression of MYB than did the PMC42-LA epithelial variant. Knockdown of ZEB1 in PMC42-ET and MDA-MB-231 cells caused increased expression of MYB and a transition to a more epithelial phenotype, which in PMC42-ET cells was coupled with increased proliferation. Indeed, we observed an inverse relation between MYB and ZEB1 expression in two in vitro EMT cell models, in matched human breast tumors and lymph node metastases, and in human breast cancer cell lines. Knockdown of MYB in PMC42-LA cells (MYBsh-LA) led to morphologic changes and protein expression consistent with an EMT. ZEB1 expression was raised in MYBsh-LA cells and significantly repressed in MYB-overexpressing MDA-MB-231 cells, which also showed reduced random migration and a shift from mesenchymal to epithelial colony morphology in two-dimensional monolayer cultures. Finally, we detected binding of ZEB1 to the MYB promoter in PMC42-ET cells, and ZEB1 overexpression repressed MYB promoter activity. Conclusions This work identifies ZEB1 as a transcriptional repressor of MYB and suggests a reciprocal MYB-ZEB1 repressive relation, providing a mechanism through which proliferation and the epithelial phenotype may be coordinately modulated in breast cancer cells.
Abstract:
Background A population-based, cross-sectional telephone survey was conducted to estimate the penetrance and characteristics of contact lens wear in Australia. Methods Based on postcode distribution, 42,749 households around Australia were randomly selected from the national electronic telephone directory. During calls, the number of individuals and contact lens wearers aged between 15 and 64 years in each household was ascertained. Contact lens wearers were interviewed using a structured questionnaire to determine details of demographics, lens type, mode of lens wear and hygiene habits. Contact lens wear characteristics and habits were compared by lens type and mode of use. Results Of the 32,405 households contacted, 19,171 (59.2 per cent) agreed to participate. The penetrance of contact lens wear during the study period was 5.01 per cent (95% CI: 4.78-5.24). The mean age of lens wearers was 36.5 ± 18.3 years and 63.4 per cent were female. There were significant differences in the habits and characteristics of lens wearers depending on their lens type and mode of use. Conclusions The penetrance of contact lens wear concurs with market estimates and equates to approximately 680,000 contact lens wearers aged between 15 and 64 years in Australia. This is the most detailed and extensive population-based survey of contact lens wearers ever conducted. The discrepancies found between the characteristics of lens wearers surveyed in this study and those reported in previous studies of contact lens practitioners highlight the importance of study design. These results may be applied to other regions with similar health-care and regulatory systems.
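As a rough arithmetic check on the figures above (the population base is back-calculated here rather than stated in the abstract): a penetrance of 5.01 per cent corresponding to about 680,000 wearers implies an Australian population aged 15 to 64 of roughly 680,000 / 0.0501 ≈ 13.6 million at the time of the survey.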
Abstract:
Faunal vocalisations are vital indicators of environmental change, and their analysis can provide information for answering ecological questions. Automated species recognition in environmental recordings has therefore become a critical research area. This thesis presents an automated species recognition approach named Timed and Probabilistic Automata. A small lexicon for describing animal calls is defined, six algorithms for acoustic component detection are developed, and a series of species recognisers are built and evaluated. The presented approach yields a significant improvement in analysis performance on a real-world dataset and may be transferred to commercial software in the future.
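The abstract does not detail the Timed and Probabilistic Automata formalism, so the Python sketch below illustrates only the general idea of scoring a detected sequence of acoustic components against per-species probabilistic automata; the lexicon, species names, and transition probabilities are all invented for illustration.

```python
import math

# Hypothetical per-species automata: first-order transition probabilities between
# successive acoustic components drawn from a small lexicon (all values made up).
SPECIES_MODELS = {
    "eastern_whipbird": {
        ("START", "whip"): 0.9, ("whip", "crack"): 0.8, ("crack", "whip"): 0.7,
    },
    "generic_warbler": {
        ("START", "chirp"): 0.8, ("chirp", "trill"): 0.6, ("trill", "chirp"): 0.6,
    },
}

def log_likelihood(model, components, floor=1e-6):
    """Score a detected sequence of acoustic components under one species model;
    unseen transitions receive a small floor probability."""
    prev, score = "START", 0.0
    for comp in components:
        score += math.log(model.get((prev, comp), floor))
        prev = comp
    return score

def recognise(components):
    """Return the species whose automaton best explains the component sequence."""
    return max(SPECIES_MODELS, key=lambda sp: log_likelihood(SPECIES_MODELS[sp], components))

print(recognise(["whip", "crack", "whip", "crack"]))   # -> eastern_whipbird
```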
Abstract:
Introduction The consistency of measuring small field output factors is greatly increased by reporting the measured dosimetric field size of each factor, as opposed to simply stating the nominal field size [1], and this therefore requires the measurement of cross-axis profiles in a water tank. However, this makes output factor measurements time consuming. This project establishes at which field sizes the accuracy of output factors is not affected by the use of potentially inaccurate nominal field sizes, which we believe establishes a practical working definition of a ‘small’ field. The physical components of the radiation beam that contribute to the rapid change in output factor at small field sizes are examined in detail. The physical interaction that dominates the rapid dose reduction is quantified, leading to a theoretical definition of a ‘small’ field. Methods Current recommendations suggest that radiation collimation systems and isocentre-defining lasers should both be calibrated to permit a maximum positioning uncertainty of 1 mm [2]. The proposed practical definition for small field sizes is as follows: if the output factor changes by ±1.0 % given a change in either field size or detector position of up to ±1 mm, then the field should be considered small. Monte Carlo modelling was used to simulate output factors of a 6 MV photon beam for square fields with side lengths from 4.0 to 20.0 mm in 1.0 mm increments. The dose was scored in a 0.5 mm wide and 2.0 mm deep cylindrical volume of water within a cubic water phantom, at a depth of 5 cm and an SSD of 95 cm. The maximum difference due to a collimator error of ±1 mm was found by comparing the output factors of adjacent field sizes. The output factor simulations were repeated 1 mm off-axis to quantify the effect of detector misalignment. Further simulations separated the total output factor into a collimator scatter factor and a phantom scatter factor. The collimator scatter factor was further separated into primary source occlusion effects and ‘traditional’ effects (a combination of flattening filter and jaw scatter, etc.). The phantom scatter was separated into photon scatter and electronic disequilibrium. Each of these factors was plotted as a function of field size in order to quantify how each contributed to the change in output factor at small field sizes. Results The use of our practical definition resulted in field sizes of 15 mm or less being characterised as ‘small’. The change in field size had a greater effect than detector misalignment. For field sizes of 12 mm or less, electronic disequilibrium was found to cause the largest change in dose on the central axis (d = 5 cm). Source occlusion also caused a large change in output factor for field sizes less than 8 mm. Discussion and conclusions The measurement of cross-axis profiles is only required for output factor measurements at field sizes of 15 mm or less (for a 6 MV beam on a Varian iX linear accelerator). This is expected to be dependent on linear accelerator spot size and photon energy. While some electronic disequilibrium was shown to occur at field sizes as large as 30 mm (the ‘traditional’ definition of a small field [3]), it does not cause a greater change than photon scatter until a field size of 12 mm, at which point it becomes by far the most dominant effect.
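The practical definition stated in the Methods translates directly into a simple post-processing check. The Python sketch below is an illustration only: the output-factor curve used at the bottom is a smooth made-up placeholder, not measured or Monte Carlo data, and the function simply flags field sizes where a ±1 mm change in field size alters the output factor by more than 1 %.

```python
import numpy as np

def is_small_field(side_mm, output_factor, tol=0.01):
    """Flag fields as 'small' under the practical definition in the text: a field is
    small if a +/-1 mm change in field size changes the output factor by more than
    1 %.  Field sizes must be sampled in 1 mm increments so that adjacent entries
    correspond to the +/-1 mm shift."""
    side = np.asarray(side_mm, dtype=float)
    of = np.asarray(output_factor, dtype=float)
    assert np.allclose(np.diff(side), 1.0), "expects 1 mm field-size increments"
    flags = np.zeros(len(of), dtype=bool)
    for i in range(len(of)):
        below = of[max(i - 1, 0)]
        above = of[min(i + 1, len(of) - 1)]
        rel_change = max(abs(of[i] - below), abs(above - of[i])) / of[i]
        flags[i] = rel_change > tol
    return flags

# Toy curve for demonstration only -- NOT measured or simulated data.
side = np.arange(4.0, 21.0, 1.0)            # square field side length (mm)
toy_of = 1.0 - np.exp(-side / 6.0)          # smooth placeholder output factors
for s, flag in zip(side, is_small_field(side, toy_of)):
    print(f"{s:4.0f} mm  small={flag}")
```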
Abstract:
Introduction Due to their high spatial resolution, diodes are often used for small field relative output factor measurements. However, a field-size-specific correction factor [1] is required to correct for diode detector over-response at small field sizes. A recent Monte Carlo based study has shown that it is possible to design a diode detector that produces measured relative output factors equivalent to those in water. This is accomplished by introducing an air gap at the upstream end of the diode [2]. The aim of this study was to physically construct this diode by placing an ‘air cap’ on the end of a commercially available diode (the PTW 60016 electron diode). The output factors subsequently measured with the new diode design were compared to current benchmark small field output factor measurements. Methods A water-tight ‘cap’ was constructed so that it could be placed over the upstream end of the diode. The cap could be offset from the end of the diode, thus creating an air gap. The air gap width was the same as the diode width (7 mm) and the thickness of the air gap could be varied. Output factor measurements were made using square field sizes of side length from 5 to 50 mm, using a 6 MV photon beam. The set of output factor measurements was repeated with the air gap thickness set to 0, 0.5, 1.0 and 1.5 mm. The optimal air gap thickness was found in a similar manner to that proposed by Charles et al. [2]. An IBA stereotactic field diode, corrected using Monte Carlo calculated kQclin,Qmsr values [3], was used as the gold standard. Results The optimal air gap thickness required for the PTW 60016 electron diode was 1.0 mm. This was close to the Monte Carlo predicted value of 1.15 mm [2]. The sensitivity of the new diode design was independent of field size (kQclin,Qmsr = 1.000 at all field sizes) to within 1 %. Discussion and conclusions The work of Charles et al. [2] has been proven experimentally. An existing commercial diode has been converted into a correction-less small field diode by the simple addition of an ‘air cap’. The method of applying a cap to create the new diode makes the diode dual purpose, as without the cap it is still an unmodified electron diode.
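One way to frame the search for the optimal air-gap thickness is to pick the cap for which the ratio of diode to reference output factors is flattest across field sizes. The Python sketch below is a generic illustration of that selection; the data structures and names are assumptions, and no values from the study are reproduced.

```python
import numpy as np

def field_size_dependence(diode_of, reference_of):
    """Maximum relative deviation of the diode/reference output-factor ratio from its
    own mean across field sizes; a perfectly 'correction-less' diode gives zero."""
    ratio = np.asarray(diode_of, dtype=float) / np.asarray(reference_of, dtype=float)
    return float(np.max(np.abs(ratio / ratio.mean() - 1.0)))

def optimal_air_gap(diode_of_by_gap_mm, reference_of):
    """diode_of_by_gap_mm maps air-gap thickness (mm) to the output factors measured
    with that cap fitted (same field sizes as reference_of); returns the thickness
    whose response is flattest relative to the reference detector."""
    return min(diode_of_by_gap_mm,
               key=lambda gap: field_size_dependence(diode_of_by_gap_mm[gap], reference_of))

# Usage (arrays come from measurements at matching field sizes), e.g.:
#   optimal_air_gap({0.0: of_0, 0.5: of_05, 1.0: of_10, 1.5: of_15}, reference_of)
```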
Abstract:
OBJECTIVE To assess the concurrent validity of fasting indexes of insulin sensitivity and secretion in obese prepubertal (Tanner stage 1) children and pubertal (Tanner stages 2-5) adolescents, using the frequently sampled intravenous glucose tolerance test (FSIVGTT) as a criterion measure. RESEARCH DESIGN AND METHODS Eighteen obese children and adolescents (11 girls and 7 boys, mean age 12.2 ± 2.4 years, mean BMI 35.4 ± 6.2 kg/m², mean BMI-SDS 3.5 ± 0.5, 7 prepubertal and 11 pubertal) participated in the study. All participants underwent an insulin-modified FSIVGTT on two occasions, and 15 repeated this test a third time (mean 12.9 and 12.0 weeks apart). Si measured by the FSIVGTT was compared with the homeostasis model assessment of insulin resistance (HOMA-IR), quantitative insulin-sensitivity check index (QUICKI), fasting glucose-to-insulin ratio (FGIR), and fasting insulin (estimates of insulin sensitivity derived from fasting samples). The acute insulin response (AIR) measured by the FSIVGTT was compared with HOMA of percent beta-cell function (HOMA-β%), FGIR, and fasting insulin (estimates of insulin secretion derived from fasting samples). RESULTS There was a significant negative correlation between HOMA-IR and Si (r = -0.89, r = -0.90, and r = -0.81, P < 0.01) and a significant positive correlation between QUICKI and Si (r = 0.89, r = 0.90, and r = 0.81, P < 0.01) at each time point. There was a significant positive correlation between FGIR and Si (r = 0.91, r = 0.91, and r = 0.82, P < 0.01) and a significant negative correlation between fasting insulin and Si (r = -0.90, r = -0.90, and r = -0.88, P < 0.01). HOMA-β% was not as strongly correlated with AIR (r = 0.60, r = 0.54, and r = 0.61, P < 0.05). CONCLUSIONS HOMA-IR, QUICKI, FGIR, and fasting insulin correlate strongly with Si assessed by the FSIVGTT in obese children and adolescents. Correlations of HOMA-β%, FGIR, and fasting insulin with AIR were not as strong. Indexes derived from fasting samples are a valid tool for assessing insulin sensitivity in prepubertal and pubertal obese children.
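For reference, the fasting surrogate indexes named above can each be computed from a single fasting glucose-insulin pair. The Python sketch below uses the standard published formulations of HOMA-IR, HOMA-β%, QUICKI and FGIR; unit conventions are noted in the comments, the example values on the last line are arbitrary, and the paper's exact implementation may differ in detail.

```python
import math

def fasting_indexes(glucose_mmol_l, insulin_uu_ml):
    """Standard fasting surrogate indexes of insulin sensitivity and secretion.

    glucose_mmol_l : fasting plasma glucose in mmol/L
    insulin_uu_ml  : fasting plasma insulin in microU/mL
    """
    glucose_mg_dl = glucose_mmol_l * 18.0  # unit conversion used by QUICKI and FGIR
    return {
        # HOMA-IR: higher values indicate greater insulin resistance
        "HOMA-IR": (glucose_mmol_l * insulin_uu_ml) / 22.5,
        # HOMA-beta%: surrogate of beta-cell function (defined for glucose > 3.5 mmol/L)
        "HOMA-beta%": (20.0 * insulin_uu_ml) / (glucose_mmol_l - 3.5),
        # QUICKI: higher values indicate greater insulin sensitivity
        "QUICKI": 1.0 / (math.log10(insulin_uu_ml) + math.log10(glucose_mg_dl)),
        # FGIR: fasting glucose-to-insulin ratio (glucose in mg/dL)
        "FGIR": glucose_mg_dl / insulin_uu_ml,
    }

print(fasting_indexes(glucose_mmol_l=5.0, insulin_uu_ml=20.0))  # arbitrary example values
```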
Abstract:
The aim of this study was to investigate adolescents' potential reactivity and tampering while wearing pedometers, by comparing different monitoring protocols with accelerometer output. The sample included adolescents (N=123, age range=14-15 years) from three secondary schools in New South Wales, Australia. Schools were randomised to one of three pedometer monitoring protocols: (i) daily sealed (DS) pedometer group, (ii) unsealed (US) pedometer group or (iii) weekly sealed (WS) pedometer group. Participants wore pedometers (Yamax Digi-Walker CW700, Yamax Corporation, Kumamoto City, Japan) and accelerometers (Actigraph GT3X+, Pensacola, USA) simultaneously for seven days. Repeated measures analysis of variance was used to examine potential reactivity. Bivariate correlations between step counts and accelerometer output were calculated to explore potential tampering. The correlation between accelerometer output and pedometer steps/day was strongest among participants in the WS group (r=0.82, P ≤ 0.001), compared with the US (r=0.63, P ≤ 0.001) and DS (r=0.16, P=0.324) groups. The DS (P ≤ 0.001) and US (P=0.003) groups, but not the WS group (P=0.891), showed evidence of reactivity. The results suggest that reactivity and tampering do occur in adolescents and that, contrary to existing research, pedometer monitoring protocols may influence participant behaviour.
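The tampering analysis described above reduces to a per-group correlation between the two devices. The Python sketch below is a generic illustration: the group labels follow the text, but the arrays themselves would come from the seven-day monitoring data and are not reproduced here.

```python
import numpy as np
from scipy.stats import pearsonr

def per_group_correlation(steps_by_group, accel_by_group):
    """Pearson correlation between mean pedometer steps/day and accelerometer output,
    computed separately for each monitoring protocol group.

    Both arguments map a group label ('DS', 'US', 'WS') to equal-length
    per-participant arrays."""
    results = {}
    for group, steps in steps_by_group.items():
        r, p = pearsonr(np.asarray(steps, dtype=float),
                        np.asarray(accel_by_group[group], dtype=float))
        results[group] = {"r": float(r), "p": float(p)}
    return results
```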