956 results for "Blocks bootstrap"
Abstract:
A. Background and context

1. Education, particularly basic education (grades 1-9), has been considered critical to promoting national economic growth and social well-being1. Three factors contribute to this: (i) education increases the human capital inherent in a labor force and thus increases productivity; it also increases the capacity for working with others and builds community consensus to support national development; (ii) education can increase the innovative capacity of a community to support social and economic growth through the use of new technologies, products and services that promote growth and well-being; (iii) education can facilitate the knowledge transfer needed to understand social and economic innovations and new processes, practices and values. Cognizant of these benefits of education, the Millennium Development Goals (MDG) and the Education for All (EFA) declarations advocating universal basic education were formulated and ratified by UN member countries.

2. Achieving universal primary education (grade 6) may not be sufficient to maximize the socio-economic and cultural benefits noted above. There is general consensus that basic literacy and numeracy up to grade 9 are essential foundational blocks for any good education system to support national development. Basic education provides an educational achievement threshold that ensures the learning is retained. To achieve this, donor-partner-led interventions and UN declarations such as the MDG goals have sought universal access to basic education (grades 1-9). As many countries progress towards the universal access targets, recent research evidence suggests that more than access to basic education is needed to influence national development: measures of the basic education completion cycle, gross enrolment rate (GER), participation rate and the like must now include a focus on the quality and relevance of education2.

3.
While the above research finding is generally accepted by the Government of Indonesia (GoI), unlike many other developing countries, Indonesia is geographically and linguistically complex and has the fourth largest education sector in the world. It has over 3000 islands and 17,000 ethnic groups; travel from the east to the west of the country can take as long as 7 hours and crosses multiple time zones. The education system has six years of primary education (grades 1-6), three years of junior secondary education (grades 7-9) and three years of senior secondary education (grades 10-12). Applying the findings of the above-cited research in a country like Indonesia is therefore a challenge. Nevertheless, since the adoption of the National Education Law (2003)3, the GoI has made significant progress in improving access to and the quality of basic education (grades 1-9). The 2011/12 national education statistics show that the primary education (grades 1-6) completion rate was 99.3%, the net enrolment rate (NER) was 95.4% and the GER was 115.4%. This is a significant achievement given the complexities faced within Indonesia. The gains in the primary education sub-sector, however, have not flowed on to junior secondary school (JSS) education. The transition from primary to JSS still falls short of GoI targets. In 2012, there were 146,826 primary schools feeding into 33,668 junior secondary schools, and the transition rate from primary to secondary in 2011/12 was 78%. At district or sub-district level, the transition rate in poor districts can be lower than the aggregated national rate. Poverty and parents' lack of education, compounded by opportunity cost, are major obstacles to transitioning to JSS4.

4. Table 1 presents a summary of GoI initiatives to accelerate the transition to JSS. GoI, with assistance from the donor community, has built 2465 new regular JSS, making the total number of regular JSS 33,668.
In addition, 57,825 new classrooms have been added to existing regular JSS. Also, in rural and remote areas, 4136 Satu-Atap5 (SATAP) schools were built to increase access to JSS. These SATAP schools are the focus of this study, as they provide education opportunities to the most marginalized rural and remote children who otherwise would not have access to JSS and consequently would not complete basic education.
Abstract:
"Contemporary society is in the midst of the boundless generation and collection of data, data that is produced from almost any measurable act. Be it weather or transport data sets published by government agencies, or the individual and interpersonal data generated by our digital interactions, a server somewhere is collating. With the rise of this digital data phenomenon come questions of comprehension, purpose, ownership and translation. Without mediation, digital data is an immense abstract list of text and numbers, and in this abstracted form data sets become detached from the circumstances of their creation. Artists and digital creatives are building works from these constantly evolving data sets to develop a discourse that investigates, appropriates, reveals and reflects upon the society and environment that generates this medium. Datascape presents a range of works that use data as building blocks to facilitate connections and understanding around a range of personal, social and worldly issues. The exhibition is concerned with creating an opportunity for experiential discovery through engaging with work from some of the world's prominent creatives in this field of practice. Utilising three thematic lenses, Generative Currents, the Anti-Sublime and the Human Context, the works offer a variety of pathways to traverse the Datascape. Lubi Thomas and Rachael Parsons, QUT Creative Industries Precinct"
Abstract:
A travel article about Amsterdam. BY COINCIDENCE, I flew to Amsterdam a week after I'd read Ian McEwan's novel of the same name. Amsterdam is a modern take on the theme of duelling and, in many ways, he couldn't have chosen a more appropriate place for his title. This is a city that duels with itself. I flew in at dawn, traditionally the moment to test your abilities at 10 paces. The countryside below was dark but blocks of orange light pulsed in the fields...
Abstract:
Land-use regression (LUR) is a technique that can improve the accuracy of air pollution exposure assessment in epidemiological studies. Most LUR models are developed for single cities, which places limitations on their applicability to other locations. We sought to develop a model to predict nitrogen dioxide (NO2) concentrations with national coverage of Australia by using satellite observations of tropospheric NO2 columns combined with other predictor variables. We used a generalised estimating equation (GEE) model to predict annual and monthly average ambient NO2 concentrations measured by a national monitoring network from 2006 through 2011. The best annual model explained 81% of spatial variation in NO2 (absolute RMS error=1.4 ppb), while the best monthly model explained 76% (absolute RMS error=1.9 ppb). We applied our models to predict NO2 concentrations at the ~350,000 census mesh blocks across the country (a mesh block is the smallest spatial unit in the Australian census). National population-weighted average concentrations ranged from 7.3 ppb (2006) to 6.3 ppb (2011). We found that a simple approach using tropospheric NO2 column data yielded models with slightly better predictive ability than those produced using a more involved approach that required simulation of surface-to-column ratios. The models were capable of capturing within-urban variability in NO2, and offer the ability to estimate ambient NO2 concentrations at monthly and annual time scales across Australia from 2006–2011. We are making our model predictions freely available for research.
Abstract:
Background This study reviewed the clinical presentation, cytologic findings and immunophenotype of 69 Merkel cell carcinoma (MCC) cases sampled by FNA. Methods Demographic and clinical data, cytology findings and results of ancillary testing were reviewed. Results Median patient age was 78 years (range 37-104), with a 1:1.8 female-to-male ratio. The most common FNA sites sampled included lymph nodes in the neck, the axillary region, the inguinal region and the parotid gland. Most patients had a history of MCC (68%) and/or non-MCC malignancy (70%). The common cytologic pattern was a cellular smear with malignant cells arranged in a dispersed pattern with variable numbers of disorganised groups of cells. Cytoplasm was scant or absent, and nuclei showed mild to moderate anisokaryosis, stippled chromatin, inconspicuous nucleoli and nuclear molding. Numerous apoptotic bodies were often present. Cell block samples (28 cases) were usually positive for cytokeratins in a perinuclear dot pattern, including CK20 positivity in 88% of cases. CD56 was the most sensitive (95%) neuroendocrine marker on cell blocks and was also positive with flow cytometry in 9 cases tested. Conclusions MCC is most commonly seen in FNA specimens from the head and neck of elderly patients, often with a history of previous skin lesions. Occasional cases present in younger patients, and some may be mistaken for other round blue cell tumors, such as lymphoma. CD56 may be a useful marker in cell block preparations and in flow cytometric analysis of MCC.
Abstract:
Objective. Leconotide (CVID, AM336, CNSB004) is an omega conopeptide similar to ziconotide, which blocks voltage-sensitive calcium channels. However, unlike ziconotide, which must be administered intrathecally, leconotide can be given intravenously because it is less toxic. This study investigated the antihyperalgesic potency of leconotide given intravenously, alone and in combination with morphine administered intraperitoneally, in a rat model of bone cancer pain. Design. Syngeneic rat prostate cancer cells (AT3B-1) were injected into one tibia of male Wistar rats. The tumor expanded within the bone, causing hyperalgesia to heat applied to the ipsilateral hind paw. The maximum doses (MD) of morphine and leconotide, given alone and in combination, that caused no effect in an open-field activity monitor, on the rotarod, or on blood pressure and heart rate were determined. Paw withdrawal thresholds from noxious heat were measured. Dose-response curves for morphine (0.312-5.0 mg/kg intraperitoneal) and leconotide (0.002-200 µg/kg intravenous) given alone were plotted, and responses were compared with those caused by morphine and leconotide in combination. Results. Leconotide caused minimal antihyperalgesic effects when administered alone. Morphine given alone intraperitoneally caused dose-related antihyperalgesic effects (ED50 = 2.40 ± 1.24 mg/kg), which were increased by coadministration of leconotide at 20 µg/kg (morphine ED50 = 0.16 ± 1.30 mg/kg), 0.2 µg/kg (morphine ED50 = 0.39 ± 1.27 mg/kg) and 0.02 µg/kg (morphine ED50 = 1.24 ± 1.30 mg/kg). Conclusions. Leconotide significantly increased the reversal by morphine of bone cancer-induced hyperalgesia without increasing the side-effect profile of either drug. Clinical Implication. Translation of the method of analgesia described here into clinical practice will improve the quantity and quality of analgesia in patients with bone metastases.
The use of an ordinary parenteral route for administration of the calcium channel blocker (leconotide) at low dose opens up the technique to large numbers of patients who could not have an intrathecal catheter for drug administration. Furthermore, the potentiating synergistic effect with morphine on hyperalgesia without increased side effects will lead to greater analgesia with improved quality of life.
Abstract:
Introduction The professional doctorate is specifically designed for professionals investigating real-world problems and issues relevant to a profession, industry and/or the community. The focus is scholarly research into professional practices. The research programme bridges academia and the professions, and offers doctoral candidates the opportunity to investigate issues relevant to their own practices and to apply these understandings to their professional contexts. The study on which this article is based sought to track the scholarly skill development of a cohort of professional doctoral students who commenced the course in January 2008 at an Australian university. Because they hold positions of responsibility and are time-poor, many doctoral students have difficulty transitioning from professional practitioner to researcher and scholar. The struggle many experience is in developing a theoretical or conceptual standpoint for argumentation (Lesham, 2007; Weese et al., 1999). It was thought that a scaffolded learning environment drawing on a blended learning approach, incorporating face-to-face intensive blocks and collaborative knowledge-building tools such as wikis, would provide a data source for understanding the development of scholarly skills. Wikis, weblogs and similar social networking software have the potential to support communities to share, learn, create and collaborate. Each candidate in the 2008 cohort was encouraged to develop a wiki page to provide the participants and the teaching team members with textual indicators of progress. Learning tasks were scaffolded with the expectation that the candidates would complete them via the wikis, and that cohort members would comment on each other's work, together with the supervisor and/or teaching team member allocated to each candidate.
The supervisor is responsible for supervising the candidate’s work through to submission of the thesis for examination and the teaching team member provides support to both the supervisor and the candidate through to confirmation. This paper reports on the learning journey of a cohort of doctoral students during the first seven months of their professional doctoral programme to determine if there had been any qualitative shifts in understandings, expectations and perceptions regarding their developing knowledge and skills. The paper is grounded in the literature pertaining to doctoral studies and examines the structure of the professional doctoral programme. Following this is a discussion of the qualitative study that helped to unearth key themes regarding the participants’ learning journey.
Abstract:
Biomolecules are chemical compounds found in living organisms; they are the building blocks of life and perform important functions. Fluctuations from the normal concentrations of these biomolecules in living systems lead to several disorders, so their exact determination in human fluids is essential from a clinical point of view. High performance liquid chromatography, flow injection analysis, capillary electrophoresis, fluorimetry, spectrophotometry, electrochemical and chemiluminescence techniques have usually been used for the determination of biologically important molecules. Among these, electrochemical determination of biomolecules has several advantages over other methods, viz., simplicity, selectivity and sensitivity. In the past two decades, electrodes modified with polymer films, self-assembled monolayers containing different functional groups and carbon paste have been used as electrochemical sensors. In recent years, however, nanomaterial-based electrochemical sensors have played an important role in improving public health because of their rapid detection, high sensitivity and specificity in clinical diagnostics. To date, gold nanoparticles (AuNPs) have received increasing attention, mainly due to their fascinating electronic and optical properties, a consequence of their reduced dimensions. These unique properties make AuNPs an ideal candidate for the immobilization of enzymes for biosensing. Further, the electrochemical properties of AuNPs show that they enhance electrode conductivity, facilitate electron transfer and improve the detection limit of biomolecules. In this chapter, we summarize the different strategies used for attaching AuNPs to electrode surfaces and highlight the electrochemical determination of glucose, ascorbic acid (AA), uric acid (UA) and dopamine derivatives using AuNP-modified electrodes.
Abstract:
Purpose Age-related changes in motion sensitivity have been found to relate to reductions in various indices of driving performance and safety. The aim of this study was to investigate the basis of this relationship in terms of determining which aspects of motion perception are most relevant to driving. Methods Participants included 61 regular drivers (age range 22–87 years). Visual performance was measured binocularly. Measures included visual acuity, contrast sensitivity and motion sensitivity assessed using four different approaches: (1) threshold minimum drift rate for a drifting Gabor patch, (2) Dmin from a random dot display, (3) threshold coherence from a random dot display, and (4) threshold drift rate for a second-order (contrast modulated) sinusoidal grating. Participants then completed the Hazard Perception Test (HPT) in which they were required to identify moving hazards in videos of real driving scenes, and also a Direction of Heading task (DOH) in which they identified deviations from normal lane keeping in brief videos of driving filmed from the interior of a vehicle. Results In bivariate correlation analyses, all motion sensitivity measures significantly declined with age. Motion coherence thresholds, and minimum drift rate threshold for the first-order stimulus (Gabor patch) both significantly predicted HPT performance even after controlling for age, visual acuity and contrast sensitivity. Bootstrap mediation analysis showed that individual differences in DOH accuracy partly explained these relationships, where those individuals with poorer motion sensitivity on the coherence and Gabor tests showed decreased ability to perceive deviations in motion in the driving videos, which related in turn to their ability to detect the moving hazards. 
Conclusions The ability to detect subtle movements in the driving environment (as determined by the DOH task) may be an important contributor to effective hazard perception, and is associated with age and an individual's performance on tests of motion sensitivity. The locus of the processing deficits appears to lie in first-order, rather than second-order, motion pathways.
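The bootstrap mediation analysis referred to above can be illustrated with a minimal sketch. This is not the study's actual analysis: the data, variable names and the use of simple (unadjusted) regression slopes are all assumptions for illustration. A percentile bootstrap repeatedly resamples the cases and recomputes the indirect effect a*b, where a is the slope of the mediator on the predictor and b the slope of the outcome on the mediator:

```python
import random

def bootstrap_indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrap 95% CI for the indirect effect a*b.

    a = OLS slope of mediator m on predictor x;
    b = OLS slope of outcome y on mediator m.
    (A full mediation model would adjust b for x; omitted here
    to keep the sketch short.)
    """
    rng = random.Random(seed)
    n = len(x)

    def slope(u, v):
        # Simple least-squares slope of v on u.
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
        den = sum((ui - mu) ** 2 for ui in u)
        return num / den

    estimates = []
    for _ in range(n_boot):
        # Resample cases with replacement and recompute a*b.
        idx = [rng.randrange(n) for _ in range(n)]
        xs = [x[i] for i in idx]
        ms = [m[i] for i in idx]
        ys = [y[i] for i in idx]
        estimates.append(slope(xs, ms) * slope(ms, ys))

    estimates.sort()
    return estimates[int(0.025 * n_boot)], estimates[int(0.975 * n_boot)]
```

If the resulting confidence interval excludes zero, the indirect (mediated) path is taken as statistically supported, which is the logic behind the mediation result reported in the abstract.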
Abstract:
Brisbane, the capital of Queensland in South-East Queensland, is situated on the Brisbane River, one of the largest rivers (and floodplains) on the east coast of Australia. The river defines the city and gives it its name. The river has been a natural place to accommodate some of the city's population growth, with high-density development that capitalises on the natural amenity, cycleways, a string of parks and the flatter land. The major floods of 2011 and the scare of 2013 have revealed a more malevolent quality of the river and shifted thinking on its role within the city. The floods led the council, for the first time, to acquire prime development sites near the river that had proposals for high-density development and, at great cost, turn them into parks. The pressure for population growth in Brisbane remains: 140,000 new dwellings are required by 2031. Brownfield sites are less plentiful, and there is interest in rethinking some of the other strategic locations in the city, away from the river, on higher ground and steeper slopes. Some of these places are currently open spaces. Victoria Park Golf Course sits on a high ridge line in a very strategic part of the city just north of the city centre and is one of the few remaining golf courses close to the centre of an Australian capital city. While it is a public course and a valuable community asset, it has been compromised by the recently completed northern busway, with two bus stations constructed on its edges. It is bounded on the west and north-east by two major community facilities: the Queensland University of Technology (QUT) to the west and the RBW Hospital at its northern end. In a city in need of urban consolidation, perhaps it is time to review the future of the golf course. This question has been investigated as a conjecture in the Master of Architecture program at QUT. The project has been to re-imagine Victoria Park as a new city parkland and a place that makes an urban connection from QUT to the hospital.
This new urban precinct is to be a medium- to high-density transit-oriented development that capitalises on the busway stations and the proximity of the university and hospital. The precinct will frame, define and interact with the new major urban park for the city. A key question being addressed is how the design can embody and define principles of a subtropical urbanism. Students are identifying the appropriate street and block structure, density and built form to be accommodated on blocks that define and activate a rich sequence of streets and public spaces. The paper presents a critical overview of the project work, providing a lens onto how future professionals may respond to the issues that will be the focus of their professional lives.
Abstract:
A critical stage in open-pit mining is determining the optimal extraction sequence of blocks, which has significant impacts on mining profitability. In this paper, a more comprehensive block-sequencing optimisation model is developed for open-pit mines. In the model, the material characteristics of blocks, grade control, excavator assignment and block sequencing are investigated and integrated to maximise the short-term benefit of mining. Several case studies are modelled and solved with the CPLEX MIP and CP engines. Numerical investigations are presented to illustrate and validate the proposed methodology.
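The core block-sequencing idea can be sketched on a toy instance: pick the extraction order that maximises time-discounted block value subject to precedence (a block may only be mined after the blocks that must come before it). The brute-force enumeration below is purely illustrative; the values, precedence pairs and discount factor are invented, and it stands in for the CPLEX MIP/CP formulations the paper actually uses:

```python
from itertools import permutations

def best_sequence(values, precedes, discount=0.9):
    """Enumerate extraction orders of blocks 0..n-1, keep those that
    respect precedence (p must be mined before q for each (p, q) pair),
    and return the order maximising total discounted value."""
    n = len(values)
    best, best_val = None, float("-inf")
    for order in permutations(range(n)):
        pos = {b: t for t, b in enumerate(order)}
        # Skip orders that violate any precedence constraint.
        if any(pos[p] > pos[q] for p, q in precedes):
            continue
        # Value mined at period t is discounted by discount**t.
        val = sum(values[b] * discount ** t for t, b in enumerate(order))
        if val > best_val:
            best, best_val = order, val
    return best, best_val
```

Enumeration is exponential in the number of blocks, which is exactly why realistic instances are handed to MIP or constraint-programming solvers instead.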
Abstract:
This paper proposes a new multi-resource, multi-stage scheduling problem for optimising open-pit drilling, blasting and excavating operations under equipment capacity constraints. The flow process is analysed based on real-life data from an Australian iron ore mine site. The objective of the model is to maximise throughput and minimise the total idle time of equipment at each stage. The following mining attributes and constraints are considered: types of equipment; operating capacities of equipment; ready times of equipment; speeds of equipment; block-sequence-dependent movement times of equipment; equipment-assignment-dependent operation times of blocks; distances between each pair of blocks; due windows of blocks; material properties of blocks; swell factors of blocks; and slope requirements of blocks. The problem is formulated as a mixed integer program and solved with the ILOG-CPLEX optimiser. The proposed model is validated with extensive computational experiments to improve mine production efficiency at the operational level, and provides an intelligent decision support tool to account for the availability and usage of equipment units across the drilling, blasting and excavating stages.
Abstract:
This study was designed to identify the neural networks underlying automatic auditory deviance detection in 10 healthy subjects using functional magnetic resonance imaging. We measured blood oxygenation level-dependent contrasts derived from the comparison of blocks of stimuli presented as a series of standard tones (50 ms duration) alone versus blocks that contained rare duration-deviant tones (100 ms) that were interspersed among a series of frequent standard tones while subjects were watching a silent movie. Possible effects of scanner noise were assessed by a “no tone” condition. In line with previous positron emission tomography and EEG source modeling studies, we found temporal lobe and prefrontal cortical activation that was associated with auditory duration mismatch processing. Data were also analyzed employing an event-related hemodynamic response model, which confirmed activation in response to duration-deviant tones bilaterally in the superior temporal gyrus and prefrontally in the right inferior and middle frontal gyri. In line with previous electrophysiological reports, mismatch activation of these brain regions was significantly correlated with age. These findings suggest a close relationship of the event-related hemodynamic response pattern with the corresponding electrophysiological activity underlying the event-related “mismatch negativity” potential, a putative measure of auditory sensory memory.
Abstract:
Cryptographic hash functions are an important tool of cryptography and play a fundamental role in efficient and secure information processing. A hash function maps an arbitrary finite-length input message to a fixed-length output referred to as the hash value. As security requirements, it should be infeasible to find two distinct input messages that produce the same hash value, and it should be difficult to recover an input message from a given hash value. Secure hash functions serve data integrity, non-repudiation and authenticity of the source in conjunction with digital signature schemes. Keyed hash functions, also called message authentication codes (MACs), serve data integrity and data origin authentication in the secret-key setting. The building blocks of hash functions can be designed using block ciphers, modular arithmetic, or from scratch. The design principles of the popular Merkle–Damgård construction are followed in almost all widely used standard hash functions, such as MD5 and SHA-1.
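The unkeyed versus keyed settings described above can be illustrated with Python's standard hashlib and hmac modules (using SHA-256 here rather than the older MD5/SHA-1 named in the abstract; the message and key strings are invented for the example):

```python
import hashlib
import hmac

message = b"transfer 100 to alice"

# Unkeyed hash: a fixed-length digest; any change to the message
# changes the digest, which supports integrity checking.
digest = hashlib.sha256(message).hexdigest()

# Keyed hash (MAC): only holders of the secret key can recompute the
# tag, adding data-origin authentication on top of integrity.
key = b"shared-secret-key"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Verify with a constant-time comparison to avoid timing side channels.
ok = hmac.compare_digest(
    tag, hmac.new(key, message, hashlib.sha256).hexdigest()
)
```

A verifier without the key can check nothing about the MAC, which is precisely the difference between the public hash setting and the secret-key setting described above.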
Abstract:
This thesis studies the structural behaviour of high bond strength masonry shear walls by developing a combined interface and surface contact model. The results are further verified by a cost-effective structural-level model, which was then used extensively to predict all possible failure modes of high bond strength masonry shear walls. It is concluded that increasing the bond strength of masonry changes the failure mode from diagonal cracking to base sliding and does not proportionally increase the in-plane shear capacity. This can be overcome by increasing the pre-compression pressure, which causes failure through the blocks. A design equation is proposed, and high bond strength masonry is recommended for taller buildings and/or pre-stressed masonry applications.