869 results for surrogate data
Abstract:
To this point, the collection has provided research-based, empirical accounts of the various and multiple effects of the National Assessment Program – Literacy and Numeracy (NAPLAN) in Australian schooling as a specific example of the global phenomenon of national testing. In this chapter, we want to develop a more theoretical analysis of national testing systems, globalising education policy and the promise of national testing as adaptive, online tests. These future moves claim to provide faster feedback and more useful diagnostic help for teachers. There is a utopian testing dream that one day adaptive, online tests will be responsive in real time, providing integrated, personalised testing, pedagogy and intervention for each student. The moves towards these next-generation assessments are well advanced, including the work of Pearson’s NextGen Learning and Assessment research group, the Organization for Economic Co-operation and Development’s (OECD) move into assessing affective skills and the Australian Curriculum, Assessment and Reporting Authority’s (ACARA) decision to phase in NAPLAN as an online, adaptive test from 2017...
Abstract:
High-stakes testing is changing what it means to be a ‘good teacher’ in the contemporary school. This paper uses Deleuze and Guattari's ideas on the control society and dividuation in the context of National Assessment Program Literacy and Numeracy (NAPLAN) testing in Australia to suggest that the database generates new understandings of the ‘good teacher’. Media reports are used to look at how teachers are responding to the high-stakes database through manipulating the data. This article argues that manipulating the data is a regrettable, but logical, response to manifestations of teaching where only the data counts.
Abstract:
Fusing data from multiple sensing modalities, e.g. laser and radar, is a promising approach to achieve resilient perception in challenging environmental conditions. However, this may lead to ‘catastrophic fusion’ in the presence of inconsistent data, i.e. when the sensors do not detect the same target due to distinct attenuation properties. It is often difficult to discriminate consistent from inconsistent data across sensing modalities using local spatial information alone. In this paper we present a novel consistency test based on the log marginal likelihood of a Gaussian process model that evaluates data from range sensors in a relative manner. A new data point is deemed to be consistent if the model statistically improves as a result of its fusion. This approach avoids the need for absolute spatial distance threshold parameters as required by previous work. We report results from object reconstruction with both synthetic and experimental data that demonstrate an improvement in reconstruction quality, particularly in cases where data points are inconsistent yet spatially proximal.
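The abstract above describes a relative consistency test: a candidate range measurement is fused only if the Gaussian process model statistically improves as a result. Below is a minimal sketch of that idea, assuming scikit-learn's GaussianProcessRegressor; the RBF/WhiteKernel choice, the per-point normalisation of the log marginal likelihood and the function names (fit_gp, is_consistent) are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a log-marginal-likelihood consistency test for fusing a new
# range measurement into a Gaussian process surface model.
# Assumptions (not from the paper): scikit-learn GP, RBF + white-noise kernel,
# per-point normalisation of the log marginal likelihood.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_gp(X, y):
    """Fit a GP to range data and return the fitted model."""
    kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-2)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X, y)
    return gp

def is_consistent(X, y, x_new, y_new):
    """Accept the new point if the per-point log marginal likelihood of the
    fused model does not degrade relative to the current model."""
    gp_old = fit_gp(X, y)
    lml_old = gp_old.log_marginal_likelihood_value_ / len(y)

    X_fused = np.vstack([X, x_new])
    y_fused = np.append(y, y_new)
    gp_new = fit_gp(X_fused, y_fused)
    lml_new = gp_new.log_marginal_likelihood_value_ / len(y_fused)

    return lml_new >= lml_old

# Example: noisy laser points on a roughly planar surface, plus one candidate
# radar return that is either near the surface or far off it.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 2))
y = 2.0 + 0.1 * rng.normal(size=30)
print(is_consistent(X, y, np.array([[0.5, 0.5]]), 2.05))  # consistent candidate
print(is_consistent(X, y, np.array([[0.5, 0.5]]), 5.00))  # inconsistent candidate
```

Note that the decision is relative, as in the abstract: no absolute spatial distance threshold is needed, only a comparison of model evidence with and without the candidate point.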
Abstract:
Purpose – The purpose of this research is to examine the concept of “potential quality” – that is, a company's tangible search qualities (such as the physical servicescape and virtual servicescape) – within the context of the real-estate industry in the USA.
Design/methodology/approach – This qualitative study collects data by conducting personal in-depth interviews with 34 respondents who had been recent buyers or renters of property. The data are then coded and themed to identify quality dimensions relevant to this industry.
Findings – The results indicate that a buyer's perception of the overall service quality of real-estate service consists of two components: the interaction with a realtor (process quality); and the virtual servicescape, especially the firm's website design and content (potential quality). The study concludes that existing scales (such as SERVQUAL and RESERV) fail to capture the tangible component of service quality sufficiently in the real-estate industry.
Research limitations/implications – The study uses data from only one industry (real estate) and from only one demographic segment (professionals in higher education).
Practical implications – Service providers of intangible, high-contact services must appreciate the importance of the virtual servicescape as a surrogate quality indicator that can help to reduce information asymmetries and consumers' uncertainty with regard to initiating a business relationship. Real estate firms need to pay attention to the training of agents and the design and content of their e-service systems.
Originality/value – This study integrates potential quality, process quality, and outcome quality in a comprehensive proposed model. In particular, the study identifies “potential quality” as a combination of the attributes of the virtual service environment and the physical service environment.
Abstract:
Resolving species relationships and confirming diagnostic morphological characters for insect clades that are highly plastic, and/or include morphologically cryptic species, is crucial for both academic and applied reasons. Within the true fly (Diptera) family Chironomidae, a most ubiquitous freshwater insect group, the genera Cricotopus Wulp, 1874 and Paratrichocladius Santos-Abreu, 1918 have long been taxonomically confusing. Indeed, until recently the Australian fauna had been examined in just two unpublished theses: most species were known by informal manuscript names only, with no concept of relationships. Understanding species limits, and the associated ecology and evolution, is essential to address taxonomic sufficiency in biomonitoring surveys. Immature stages are collected routinely, but tolerance is generalized at the genus level, despite marked variation among species. Here, we explored this issue using a multilocus molecular phylogenetic approach, including the standard mitochondrial barcode region, and tested explicitly for phylogenetic signal in ecological tolerance of species. Additionally, we addressed biogeographical patterns by conducting Bayesian divergence time estimation. We sampled all but one of the now recognized Australian Cricotopus species and tested monophyly using representatives from other austral and Asian locations. Cricotopus is revealed as paraphyletic by the inclusion of a nested monophyletic Paratrichocladius, with in-group diversification beginning in the Eocene. Previous morphological species concepts are largely corroborated, but some additional cryptic diversity is revealed. No significant relationship was observed between the phylogenetic position of a species and its ecology, implying either that tolerance to deleterious environmental impacts is a convergent trait among many Cricotopus species or that sensitive and restricted taxa have diversified into more narrow niches from a widely tolerant ancestor.
Abstract:
Head and neck squamous cell carcinoma (HNSCC) is the sixth most common cancer, with 650,000 new cases per annum worldwide. HNSCC causes high morbidity with a 5-year survival rate of less than 60%, which has not improved due to the lack of early detection (Bozec et al. Eur Arch Otorhinolaryngol. 2013;270: 2745–9). Metastatic disease remains one of the leading causes of death in HNSCC patients. This review article provides a comprehensive overview of the literature over the past 5 years on the detection of circulating tumour cells (CTCs) in HNSCC, CTC biology and future perspectives. CTCs are a hallmark of invasive cancer cells and key to metastasis. CTCs can be used as surrogate markers of overall survival and progression-free survival. CTCs are currently used as prognostic factors for breast, prostate and colorectal cancers using the CellSearch® system. CTCs have been detected in HNSCC; however, the numbers reported depend on the technique applied, the time of blood collection and the clinical stage of the patient. The impact of CTCs in HNSCC is not well understood and, thus, their detection is not yet part of routine clinical practice. Validated detection technologies that are able to capture CTCs undergoing epithelial–mesenchymal transition are needed. This will aid in the capture of heterogeneous CTCs, which can be compiled as new targets for the current Food and Drug Administration (FDA)-cleared CellSearch® system. Recent studies on CTCs in HNSCC with the CellSearch® system have shown variable data. Therefore, there is an immediate need for large clinical trials encompassing a suite of biomarkers capturing CTCs in HNSCC, before CTCs can be used as prognostic markers in HNSCC patient management.
Abstract:
National pride is both an important and understudied topic with respect to economic behaviour, hence this thesis investigates whether: 1) there is a "light" side of national pride through increased compliance, and a "dark" side linked to exclusion; 2) successful priming of national pride is linked to increased tax compliance; and 3) East German post-reunification outmigration is related to loyalty. The project comprises three related empirical studies, analysing evidence from a large, aggregated, international survey dataset; a tax compliance laboratory experiment combining psychological priming with measurement of heart rate variability; and data collected after the fall of the Berlin Wall (a situation approximating a natural experiment).
Abstract:
Since 2006, we have been conducting urban informatics research that we define as “the study, design, and practice of urban experiences across different urban contexts that are created by new opportunities of real-time, ubiquitous technology and the augmentation that mediates the physical and digital layers of people networks and urban infrastructures” [1]. Various new research initiatives under the label “urban informatics” have been started since then by universities (e.g., NYU’s Center for Urban Science and Progress) and industry (e.g., Arup, McKinsey) worldwide. Yet, many of these new initiatives are limited to what Townsend calls “data-driven approaches to urban improvement” [2]. One of the key challenges is that any quantity of aggregated data does not easily translate directly into quality insights to better understand cities. In this talk, I will raise questions about the purpose of urban informatics research beyond data, and show examples of media architecture, participatory city making, and citizen activism. I argue for (1) broadening the disciplinary foundations that urban science approaches draw on; (2) maintaining a hybrid perspective that considers both the bird’s-eye view and the citizen’s view; and (3) employing design research not just to understand, but to bring about actionable knowledge that will drive change for good.
Abstract:
Big data analysis in the healthcare sector is still in its early stages compared with other business sectors, for numerous reasons, including accommodating the volume, velocity and variety of healthcare data and identifying platforms that can examine data from multiple sources, such as clinical records, genomic data, financial systems and administrative systems. The Electronic Health Record (EHR) is a key information resource for big data analysis and is also composed of varied co-created values. Successful integration and crossing of different subfields of healthcare data, such as biomedical informatics and health informatics, could lead to huge improvements for the end users of the health care system, i.e. the patients.
Abstract:
Huge amounts of data are generated from a variety of information sources in healthcare, and these sources originate from a variety of clinical information systems and corporate data warehouses. The data derived from these sources are used for analysis and trending purposes, thus playing an influential role as a real-time decision-making tool. The unstructured, narrative data provided by these sources qualify as healthcare big data, and researchers argue that the application of big data in healthcare might enable greater accountability and efficiency.
Abstract:
Distributed systems are widely used for solving large-scale and data-intensive computing problems, including all-to-all comparison (ATAC) problems. However, when used for ATAC problems, existing computational frameworks such as Hadoop focus on load balancing for allocating comparison tasks, without careful consideration of data distribution and storage usage. While Hadoop-based solutions provide users with simplicity of implementation, their inherent MapReduce computing pattern does not match the ATAC pattern. This leads to load imbalances and poor data locality when Hadoop's data distribution strategy is used for ATAC problems. Here we present a data distribution strategy which considers data locality, load balancing and storage savings for ATAC computing problems in homogeneous distributed systems. A simulated annealing algorithm is developed for data distribution and task scheduling. Experimental results show a significant performance improvement for our approach over Hadoop-based solutions.
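The abstract mentions a simulated annealing algorithm for data distribution and task scheduling in all-to-all comparison (ATAC) problems. The sketch below is a generic simulated-annealing loop over file-to-node assignments with an illustrative cost that penalises load imbalance and remote data access; the cost function, cooling schedule and parameter values are assumptions, not the paper's formulation.

```python
# Generic simulated annealing over file-to-node assignments for ATAC workloads.
# Cost model (illustrative): each pairwise task (a, b) runs on the node holding
# file a; fetching file b from another node counts as a remote access.
import itertools
import math
import random

def cost(assignment, n_nodes):
    """Penalise load imbalance across nodes plus remote data accesses."""
    loads = [0] * n_nodes
    remote = 0
    for a, b in itertools.combinations(list(assignment), 2):
        loads[assignment[a]] += 1
        if assignment[a] != assignment[b]:
            remote += 1
    return (max(loads) - min(loads)) + remote

def anneal(n_files, n_nodes, steps=5000, t0=10.0, alpha=0.999):
    assignment = {f: random.randrange(n_nodes) for f in range(n_files)}
    current = cost(assignment, n_nodes)
    best, best_cost = dict(assignment), current
    t = t0
    for _ in range(steps):
        f = random.randrange(n_files)
        old = assignment[f]
        assignment[f] = random.randrange(n_nodes)   # propose moving one file
        candidate = cost(assignment, n_nodes)
        if candidate <= current or random.random() < math.exp((current - candidate) / t):
            current = candidate                      # accept the move
            if current < best_cost:
                best, best_cost = dict(assignment), current
        else:
            assignment[f] = old                      # reject the move
        t *= alpha                                   # geometric cooling
    return best, best_cost

if __name__ == "__main__":
    placement, c = anneal(n_files=20, n_nodes=4)
    print("final cost:", c)
```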
Abstract:
The increase in data center dependent services has made energy optimization of data centers one of the most exigent challenges in today's Information Age. The necessity of green and energy-efficient measures is very high for reducing carbon footprint and exorbitant energy costs. However, inefficient application management of data centers results in high energy consumption and low resource utilization efficiency. Unfortunately, in most cases, deploying an energy-efficient application management solution inevitably degrades the resource utilization efficiency of the data centers. To address this problem, a Penalty-based Genetic Algorithm (GA) is presented in this paper to solve a defined profile-based application assignment problem whilst maintaining a trade-off between the power consumption performance and resource utilization performance. Case studies show that the penalty-based GA is highly scalable and provides 16% to 32% better solutions than a greedy algorithm.
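As a rough illustration of the penalty idea in the abstract, the sketch below scores a candidate application-to-server assignment by combining a simple power model, average utilisation and a large penalty for capacity violations; the weights, the linear power model and the field names (cpu, mem, p_idle, p_max) are illustrative assumptions rather than the paper's model.

```python
# Illustrative penalty-based fitness for a profile-based application assignment:
# infeasible assignments are not discarded but heavily penalised, steering a GA
# toward feasible, low-power, well-utilised solutions.

def fitness(assignment, apps, servers, w_power=0.6, w_util=0.4, penalty=1000.0):
    """assignment[i] = index of the server hosting application i. Lower is better."""
    cpu_used = [0.0] * len(servers)
    mem_used = [0.0] * len(servers)
    for app, srv in zip(apps, assignment):
        cpu_used[srv] += app["cpu"]
        mem_used[srv] += app["mem"]

    power, util, violations, active = 0.0, 0.0, 0, 0
    for i, srv in enumerate(servers):
        if cpu_used[i] > 0:
            active += 1
            load = min(cpu_used[i] / srv["cpu"], 1.0)
            # Simple linear power model: idle power plus a load-proportional term.
            power += srv["p_idle"] + (srv["p_max"] - srv["p_idle"]) * load
            util += load
        if cpu_used[i] > srv["cpu"] or mem_used[i] > srv["mem"]:
            violations += 1          # capacity violation -> penalty term

    avg_util = util / active if active else 0.0
    return w_power * power + w_util * (1.0 - avg_util) * 100 + penalty * violations

# Toy usage with made-up server and application profiles.
servers = [{"cpu": 16.0, "mem": 64.0, "p_idle": 100.0, "p_max": 250.0}] * 3
apps = [{"cpu": 4.0, "mem": 8.0}, {"cpu": 6.0, "mem": 16.0}, {"cpu": 5.0, "mem": 12.0}]
print(fitness([0, 0, 1], apps, servers))
```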
Abstract:
In the past few years, the virtual machine (VM) placement problem has been studied intensively and many algorithms for the VM placement problem have been proposed. However, those proposed VM placement algorithms have not been widely used in today's cloud data centers as they do not consider the migration cost from the current VM placement to the new optimal VM placement. As a result, the gain from optimizing VM placement may be less than the loss incurred by migrating from the current VM placement to the new one. To address this issue, this paper presents a penalty-based genetic algorithm (GA) for the VM placement problem that considers the migration cost in addition to the energy consumption of the new VM placement and the total inter-VM traffic flow in the new VM placement. The GA has been implemented and evaluated by experiments, and the experimental results show that the GA outperforms two well-known algorithms for the VM placement problem.
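A sketch of the kind of combined objective the abstract describes: energy of the new placement, inter-VM traffic that crosses host boundaries, and the cost of migrating VMs away from their current hosts. The data structures, weights and per-migration cost below are illustrative assumptions, not the paper's formulation.

```python
# Combined placement cost = weighted energy + weighted cross-host traffic
# + migration cost for every VM that changes host.

def placement_cost(new_placement, current_placement, hosts, traffic,
                   w_energy=1.0, w_traffic=0.5, migration_cost=10.0):
    """new_placement / current_placement: dict vm_id -> host_id.
    hosts: dict host_id -> power estimate when the host is active.
    traffic: dict (vm_a, vm_b) -> traffic volume between the two VMs."""
    # Energy: every host that ends up hosting at least one VM is counted active.
    energy = sum(hosts[h] for h in set(new_placement.values()))

    # Inter-VM traffic that must cross host boundaries in the new placement.
    cross_traffic = sum(vol for (a, b), vol in traffic.items()
                        if new_placement[a] != new_placement[b])

    # Migration cost: one unit per VM that changes host.
    migrations = sum(1 for vm in new_placement
                     if new_placement[vm] != current_placement.get(vm))

    return w_energy * energy + w_traffic * cross_traffic + migration_cost * migrations

# Toy usage with made-up hosts, VMs and traffic volumes.
current = {"vm1": "h1", "vm2": "h1", "vm3": "h2"}
proposed = {"vm1": "h1", "vm2": "h2", "vm3": "h2"}
hosts = {"h1": 200.0, "h2": 180.0}
traffic = {("vm1", "vm2"): 5.0, ("vm2", "vm3"): 2.0}
print(placement_cost(proposed, current, hosts, traffic))
```

A penalty-based GA would use such a cost as (the inverse of) fitness, so that a placement requiring many migrations is only chosen when its energy and traffic savings outweigh the migration cost.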
Abstract:
Although live VM migration has been intensively studied, the problem of live migration of multiple interdependent VMs has hardly been investigated. The most important problem in the live migration of multiple interdependent VMs is how to schedule the migrations, as the schedule directly affects the total migration time and the total downtime of those VMs. Aiming at minimizing both the total migration time and the total downtime simultaneously, this paper presents a Strength Pareto Evolutionary Algorithm 2 (SPEA2) for the multi-VM migration scheduling problem. The SPEA2 has been evaluated by experiments, and the experimental results show that the SPEA2 can generate a set of VM migration schedules with a shorter total migration time and a shorter total downtime than an existing genetic algorithm, namely the Random Key Genetic Algorithm (RKGA). This paper also studies the scalability of the SPEA2.
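SPEA2 maintains an archive of non-dominated solutions for the two objectives named in the abstract, total migration time and total downtime. The snippet below shows only the Pareto-dominance check and a naive archive update, not a full SPEA2 (no fitness assignment, density estimation or truncation); the example schedules and their scores are made up for illustration.

```python
# Bi-objective bookkeeping for migration schedules scored as
# (total_migration_time, total_downtime); lower is better in both objectives.

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep only mutually non-dominated (schedule, objectives) entries."""
    _, cand_obj = candidate
    if any(dominates(obj, cand_obj) for _, obj in archive):
        return archive                                   # candidate is dominated
    archive = [(s, obj) for s, obj in archive if not dominates(cand_obj, obj)]
    archive.append(candidate)
    return archive

# Example: three candidate schedules with (migration time, downtime) scores.
archive = []
for entry in [("s1", (120.0, 4.0)), ("s2", (100.0, 6.0)), ("s3", (130.0, 5.0))]:
    archive = update_archive(archive, entry)
print([name for name, _ in archive])   # s3 is dominated by s1, so s1 and s2 remain
```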
Abstract:
Big data has already outperformed traditional data management efforts in almost all industries. In other instances, it has obtained promising results that derive value from large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics, which describes data sets and analytical techniques in software applications that are too large and complex for traditional tools, has become increasingly important due to its significant advantages, including better business decisions, cost reduction and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems that contain a large number of data sources with interrelated and interconnected data attributes. This has resulted in challenging and highly dynamic environments, leading to the creation of big data with its innumerable complexities, for instance sharing information while meeting the security requirements expected by stakeholders. Compared with other sectors, big data analysis in the health sector is still in its early stages. Key challenges include accommodating the volume, velocity and variety of healthcare data amid the current deluge of exponential growth. Given the complexity of big data, it is understood that while data storage and accessibility are technically manageable, implementing Information Accountability measures for healthcare big data might be a practical solution in support of information security, privacy and traceability. Transparency is one important measure that can demonstrate integrity, which is a vital factor in healthcare services. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity, controversy about interpretation and, finally, liability [2]. According to current studies, Electronic Health Records (EHRs) are key information resources for big data analysis and are also composed of varied co-created values [3]. Common healthcare information originates from, and is used by, different actors and groups, which facilitates understanding of its relationship to other data sources. Consequently, healthcare services often serve as an integrated service bundle. Although this is a critical requirement in healthcare services and analytics, it is difficult to find a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements. Therefore, as a remedy, this research work focuses on a systematic approach containing comprehensive guidelines on the data that must be provided to apply and evaluate big data analysis until the necessary decision-making requirements are fulfilled to improve the quality of healthcare services. Hence, we believe that this approach would subsequently improve quality of life.