368 results for Specification searching


Relevance: 10.00%

Abstract:

This research investigates the home literacy education practices of Taiwanese families in Australia. As Taiwanese immigrants represent the largest "Chinese Australian" subgroup to have settled in the state of Queensland, teachers in this state often face the challenges of cultural differences between Australian schools and Taiwanese homes. Extensive work by previous researchers suggests that understanding the cultural and linguistic differences that influence how an immigrant child views and interacts with his or her environment is one way to minimise these challenges. Cultural practices begin in infancy and at home; this study therefore focuses on young children around the age of four to five. It examines the forms of literacy education that are enacted and valued by Taiwanese parents in Australia. Specifically, this study asks: what literacy knowledge and skills are taught at home? How are they taught? And why are they taught? The study is framed by Pierre Bourdieu's theory of social practice, which defines literacy from a sociological perspective. The aim is to understand the practices through which literacy is taught in the Taiwanese homes. Practices of literacy education are culturally embedded; accordingly, the study shows the culturally specialised ways of learning and knowing that are enacted in the study homes. The study entailed four case studies that draw on: observations and recordings of the interactions between the study parent and child in their literacy events; interviews and dialogues with the parents involved; and a collection of photographs of the children's linguistic resources and artefacts. The methodological arguments and design addressed the complexity of home literacy education, in which Taiwanese parents raise children in their own cultural ways while adapting to a new country in an immigrant context.
In other words, the methodology involves not only cultural practices but also change and continuity in home literacy practices. Bernstein's theory of pedagogic discourse was used to undertake a detailed analysis of parents' selection and organisation of content for home literacy education, and the evaluative criteria they established for the selected literacy knowledge and skills. This analysis showed how parents selected and controlled the interactions in their child's literacy learning. Bernstein's theory, specifically the concepts of "classification" and "framing", was also used to analyse change and continuity in home literacy practice. The design of this study aimed to gain an understanding of parents' literacy teaching in an immigrant context. The study found that parents tended to value and enact traditional practices, yet most were also searching for innovative ideas for their adult-structured learning. Home literacy education in the Taiwanese families studied was found to be complex, multi-faceted and influenced in an ongoing way by external factors. Implications for educators and recommendations for future study are provided. The findings of this study offer early childhood teachers in Australia understandings that will help them build knowledge about the home literacy education of Taiwanese Australian families.

Relevance: 10.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space; these are classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data, one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
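The transductive kernel-learning idea can be sketched in miniature. The snippet below is a simplified stand-in for the paper's SDP: instead of optimising over the full cone of positive semidefinite matrices, it grid-searches convex combinations of two RBF base kernels, scoring each candidate by its alignment with the training labels. The data, the choice of base kernels, and the grid search are all illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def alignment(K, y):
    """Empirical alignment between a kernel matrix K and the label matrix yy^T."""
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

def rbf(X, gamma):
    """RBF kernel matrix: exp(-gamma * squared Euclidean distance)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

# Hypothetical two-class data: two Gaussian blobs in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

# Two fixed base kernels with different bandwidths.
K1, K2 = rbf(X, 0.1), rbf(X, 10.0)

# Search over convex combinations mu*K1 + (1-mu)*K2 -- a crude stand-in
# for the SDP over the full set of candidate kernel matrices.
best_mu = max(np.linspace(0.0, 1.0, 101),
              key=lambda m: alignment(m * K1 + (1 - m) * K2, y))
```

A real implementation would pose this as a semidefinite program and optimise jointly over the kernel and the classifier; the grid search is only meant to convey that selecting the kernel matrix is itself a model-selection step with a well-defined objective.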

Relevance: 10.00%

Abstract:

This paper presents a group maintenance scheduling case study for a water distribution network. The network presents the challenge of maintaining aging pipelines amid rising annual maintenance costs. The case study focuses on developing an effective maintenance plan for the water utility. Current replacement planning is difficult because it must balance replacement needs against limited budgets. A Maintenance Grouping Optimization (MGO) model based on a modified genetic algorithm was utilized to develop an optimum group maintenance schedule over a 20-year cycle. The geographical adjacency of pipelines was used as a grouping criterion, controlling the search space of the MGO model through a Judgment Matrix. Based on the optimum group maintenance schedule, the total cost was effectively reduced compared with schedules that do not group maintenance jobs. This optimum result can be used as guidance to optimize the current maintenance plan for the water utility.
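The cost saving from grouping can be illustrated with a toy sketch. Everything below is hypothetical: the repair and mobilisation costs, the Judgment Matrix entries, and the greedy merge used as a simple stand-in for the paper's genetic-algorithm search.

```python
import numpy as np

# Hypothetical data: 6 pipelines, each with a repair cost, plus a crew
# mobilisation cost that is paid once per maintenance group.
repair = np.array([5.0, 4.0, 6.0, 3.0, 7.0, 5.0])
mobilise = 10.0

# Judgment Matrix: 1 if two pipelines are geographically adjacent and may
# share a maintenance job, else 0 (symmetric, illustrative values).
J = np.array([
    [1, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 0, 1, 1],
])

def cost(groups):
    """Total cost: one mobilisation per group plus individual repair costs."""
    return sum(mobilise + repair[list(g)].sum() for g in groups)

def greedy_group(J):
    """Greedily merge mutually adjacent pipelines (a stand-in for the GA search)."""
    groups, used = [], set()
    for i in range(len(J)):
        if i in used:
            continue
        g = {i}
        for j in range(i + 1, len(J)):
            # Only admit j if it is adjacent to every pipeline already in the group.
            if j not in used and all(J[j, k] for k in g):
                g.add(j)
        used |= g
        groups.append(g)
    return groups

ungrouped = cost([{i} for i in range(6)])   # every pipeline repaired separately
grouped = cost(greedy_group(J))             # adjacent pipelines share mobilisation
```

With these numbers the greedy pass forms the groups {0,1}, {2}, {3,4}, {5}, cutting two mobilisations; the GA in the paper searches far more of the grouping space than this one-pass heuristic.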

Relevance: 10.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space; these are classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labelled part of the data, one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.

Relevance: 10.00%

Abstract:

Railway timetabling is an important process in train service provision, as it matches transportation demand with infrastructure capacity while also considering customer satisfaction. It is a multi-objective optimisation problem in which a feasible solution, rather than the optimal one, is usually taken in practice because of time constraints. The quality of services may suffer as a result. In a railway open market, timetabling usually involves rounds of negotiations among a number of self-interested and independent stakeholders, which imposes additional objectives and constraints on the timetabling problem. When the requirements of all stakeholders are taken into consideration simultaneously, the computation demand is inevitably immense. Intelligent solution-searching techniques offer a possible way forward. This paper attempts to employ a particle swarm optimisation (PSO) approach to devise a railway timetable in an open market. The suitability and performance of PSO are studied on a multi-agent-based railway open-market negotiation simulation platform.
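A minimal PSO loop on a toy timetabling objective shows the mechanics. The objective (departure times close to demand-ideal times, with a minimum-headway penalty), the ideal times, and the swarm parameters below are all illustrative assumptions, not the paper's model or negotiation platform.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective: choose departure times (minutes) for 5 trains to be close
# to hypothetical demand-ideal times while keeping at least a 10-minute headway.
ideal = np.array([0.0, 12.0, 25.0, 40.0, 55.0])

def cost(t):
    t = np.sort(t)
    headway_penalty = np.sum(np.maximum(0.0, 10.0 - np.diff(t)) ** 2)
    return np.sum((t - ideal) ** 2) + 100.0 * headway_penalty

# Minimal PSO: positions x, velocities v, personal bests p, global best g.
n, dim, iters = 30, 5, 200
x = rng.uniform(0.0, 60.0, (n, dim))
v = np.zeros((n, dim))
p, p_cost = x.copy(), np.array([cost(xi) for xi in x])
g = p[p_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Inertia 0.7, cognitive and social weights 1.5 (common textbook defaults).
    v = 0.7 * v + 1.5 * r1 * (p - x) + 1.5 * r2 * (g - x)
    x = x + v
    c = np.array([cost(xi) for xi in x])
    better = c < p_cost
    p[better], p_cost[better] = x[better], c[better]
    g = p[p_cost.argmin()].copy()
```

In the paper the search space is shaped by stakeholder objectives and negotiation constraints; this sketch only demonstrates the particle-update rule that PSO brings to such a problem.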

Relevance: 10.00%

Abstract:

This paper draws on a major study the authors conducted for the Australian Government in 2009. It focuses on the diffusion issues surrounding the uptake of sustainable building and construction products in Australia. Innovative sustainable products can minimise the environmental impact during construction while maximising asset performance, durability and re-use. However, designers and clients face significant challenges in selecting appropriate sustainable products as part of an integrated design solution, including overall energy efficiency, water conservation, maintenance and durability, and low-impact use and consumption. The paper reviews the current state of sustainable energy and material product innovations in Australia. It examines the system dynamics surrounding these innovations as well as the drivers and obstacles to their diffusion throughout the Australian construction industry. The case product types reviewed comprise: solar energy technology, small wind turbines, advanced concrete technology, and warm-mixed asphalt. The conclusions highlight the important role played by Australian governments in facilitating improved adoption rates. This applies to governments in their various roles, but particularly as clients/owners, regulators, and investors in education, training, research and development. In their role as clients/owners, the paper suggests that governments can better facilitate innovation within the construction industry by adjusting specification policies to encourage the uptake of sustainable products. In their role as regulators, the findings suggest governments should encourage the application of innovative finance options and positive end-user incentives to promote sustainable product uptake. Further education for project-based firms and client/end users about the long-term financial and environmental benefits of innovative sustainable products is also required.
As more of the economy’s resources are diverted away from business-as-usual and into the use of sustainable products, some project-based firms may face short-term financial pain in re-shaping their businesses. Government policy initiatives can encourage firms to make the necessary adjustments and improve the diffusion of innovative sustainable products throughout the industry.

Relevance: 10.00%

Abstract:

Transmission smart grids will use a digital platform for the automation of high voltage substations. The IEC 61850 series of standards, released in parts over the last ten years, provides a specification for substation communications networks and systems. These standards, along with IEEE Std 1588-2008 Precision Time Protocol version 2 (PTPv2) for precision timing, are recommended by both the IEC Smart Grid Strategy Group and the NIST Framework and Roadmap for Smart Grid Interoperability Standards for substation automation. IEC 61850, PTPv2 and Ethernet are three complementary protocol families that together define the future of sampled value digital process connections for smart substation automation. A time synchronisation system is required for a sampled value process bus; however, the details are not defined in IEC 61850-9-2. PTPv2 provides the greatest accuracy of network-based time transfer systems, with timing errors of less than 100 ns achievable. The suitability of PTPv2 for synchronising sampling in a digital process bus is evaluated, with preliminary results indicating that the steady-state performance of low-cost clocks is an acceptable ±300 ns, but that corrections issued by grandmaster clocks can introduce significant transients. Extremely stable grandmaster oscillators are required to ensure any corrections are sufficiently small that time synchronising performance is not degraded.

Relevance: 10.00%

Abstract:

This is a practice-led project consisting of a historical novel, Abduction, and a related exegesis. The novel is a third-person intimate narrative set in the mid-nineteenth century and is based on actual events and persons caught up in, or furthering, the mass dispossession of small farmers in Scotland known as the ‘Clearances’. The narrative focuses on the situation in the Outer Hebrides and northern Scotland. It is based on documented facts leading up to a controversial trial in 1850 that arose because a twenty-year-old woman of the period (the central protagonist, Jess Mackenzie) eloped with a young farmer to escape her parents’ pressure to marry a rival suitor, himself a powerful lawyer and ‘factor’ at the centre of many of the Clearances. The young woman’s independent ideas were ahead of her time, and the decisions she made under great pressure were crucial in some dramatic events that unfolded in Scotland and later in the colony of Victoria, to which she and her new husband emigrated soon after the trial. The exegesis is composed of two unequal parts. It briefly considers the development of the literary historical fiction genre in the nineteenth century, with Walter Scott in particular, a genre that Victorian and contemporary authors have found useful in representing women’s issues of the Victorian era. The exegesis also briefly considers the appropriateness of the fiction genre (as opposed to creative nonfiction) in recreating lived experience in a fact-based work. The major part of the exegesis is a detailed, reflective analysis of the problem-solving process involved in writing the novel, structured by reference to Kate Grenville’s Searching for the Secret River, a work of metawriting that explains her creative process in researching and writing historical fiction based on fact.

Relevance: 10.00%

Abstract:

The Web has become a worldwide repository of information that individuals, companies, and organizations utilize to solve or address various information problems. Many Web users employ automated agents to gather this information for them, and some assume that this approach represents a more sophisticated method of searching. However, there is little research investigating how Web agents search for online information. In this research, we first provide a classification for information agents using stages of information gathering, gathering approaches, and agent architecture. We then examine an implementation of one of the resulting classifications in detail, investigating how agents search for information on Web search engines, including the session, query, term, duration and frequency of interactions. For this temporal study, we analyzed three data sets of queries and page views from agents interacting with the Excite and AltaVista search engines from 1997 to 2002, examining approximately 900,000 queries submitted by over 3,000 agents. Findings include: (1) agent sessions are extremely interactive, with sometimes hundreds of interactions per second; (2) agent queries are comparable to those of human searchers, with little use of query operators; (3) Web agents search for a relatively limited variety of information, with only 18% of the terms used being unique; and (4) the duration of agent-Web search engine interaction typically spans several hours. We discuss the implications for Web information agents and search engines.

Relevance: 10.00%

Abstract:

Purpose: Web search engines are frequently used by people to locate information on the Internet. However, not all queries have an informational goal. Instead of information, some people may be looking for specific web sites or may wish to conduct transactions with web services. This paper aims to focus on automatically classifying the different user intents behind web queries. Design/methodology/approach: For the research reported in this paper, 130,000 web search engine queries are categorized as informational, navigational, or transactional using a k-means clustering approach based on a variety of query traits. Findings: The research findings show that more than 75 percent of web queries (clustered into eight classifications) are informational in nature, with about 12 percent each for navigational and transactional. Results also show that web queries fall into eight clusters, six primarily informational, and one each of primarily transactional and navigational. Research limitations/implications: This study provides an important contribution to web search literature because it provides information about the goals of searchers and a method for automatically classifying the intents of the user queries. Automatic classification of user intent can lead to improved web search engines by tailoring results to specific user needs. Practical implications: The paper discusses how web search engines can use automatically classified user queries to provide more targeted and relevant results in web searching by implementing a real time classification method as presented in this research. Originality/value: This research investigates a new application of a method for automatically classifying the intent of user queries. There has been limited research to date on automatically classifying the user intent of web queries, even though the pay-off for web search engines can be quite beneficial. © Emerald Group Publishing Limited.
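The clustering step can be sketched with a from-scratch k-means over a few hypothetical query traits (term count, a URL-like token, a transaction verb). The study used 130,000 real queries and a richer set of traits, so the features, the sample queries, and k below are purely illustrative.

```python
import numpy as np

def features(query):
    """Hypothetical query traits hinting at informational/navigational/transactional intent."""
    terms = query.split()
    has_url = float(any("." in t or t.startswith("www") for t in terms))
    has_txn = float(any(t in {"buy", "download", "order"} for t in terms))
    return np.array([len(terms), has_url, has_txn])

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign points to nearest center, recompute centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep old center if cluster goes empty
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

queries = [
    "history of the roman empire",   # informational
    "effects of caffeine on sleep",  # informational
    "www.example.com",               # navigational
    "facebook.com login",            # navigational
    "buy cheap flights",             # transactional
    "download free antivirus",       # transactional
]
X = np.vstack([features(q) for q in queries])
labels, centers = kmeans(X, k=3)
```

Queries with identical trait vectors necessarily land in the same cluster; the value of the approach in the paper is that clusters learned this way can be mapped to intent classes and then applied to new queries in real time.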

Relevance: 10.00%

Abstract:

This paper reports results from a study exploring the multimedia search functionality of Chinese language search engines. Web searching in Chinese (Mandarin) is a growing research area and a technical challenge for popular commercial Web search engines. Few studies have been conducted on Chinese language search engines. We investigate two research questions: which Chinese language search engines provide multimedia searching, and what multimedia search functionalities are available in Chinese language Web search engines. Specifically, we examine each Web search engine's (1) features permitting Chinese language multimedia searches, (2) extent of search personalization and user control of multimedia search variables, and (3) the relationships between Web search engines and their features in the Chinese context. Key findings show that Chinese language Web search engines offer limited multimedia search functionality, and general search engines provide a wider range of features than specialized multimedia search engines. Study results have implications for Chinese Web users, Website designers and Web search engine developers. © 2009 Elsevier Ltd. All rights reserved.

Relevance: 10.00%

Abstract:

Purpose - The web is now a significant component of the recruitment and job search process. However, very little is known about how companies and job seekers use the web, and the ultimate effectiveness of this process. The specific research questions guiding this study are: how do people search for job-related information on the web? How effective are these searches? And how likely are job seekers to find an appropriate job posting or application? Design/methodology/approach - The data used to examine these questions come from job seekers submitting job-related queries to a major web search engine at three points in time over a five-year period. Findings - Results indicate that individuals seeking job information generally submit only one query with several terms and over 45 percent of job-seeking queries contain a specific location reference. Of the documents retrieved, findings suggest that only 52 percent are relevant and only 40 percent of job-specific searches retrieve job postings. Research limitations/implications - This study provides an important contribution to web research and online recruiting literature. The data come from actual web searches, providing a realistic glimpse into how job seekers are actually using the web. Practical implications - The results of this research can assist organizations in seeking to use the web as part of their recruiting efforts, in designing corporate recruiting web sites, and in developing web systems to support job seeking and recruiting. Originality/value - This research is one of the first studies to investigate job searching on the web using longitudinal real world data. © Emerald Group Publishing Limited.

Relevance: 10.00%

Abstract:

Metasearch engines are an intuitive method for improving the performance of Web search by increasing coverage, returning large numbers of results with a focus on relevance, and presenting alternative views of information needs. However, the use of metasearch engines in an operational environment is not well understood. In this study, we investigate the usage of Dogpile.com, a major Web metasearch engine, with the aim of discovering how Web searchers interact with metasearch engines. We report results examining 2,465,145 interactions from 534,507 users of Dogpile.com on May 6, 2005 and compare these results with findings from other Web searching studies. We collect data on geographical location of searchers, use of system feedback, content selection, sessions, queries, and term usage. Findings show that Dogpile.com searchers are mainly from the USA (84% of searchers), use about 3 terms per query (mean = 2.85), implement system feedback moderately (8.4% of users), and generally (56% of users) spend less than one minute interacting with the Web search engine. Overall, metasearchers seem to have higher degrees of interaction than searchers on non-metasearch engines, but their sessions are for a shorter period of time. These aspects of metasearching may be what define the differences from other forms of Web searching. We discuss the implications of our findings in relation to metasearch for Web searchers, search engines, and content providers.

Relevance: 10.00%

Abstract:

Detecting query reformulations within a session by a Web searcher is an important area of research for designing more helpful searching systems and targeting content to particular users. Methods explored by other researchers include both qualitative (i.e., the use of human judges to manually analyze query patterns on usually small samples) and nondeterministic algorithms, typically using large amounts of training data to predict query modification during sessions. In this article, we explore three alternative methods for detection of session boundaries. All three methods are computationally straightforward and therefore easily implemented for detection of session changes. We examine 2,465,145 interactions from 534,507 users of Dogpile.com on May 6, 2005. We compare session analysis using (a) Internet Protocol address and cookie; (b) Internet Protocol address, cookie, and a temporal limit on intrasession interactions; and (c) Internet Protocol address, cookie, and query reformulation patterns. Overall, our analysis shows that defining sessions by query reformulation along with Internet Protocol address and cookie provides the best measure, resulting in an 82% increase in the count of sessions. Regardless of the method used, the mean session length was fewer than three queries, and the mean session duration was less than 30 min. Searchers most often modified their query by changing query terms (nearly 23% of all query modifications) rather than adding or deleting terms. Implications are that for measuring searching traffic, unique sessions may be a better indicator than the common metric of unique visitors. This research also sheds light on the more complex aspects of Web searching involving query modifications and may lead to advances in searching tools.
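Method (c), which combines Internet Protocol address and cookie with query reformulation, can be sketched as follows. The log records, the 30-minute gap, and the naive term-overlap test standing in for reformulation detection are all illustrative assumptions; the article's reformulation analysis is more nuanced.

```python
from datetime import datetime, timedelta

# Hypothetical log records: (ip, cookie, timestamp, query).
log = [
    ("1.2.3.4", "c1", datetime(2005, 5, 6, 10, 0), "java tutorial"),
    ("1.2.3.4", "c1", datetime(2005, 5, 6, 10, 2), "java tutorial beginners"),
    ("1.2.3.4", "c1", datetime(2005, 5, 6, 14, 0), "cheap hotels paris"),
]

def same_topic(q1, q2):
    """Crude reformulation test: queries sharing any term continue one session."""
    return bool(set(q1.split()) & set(q2.split()))

def count_sessions(log, gap=timedelta(minutes=30)):
    """Sessions keyed by IP + cookie; a new session starts on a long
    temporal gap or an unrelated query (a topic shift)."""
    sessions = 0
    prev = {}  # (ip, cookie) -> (last timestamp, last query)
    for ip, cookie, ts, query in log:
        key = (ip, cookie)
        if key not in prev:
            sessions += 1
        else:
            last_ts, last_q = prev[key]
            if ts - last_ts > gap or not same_topic(last_q, query):
                sessions += 1
        prev[key] = (ts, query)
    return sessions
```

On this toy log, IP and cookie alone would yield one session, while adding the gap and topic-shift tests splits the unrelated afternoon query into a second session, mirroring the session-count increase the study reports.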

Relevance: 10.00%

Abstract:

[Quality Management in Construction Projects by Abdul Razzak Rumane, CRC Press, Boca Raton, FL, 2011, 434 pp, ISBN 9781439838716] Issues of quality management, quality control and performance against specification have long been the focus of various business sectors. Recently there has been an additional drive to achieve the continuous improvement and customer satisfaction promised by the 20th-century ‘gurus’ some six or seven decades ago. The engineering and construction industries have generally taken somewhat longer than their counterparts in the manufacturing, service and production sectors to achieve these espoused levels of quality. The construction and engineering sectors stand to realize major rewards from better managing quality in projects. More effort is being put into instructing future participants in the industry as well as assisting existing professionals. This book comes at an opportune time.