Abstract:
Reviews of teacher education emphasise the need for preservice teachers to have more school-based experiences. In this study, a school-based experience was organised within a nine-week science curriculum university unit that provided preservice teachers with repeated experiences in teaching primary science. This research uses a survey, a questionnaire with extended written responses, and researcher observations to investigate preservice teachers’ (n=38) learning experiences in two school settings. Survey results indicated that the majority of these preservice teachers either agreed or strongly agreed that school-based experiences developed their: personal-professional skills (100%); system requirements (81-100%); teaching practices (81-100%); student behaviour management (94-100%); providing student feedback (89-94%); and reflection on practice (92-100%). Qualitative data provided insights into their development, particularly for science content knowledge and receiving positive reinforcement on effective teaching behaviours. According to these preservice teachers, the school-based experiences facilitated “teachable moments – having the knowledge or skills to run with students’ questions or ideas” and allowed preservice teachers to “critically reflect between groups to make the task flow better”. Embedding school-based experiences needs to be part of each and every preservice teacher education unit so preservice teachers can develop confidence, knowledge and skills within authentic school contexts.
Abstract:
The XML Document Mining track was launched to explore two main ideas: (1) identifying key problems and new challenges of the emerging field of mining semi-structured documents, and (2) studying and assessing the potential of Machine Learning (ML) techniques for dealing with generic ML tasks in the structured domain, i.e., classification and clustering of semi-structured documents. The track ran for six editions, during INEX 2005, 2006, 2007, 2008, 2009 and 2010. The first five editions have been summarized in earlier reports; here we focus on the 2010 edition. INEX 2010 included two tasks in the XML Mining track: (1) an unsupervised clustering task and (2) a semi-supervised classification task in which documents are organized in a graph. The clustering task requires participants to group the documents into clusters, without any knowledge of category labels, using an unsupervised learning algorithm. The classification task, on the other hand, requires participants to label the documents in the dataset with known categories using a supervised learning algorithm and a training set. This report gives the details of the clustering and classification tasks.
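The two task setups can be illustrated with a minimal sketch using generic bag-of-words features; the toy documents and labels below are placeholders, not the INEX 2010 corpus, its graph structure, or its evaluation protocol.

```python
# Minimal sketch of the two XML Mining track task setups (hypothetical toy data,
# not the INEX 2010 corpus or its graph structure).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

docs = ["wiki article about sport", "wiki article about music",
        "match report football", "album review rock band"]
labels = [0, 1, 0, 1]  # known categories, used only by the classification task

X = TfidfVectorizer().fit_transform(docs)

# (1) Unsupervised clustering: group documents without any category labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# (2) Supervised classification: learn categories from a labelled training set,
# then predict labels for unseen documents.
clf = LinearSVC().fit(X[:3], labels[:3])
predicted = clf.predict(X[3:])
print(clusters, predicted)
```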
Abstract:
Mainstream representations of trans people typically run the gamut from victim to mentally ill and are almost always articulated by non-trans voices. The era of user-generated digital content and participatory culture has heralded unprecedented opportunities for trans people who wish to speak their own stories in public spaces. Digital Storytelling, as an easily accessible autobiographical audio-visual form, offers scope to play with multi-dimensional and ambiguous representations of identity that contest mainstream assumptions of what it is to be ‘male’ or ‘female’. Also, unlike mainstream media forms, online and viral distribution of Digital Stories offers potential to reach a wide range of audiences, which is appealing to activist-oriented storytellers who wish to confront social prejudices. However, with these newfound possibilities come concerns regarding visibility and privacy, especially for storytellers who are all too aware of the risks of being ‘out’ as trans. This paper explores these issues from the perspective of three trans storytellers, with reference to the Digital Stories they have created and shared online and on DVD. These exemplars are contextualised with some popular and scholarly perspectives on trans representation, in particular embodied and performed identity. It is contended that trans Digital Stories, while appearing in some ways to be quite conventional, actually challenge common notions of gender identity in ways that are both radical and transformative.
Abstract:
Traditionally, transport disadvantage has been identified using accessibility analysis, although the effectiveness of the accessibility planning approach to improving access to goods and services is not known. This paper undertakes a comparative assessment of measures of mobility, accessibility, and participation used to identify transport disadvantage using the concept of activity spaces. Seven-day activity-travel diary data for 89 individuals were collected from two case study areas located in rural Northern Ireland. A spatial analysis was conducted to select the case study areas using criteria derived from the literature. The criteria relate to the levels of area accessibility and area mobility, which are known to influence the nature of transport disadvantage. Using the activity-travel diary data, individuals’ weekly as well as day-to-day variations in activity-travel patterns were visualised. A model was developed using the ArcGIS ModelBuilder tool and was run to derive scores related to individual levels of mobility, accessibility, and participation in activities from the geovisualisation. Using these scores, a multiple regression analysis was conducted to identify patterns of transport disadvantage. This study found a positive association between mobility and accessibility, between mobility and participation, and between accessibility and participation in activities. However, area accessibility and area mobility were found to have little impact on individual mobility, accessibility, and participation in activities. Income vis-à-vis car ownership was found to have a significant impact on individual levels of mobility and accessibility, whereas participation in activities was found to be a function of individuals’ income and occupational status.
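The final analytical step can be sketched as follows, assuming the geovisualisation has already produced per-person mobility, accessibility, and participation scores; the variable names and values are illustrative placeholders, not the study's activity-travel diary data.

```python
# Illustrative sketch of the multiple regression step; scores and covariates are
# synthetic placeholders, not the study's actual data.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "participation": [12, 8, 15, 6, 10, 9, 14, 7],    # activities per week
    "mobility":      [35, 20, 50, 15, 30, 25, 45, 18],  # e.g. weekly km travelled
    "accessibility": [0.8, 0.5, 0.9, 0.4, 0.7, 0.6, 0.85, 0.45],
    "income":        [1, 0, 1, 0, 1, 0, 1, 0],          # 1 = above median income
    "car_owner":     [1, 0, 1, 0, 1, 1, 1, 0],
})

# Regress participation on mobility, accessibility and socio-economic covariates,
# mirroring the analytical structure described in the abstract.
model = smf.ols("participation ~ mobility + accessibility + income + car_owner",
                data=data).fit()
print(model.summary())
```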
Abstract:
The economic environment of today can be characterized as highly dynamic and competitive, if not in constant flux. Globalization and the Information Technology (IT) revolution are perhaps the main contributing factors to this observation. While companies have to some extent adapted to the current business environment, new pressures such as the recent increase in environmental awareness and its likely effects on regulations are underway. Hence, in the light of market and competitive pressures, companies must constantly evaluate and, if necessary, update their strategies to sustain and increase the value they create for shareholders (Hunt and Morgan, 1995; Christopher and Towill, 2002). One way to create greater value is to become more efficient in producing and delivering goods and services to customers, which can lead to a strategy known as cost leadership (Porter, 1980). Even though Porter (1996) notes that in the long run cost leadership may not be a sufficient strategy for competitive advantage, operational efficiency is certainly necessary and should therefore be on the agenda of every company.

Better workflow management, technology, and resource utilization can lead to greater internal operational efficiency, which explains why, for example, many companies have recently adopted Enterprise Resource Planning (ERP) systems: integrated software packages that streamline business processes. However, as more and more companies approach internal operational excellence, the focus for finding inefficiencies and cost-saving opportunities is moving beyond the boundaries of the firm. Today many firms in the supply chain are engaging in collaborative relationships with customers, suppliers, and third parties (services) in an attempt to cut down on costs related to, for example, inventory and production, as well as to facilitate synergies. Thus, recent years have witnessed fluidity and blurring regarding organizational boundaries (Coad and Cullen, 2006).

The Information Technology (IT) revolution of the late 1990s has played an important role in bringing organizations closer together. In their efforts to become more efficient, companies first integrated their information systems to speed up transactions such as ordering and billing. Later, collaboration on a multidimensional scale including logistics, production, and Research & Development became evident as companies expected substantial benefits from collaboration. However, one could also argue that the recent popularity of the concepts falling under Supply Chain Management (SCM), such as Vendor Managed Inventory, Collaborative Planning, Replenishment, and Forecasting, owes to the marketing efforts of software vendors and consultants who provide these solutions. Nevertheless, reports from professional organizations as well as academia indicate that the trend towards interorganizational collaboration is gaining wider ground. For example, the ARC Advisory Group, a research organization on supply chain solutions, estimated that the market for SCM, which includes various kinds of collaboration tools and related services, would grow at an annual rate of 7.4% during the years 2004-2008, reaching $7.4 billion in 2008 (Engineeringtalk 2004).
Abstract:
The Reporting and Reception of Indigenous Issues in the Australian Media was a three-year project financed by the Australian government through its Australian Research Council Large Grants Scheme and run by Professor John Hartley (of Murdoch and then Edith Cowan University, Western Australia). The purpose of the research was to map the ways in which indigeneity was constructed and circulated in Australia's mediasphere. The analysis of the 'reporting' element of the project was almost straightforward: a mixture of content analysis of a large number of items in the media, and detailed textual analysis of a smaller number of key texts. The discoveries were interesting - that when analysis approaches the media as a whole, rather than focussing exclusively on news or serious drama genres, then representation of indigeneity is not nearly as homogenous as has previously been assumed. And if researchers do not explicitly set out to uncover racism in every text, it is by no means guaranteed they will find it [1]. The question of how to approach the 'reception' of these issues - and particularly reception by indigenous Australians - proved to be a far more challenging one. In attempting to research this area, Hartley and I (working as a research assistant on the project) often found ourselves hampered by the axioms that underlie much media research. Traditionally, the 'reception' of media by indigenous people in Australia has been researched in ethnographic ways. This research repeatedly discovers that indigenous people in Australia are powerless in the face of new forms of media. Indigenous populations are represented as victims of aggressive and powerful intrusions: ‘What happens when a remote community is suddenly inundated by broadcast TV?’; ‘Overnight they will go from having no radio and television to being bombarded by three TV channels’; ‘The influence of film in an isolated, traditionally oriented Aboriginal community’ [2]. This language of ‘influence’, ‘bombarded’, and ‘inundated’ presents metaphors not just of war but of a war being lost. It tells of an unequal struggle, of a more powerful force impinging upon a weaker one. What else could be the relationship of an Aboriginal audience to something which is ‘bombarding’ them? Or by which they are ‘inundated’? This attitude might best be summed up by the title of an article by Elihu Katz: ‘Can authentic cultures survive new media?’ [3]. In such writing, there is little sense that what is being addressed might be seen as a series of discursive encounters, negotiations and acts of meaning-making in which indigenous people — communities and audiences — might be productive. Certainly, the points of concern in this type of writing are important. The question of what happens when a new communication medium is summarily introduced to a culture is certainly an important one. But the language used to describe this interaction is misleading. And it is noticeable that such writing is fascinated with the relationship of only traditionally-oriented Aboriginal communities to the media of mass communication.
Abstract:
This paper proposes a novel approach for identifying risks in executable business processes and detecting them at run time. The approach considers risks in all phases of the business process management lifecycle, and is realized via a distributed, sensor-based architecture. At design time, sensors are defined to specify risk conditions which, when fulfilled, indicate that a fault is likely to occur. Both historical and current execution data can be used to compose such conditions. At run time, each sensor independently notifies a sensor manager when a risk is detected. In turn, the sensor manager interacts with the monitoring component of a process automation suite to present the results to the user, who may take remedial action. The proposed architecture has been implemented in the YAWL system and its performance has been evaluated in practice.
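The sensor/sensor-manager interaction described above can be sketched in a few lines; the class and method names below are hypothetical illustrations of the pattern, not the actual YAWL components or their API.

```python
# Illustrative sketch of the distributed, sensor-based risk-detection pattern.
# Names (RiskSensor, SensorManager) are hypothetical, not the YAWL implementation.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class RiskSensor:
    name: str
    condition: Callable[[Dict], bool]  # risk condition over execution data

    def check(self, execution_data: Dict) -> bool:
        return self.condition(execution_data)

@dataclass
class SensorManager:
    notifications: List[str] = field(default_factory=list)

    def notify(self, sensor: RiskSensor, case_id: str) -> None:
        # In the architecture described above, this is where the manager would
        # push the alert to the monitoring component of a process automation suite.
        self.notifications.append(f"Risk '{sensor.name}' detected in case {case_id}")

manager = SensorManager()
overrun = RiskSensor("order overrun",
                     lambda d: d["elapsed_hours"] > d["max_hours"])

# At run time each sensor independently evaluates its condition against current
# (and possibly historical) execution data and, if it holds, notifies the manager.
case_data = {"elapsed_hours": 30, "max_hours": 24}
if overrun.check(case_data):
    manager.notify(overrun, case_id="case-42")
print(manager.notifications)
```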
Abstract:
Due to the limitations of current condition monitoring technologies, estimates of asset health states may contain some uncertainty. A maintenance strategy that ignores this uncertainty in the asset health state can cause additional costs or downtime. The partially observable Markov decision process (POMDP) is a commonly used approach to derive optimal maintenance strategies when asset health inspections are imperfect. However, existing applications of the POMDP to maintenance decision-making largely adopt discrete-time and discrete-state assumptions. The discrete-time assumption requires that health state transitions and maintenance activities happen only at discrete epochs, which cannot model the failure time accurately and is not cost-effective. The discrete health state assumption, on the other hand, may not be elaborate enough to improve the effectiveness of maintenance. To address these limitations, this paper proposes a continuous-state partially observable semi-Markov decision process (POSMDP). An algorithm that combines the Monte Carlo-based density projection method and policy iteration is developed to solve the POSMDP. Different types of maintenance activities (i.e., inspections, replacement, and imperfect maintenance) are considered in this paper. The next maintenance action and the corresponding waiting durations are jointly optimized with respect to the long-run expected cost per unit time and availability. Simulation studies show that the proposed maintenance optimization approach is more cost-effective than maintenance strategies derived by two other approximate methods when regular inspection intervals are adopted. The simulation studies also show that the maintenance cost can be further reduced by developing maintenance strategies with state-dependent maintenance intervals using the POSMDP. In addition, during the simulation studies the proposed POSMDP showed the ability to adopt a cost-effective strategy structure when multiple types of maintenance activities are involved.
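The long-run expected cost per unit time of a candidate maintenance policy can be estimated by renewal-reward simulation; the toy sketch below assumes an exponential failure time and a simple periodic, imperfect inspection policy, and is not the continuous-state POSMDP or density-projection solver proposed in the paper.

```python
# Toy Monte Carlo estimate of long-run cost per unit time for a simple
# inspect-and-replace policy. Simplified illustration only; not the POSMDP solver.
import random

def simulate_cycle(inspect_interval, fail_rate=0.05,
                   c_inspect=10.0, c_replace=100.0, c_failure=500.0):
    """Simulate one renewal cycle: inspect periodically, replace preventively
    if a noisy inspection flags degradation, pay a penalty on failure."""
    failure_time = random.expovariate(fail_rate)   # hidden true failure time
    t, cost = 0.0, 0.0
    while True:
        t += inspect_interval
        if t >= failure_time:                      # failed before this inspection
            return cost + c_failure + c_replace, failure_time
        cost += c_inspect
        # Imperfect observation: detection probability grows with asset age.
        p_detect = min(0.9, t / failure_time)
        if random.random() < p_detect:             # preventive replacement
            return cost + c_replace, t

def long_run_cost_rate(inspect_interval, n_cycles=20000):
    cycles = [simulate_cycle(inspect_interval) for _ in range(n_cycles)]
    total_cost = sum(c for c, _ in cycles)
    total_time = sum(t for _, t in cycles)
    return total_cost / total_time                 # renewal-reward estimate

for interval in (2.0, 5.0, 10.0):
    print(interval, round(long_run_cost_rate(interval), 2))
```

Comparing such cost rates across candidate inspection intervals mirrors, in a very reduced form, the kind of trade-off the paper's state-dependent policies optimize.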
Abstract:
As regional and continental carbon balances of terrestrial ecosystems become available, it becomes clear that soils are the largest source of uncertainty. Repeated inventories of soil organic carbon (SOC) organized in soil monitoring networks (SMN) are being implemented in a number of countries. This paper reviews the concepts and design of SMNs in ten countries, and discusses the contribution of such networks to reducing the uncertainty of soil carbon balances. Some SMNs are designed to estimate country-specific land use or management effects on SOC stocks, while others collect soil carbon and ancillary data to provide a nationally consistent assessment of soil carbon condition across the major land-use/soil type combinations. The former use a single sampling campaign of paired sites, while for the latter both systematic (usually grid based) and stratified repeated sampling campaigns (5–10 year intervals) are used, with densities of one site per 10–1,040 km². For paired sites, multiple samples are taken at each site to allow statistical analysis, while for single sites composite samples are taken. In both cases, fixed depth increments together with samples for bulk density and stone content are recommended. Samples should be archived to allow for re-measurement using updated techniques. Information on land management, and where possible land use history, should be systematically recorded for each site. A case study of the agricultural frontier in Brazil is presented in which land use effect factors are calculated in order to quantify the CO2 fluxes from national land use/management conversion matrices. Process-based SOC models can be run for the individual points of the SMN, provided detailed land management records are available. Such studies are still rare, as most SMNs have been implemented recently or are in progress. Examples from the USA and Belgium show that uncertainties in SOC change range from 1.6–6.5 Mg C ha⁻¹ for the prediction of SOC stock changes on individual sites to 11.72 Mg C ha⁻¹, or 34% of the median SOC change, for soil/land use/climate units. For national SOC monitoring, stratified sampling appears to be the most straightforward way to attribute SOC values to units with similar soil/land use/climate conditions (i.e. a spatially implicit upscaling approach).
Keywords: Soil monitoring networks; Soil organic carbon; Modeling; Sampling design
Abstract:
The Electrocardiogram (ECG) is an important bio-signal representing the sum total of millions of cardiac cell depolarization potentials. It contains important insight into the state of health and the nature of the disease afflicting the heart. Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. The HRV signal can be used as a base signal to observe the heart's functioning. Because these signals are non-linear and non-stationary in nature, higher order spectral (HOS) analysis, which is better suited to non-linear systems and is robust to noise, was used. An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, we extracted seven features from the heart rate signals using HOS and fed them to a support vector machine (SVM) for classification. Our performance evaluation protocol uses 330 subjects covering five different kinds of cardiac disease conditions. We demonstrate a sensitivity of 90% for the classifier with a specificity of 87.93%. Our system is ready to run on larger data sets.
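The classification stage can be sketched as below, assuming the seven HOS features have already been extracted per subject; the feature matrix here is random placeholder data (with a binary normal/abnormal simplification), not features computed from real HRV recordings, and the HOS extraction itself is not reproduced.

```python
# Sketch of the SVM classification stage only; the seven "HOS features" are
# simulated placeholders, not features extracted from real HRV signals.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_subjects, n_features = 330, 7
X = rng.normal(size=(n_subjects, n_features))   # stand-in for HOS feature vectors
y = rng.integers(0, 2, size=n_subjects)         # 0 = normal, 1 = abnormal (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```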
Abstract:
In Australia, trials conducted as 'electronic trials' have ordinarily run with the assistance of commercial service providers, with the associated costs being borne by the parties. However, an innovative approach has been taken by the courts in Queensland. In October 2007 Queensland became the first Australian jurisdiction to develop its own court-provided technology to facilitate the conduct of an electronic trial. This technology was first used in the conduct of civil trials. The use of the technology in the civil sphere highlighted its benefits and, more significantly, demonstrated the potential to achieve much greater efficiencies. The Queensland courts have now gone further, using the court-provided technology in the high-profile criminal trial of R v Hargraves, Hargraves and Stoten, in which the three accused were tried for conspiracy to defraud the Commonwealth of Australia of about $3.7 million in tax. This paper explains the technology employed in this case and reports on the perspectives of all of the participants in the process. The representatives for all parties involved in this trial acknowledged, without reservation, that the use of the technology at trial produced considerable overall efficiencies and cost savings. The experience in this trial also demonstrates that the benefits of trial technology for the criminal justice process are greater than those for civil litigation. It shows that, when skilfully employed, trial technology presents opportunities to enhance the fairness of trials for accused persons. The paper urges governments, courts and the judiciary in all jurisdictions to continue their efforts to promote change, and to introduce mechanisms to facilitate a shift from the entrenched paper-based approach to both criminal and civil procedure to one which embraces more broadly the enormous benefits trial technology has to offer.
Abstract:
The number of children with special health care needs surviving infancy and attending school has been increasing. Due to their health status, these children may be at risk of low social-emotional and learning competencies (e.g., Lightfoot, Mukherjee, & Sloper, 2000; Zehnder, Landolt, Prchal, & Vollrath, 2006). Early social problems have been linked to low levels of academic achievement (Ladd, 2005), inappropriate behaviours at school (Shiu, 2001) and strained teacher-child relationships (Blumberg, Carle, O'Connor, Moore, & Lippmann, 2008). Early learning difficulties have been associated with mental health problems (Maughan, Rowe, Loeber, & Stouthamer-Loeber, 2003), increased behaviour issues (Arnold, 1997), delinquency (Loeber & Dishion, 1983) and later academic failure (Epstein, 2008). Considering the importance of these areas, the limited research on special health care needs in the social-emotional and learning domains is a factor driving this research. The purpose of the current research is to investigate social-emotional and learning competence in the early years for Australian children who have special health care needs.
The data which informed this thesis were from Growing up in Australia: The Longitudinal Study of Australian Children. This is a national, longitudinal study being conducted by the Commonwealth Department of Families, Housing, Community Services and Indigenous Affairs. The study has a nationally representative sample, with data collection occurring biennially, in 2004 (Wave 1), 2006 (Wave 2) and 2008 (Wave 3). Growing up in Australia uses a cross-sequential research design involving two cohorts, an Infant Cohort (0-1 at recruitment) and a Kindergarten Cohort (4-5 at recruitment). This study uses the Kindergarten Cohort, for which there were 4,983 children at recruitment. Three studies were conducted to address the objectives of this thesis.
Study 1 used Wave 1 data to identify and describe Australian children with special health care needs. Children identified as having special health care needs through the special health care needs screener were selected. From this, descriptive analyses were run. The results indicate that boys, children with low birth weight and children from families with low levels of maternal education are likely to be in the population of children with special health care needs. Further, these children are likely to be using prescription medications, have poor general health and are likely to have specific condition diagnoses.
Study 2 used Wave 1 data to examine differences between children with special health care needs and their peers in social-emotional competence and learning competence prior to school. Children identified by the special health care needs screener were chosen for the case group (n = 650). A matched control group of peers (n = 650), matched on sex, cultural and linguistic diversity, family socioeconomic position and age, formed the comparison group. Social-emotional competence was measured through Social/Emotional Domain scores taken from the Growing up in Australia Outcome Index, with learning competence measured through Learning Domain scores. Results suggest statistically significant differences in scores between the two groups. Children with special health care needs have lower levels of social-emotional and learning competence prior to school compared to their peers.
Study 3 used Wave 1 and Wave 2 data to examine the relationship between special health care needs at Wave 1 and social-emotional competence and learning competence at Wave 2, as children started school. The sample for this study consisted of children in the Kindergarten Cohort who had teacher data at Wave 2. Results from multiple regression models indicate that special health care needs prior to school (Wave 1) significantly predict social-emotional competence and learning competence in the early years of school (Wave 2). These results indicate that having special health care needs prior to school is a risk factor for the social-emotional and learning domains in the early years of school. The results from these studies give valuable insight into Australian children with special health care needs and their social-emotional and learning competence in the early years. The Australian population of children with special health care needs was primarily male children from families with low maternal education, who were likely to be in poor health and taking prescription medications. It was found that children with special health care needs were likely to have lower social-emotional competence and learning competence prior to school compared to their peers. Results indicate that special health care needs prior to school were predictive of lower social-emotional and learning competencies in the early years of school. More research is required into this unique population and their competencies over time. However, the current research provides valuable insight into an under-researched 'at risk' population.
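The kind of model Study 3 describes can be sketched with a synthetic stand-in dataset; the variable names and coefficients below are illustrative assumptions, not the LSAC data or the thesis results.

```python
# Illustrative regression for Study 3's design: Wave 1 special health care needs
# status predicting a Wave 2 competence score. All data are synthetic; this is
# not the Growing up in Australia (LSAC) dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
shcn_w1 = rng.integers(0, 2, size=n)        # 1 = special health care needs at Wave 1
sep = rng.normal(size=n)                    # family socioeconomic position (covariate)
learning_w2 = 100 - 4 * shcn_w1 + 3 * sep + rng.normal(scale=8, size=n)

df = pd.DataFrame({"learning_w2": learning_w2, "shcn_w1": shcn_w1, "sep": sep})
model = smf.ols("learning_w2 ~ shcn_w1 + sep", data=df).fit()
print(model.params)                         # expect a negative shcn_w1 coefficient
```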
Abstract:
In team sports such as rugby union, a myriad of decisions and actions occur within the boundaries that compose the performance perceptual-motor workspace. The way that these performance boundaries constrain decision making and action has recently interested researchers and has involved developing an understanding of the concept of constraints. Considering team sports as complex dynamical systems signifies that they are composed of multiple, independent agents (i.e. individual players) whose interactions are highly integrated. This level of complexity is characterized by the multiple ways that players on a rugby field can interact. It affords the emergence of rich patterns of behaviour, such as rucks, mauls, and collective tactical actions that emerge due to players’ adjustments to dynamically varying competition environments. During performance, the decisions and actions of each player are constrained by multiple causes (e.g. technical and tactical skills, emotional states, plans, thoughts, etc.) that generate multiple effects (e.g. to run or pass, to move forward to tackle or maintain position and drive the opponent to the line), a prime feature in a complex systems approach to team games performance (Bar-Yam, 2004). To establish a bridge between the complexity sciences and learning design in team sports like rugby union, the aim of practice sessions is to prepare players to pick up and explore the information available in the multiple constraints (i.e. the causes) that influence performance. Therefore, learning design in training sessions should be soundly based on the interactions amongst players (i.e. teammates and opponents) that will occur in rugby matches. To improve individual and collective decision making in rugby union, Passos and colleagues proposed in previous work a performer-environment interaction-based approach rather than a traditional performer-based approach (Passos, Araújo, Davids & Shuttleworth, 2008).
Abstract:
The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous time Markov chain. When the solution is needed after a long run or when the convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
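A compact sketch of standard uniformization for a small continuous time Markov chain is given below; it uses exact matrix-vector products throughout, so the relaxation strategy for inexact products proposed in the paper is not implemented, and the two-state generator is only a toy example.

```python
# Standard uniformization for the transient distribution of a small CTMC.
# Exact matrix-vector products only; the paper's inexact-product relaxation
# strategy is not implemented here.
import numpy as np

def uniformization(Q, p0, t, tol=1e-10):
    """Return p(t) = p0 * exp(Q t) for generator Q (rows sum to zero)."""
    lam = max(-Q.diagonal())                  # uniformization rate Lambda
    P = np.eye(Q.shape[0]) + Q / lam          # DTMC kernel, entries in [0, 1]
    term = p0.copy()                          # p0 * P^k, updated iteratively
    weight = np.exp(-lam * t)                 # Poisson weight e^{-Lt}(Lt)^k / k!
    result = weight * term
    k, remaining = 0, 1.0 - weight            # remaining Poisson tail mass
    while remaining > tol:
        k += 1
        term = term @ P                       # the matrix-vector product per step
        weight *= lam * t / k
        result += weight * term
        remaining -= weight
    return result

# Two-state example: rate 1.0 from state 0 to 1, rate 0.5 back.
Q = np.array([[-1.0, 1.0],
              [0.5, -0.5]])
p0 = np.array([1.0, 0.0])
print(uniformization(Q, p0, t=2.0))           # transient distribution at time t
```

For a large time horizon t the Poisson series needs many terms, which is exactly the regime where the number of matrix-vector products, and hence the savings from computing them only approximately, becomes significant.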