946 results for Optimal reactive dispatch problem


Relevance:

20.00%

Publisher:

Abstract:

This paper demonstrates, following Vygotsky, that language and tool use have a critical role in the collaborative problem-solving behaviour of school-age children. It reports original ethnographic classroom research examining the convergence of speech and practical activity in children's collaborative problem solving with robotics programming tasks. The researchers analysed children's interactions during a series of problem-solving experiments in which Lego Mindstorms toolsets were used by teachers to create robotics design challenges for 24 students in a Year 4 Australian classroom (students aged 8.5–9.5 years). The design challenges were incrementally difficult, beginning with basic programming of straight-line movement and progressing to more complex challenges that involved programming the robots to raise Lego figures from conduit pipes, using the robots as pulleys together with string and recycled materials. Data collection involved micro-genetic analysis of students' speech interactions with tools, peers and other experts, teacher interviews, and student focus groups. By coding the repeated patterns in the transcripts, the authors outline the structure of the children's social speech in joint problem solving, demonstrating the patterns of speech and interaction that play an important role in the socialisation of the school-age child's practical intellect.

Relevance:

20.00%

Publisher:

Abstract:

This investigation demonstrates the need for thermal treatment of seawater-neutralised red mud (SWNRM) in order to obtain reasonable adsorption of Reactive Blue 19 dye (RB 19). Thermal treatment produces a greater surface area and hence an increased adsorption capacity, owing to the larger number of available adsorption sites. Adsorption of RB 19 was found to be best achieved under acidic conditions using SWNRM400 (SWNRM heated to 400 °C), with an adsorption capacity of 416.7 mg/g compared with 250.0 mg/g for untreated SWNRM. Kinetic studies indicate that a pseudo-second-order reaction mechanism is responsible for the adsorption of RB 19 on SWNRM, which suggests that adsorption occurs through electrostatic interactions.
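
For reference, the pseudo-second-order model invoked above is commonly written in the following differential and linearised integrated forms (standard textbook expressions, not taken from the paper), where q_t and q_e are the amounts adsorbed (mg/g) at time t and at equilibrium, and k_2 is the rate constant:

```latex
\frac{\mathrm{d}q_t}{\mathrm{d}t} = k_2\,(q_e - q_t)^{2},
\qquad
\frac{t}{q_t} = \frac{1}{k_2\,q_e^{2}} + \frac{t}{q_e}.
```

A near-linear plot of t/q_t against t is the usual diagnostic for this mechanism.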

Relevance:

20.00%

Publisher:

Abstract:

Successive alkalinity producing systems (SAPSs) are widely used for treating acid mine drainage (AMD) and for alleviating the clogging that commonly occurs in limestone systems due to amorphous ferric precipitates. In this study, iron dust, bone char, micrite and their admixtures were used to treat arsenic-containing AMD. Particular attention was devoted to arsenic removal performance, mineralogical constraints on arsenic retention, and permeability variation during a 140-day column experiment. The results showed that arsenic removal capacity followed the sequence bone char > micrite > iron dust. Columns combining 20% v/v iron dust with 80% v/v bone char or micrite achieved better hydraulic conductivity and phosphorus-retention capacity than single micrite or bone char columns. The addition of iron dust created a reductive environment and caused the coating material to transform from a colloidal phase into secondary mineral phases, such as green rust and phosphoferrite, which markedly improved the hydraulic conductivity of the systems. Sequential extraction experiments indicated that the stable fractions of arsenic in the columns were enhanced with the help of iron dust, compared with the single bone char and micrite columns. A combination of iron dust and micrite/bone char therefore represents a potential SAPS for treating As-containing AMD.

Relevance:

20.00%

Publisher:

Abstract:

This paper is devoted to the analysis of career paths and employability. The state of the art on this topic is rather poor in methodologies. Some authors propose distances well adapted to the data but limit their analysis to hierarchical clustering; others apply sophisticated methods, but only after paying the price of transforming the categorical data into continuous data via factorial analysis. The latter approach has an important drawback, since it imposes a linear assumption on the data. We propose a new methodology, inspired by biology and adapted to career paths, that combines optimal matching and self-organizing maps. A complete study on real-life data illustrates our proposal.
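
As a rough illustration of the optimal-matching half of such a methodology (the self-organizing-map step is omitted), the distance between two categorical career sequences can be computed with a standard edit-distance dynamic programme; the cost values below are placeholders rather than the costs used by the authors.

```python
def optimal_matching_distance(seq_a, seq_b, indel=1.0, sub_cost=None):
    """Edit distance between two categorical sequences (career paths).

    seq_a, seq_b : lists of labour-market states, e.g. ["edu", "job", "unemp"].
    indel        : cost of inserting/deleting one state (placeholder value).
    sub_cost     : optional dict {(a, b): cost} of substitution costs;
                   defaults to 2.0 for any mismatch, 0.0 for a match.
    """
    def sub(a, b):
        if a == b:
            return 0.0
        if sub_cost is not None:
            return sub_cost.get((a, b), sub_cost.get((b, a), 2.0))
        return 2.0

    n, m = len(seq_a), len(seq_b)
    # dp[i][j] = cost of aligning the first i states of seq_a with the first j of seq_b
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * indel
    for j in range(1, m + 1):
        dp[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(dp[i - 1][j] + indel,                                # deletion
                           dp[i][j - 1] + indel,                                # insertion
                           dp[i - 1][j - 1] + sub(seq_a[i - 1], seq_b[j - 1]))  # substitution
    return dp[n][m]

# Example: two short career paths
print(optimal_matching_distance(["edu", "job", "job"], ["edu", "unemp", "job"]))
```

The resulting pairwise distance matrix is what would then be fed to the clustering or map-training step.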

Relevance:

20.00%

Publisher:

Abstract:

Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve a controlled trade-off between false alarms and false rejects. Although analysis of text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared to statistically independent decisions, the performance is not always optimal. Given a pool of instances, the best performance with this architecture is obtained for certain combinations of instances. Heuristic rules and diversity measures have commonly been used for classifier selection, but it is shown that optimal performance is achieved under the 'best combination performance' rule. As the search complexity of this rule increases exponentially with the addition of classifiers, a measure, the sequential error ratio (SER), is proposed in this work that is specifically adapted to the characteristics of the sequential fusion architecture. The proposed measure can be used to select the classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM-based speaker models using SER are compared with those of other classifier selection methodologies. SER is shown to achieve near-optimal performance for sequential fusion of multiple instances, with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone- or internet-based access control, and to other systems such as identity verification based on multiple fingerprints or multiple handwriting samples.
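
The 'best combination performance' rule mentioned above amounts to an exhaustive search over subsets of classifiers, which is what makes its cost grow exponentially with the pool size; the SER measure itself is defined in the paper, so only the brute-force baseline it is meant to replace is sketched here, with made-up names.

```python
from itertools import combinations

def best_combination(classifiers, evaluate, k):
    """Exhaustively search all k-classifier combinations and keep the best.

    classifiers : list of classifier identifiers.
    evaluate    : callable returning an error rate for a tuple of classifiers
                  (e.g. fused verification error on a development set) --
                  a hypothetical hook, not an API from the paper.
    k           : number of instances to fuse.

    The number of candidates is C(n, k), so the cost grows rapidly as
    classifiers are added -- the motivation for a cheaper selection
    measure such as the proposed sequential error ratio (SER).
    """
    best, best_err = None, float("inf")
    for combo in combinations(classifiers, k):
        err = evaluate(combo)
        if err < best_err:
            best, best_err = combo, err
    return best, best_err

# Example with a dummy error function over five hypothetical classifier ids
errs = {"c1": 0.08, "c2": 0.05, "c3": 0.09, "c4": 0.04, "c5": 0.07}
print(best_combination(list(errs), lambda c: sum(errs[x] for x in c) / len(c), 3))
```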

Relevance:

20.00%

Publisher:

Abstract:

Plug-in electric vehicles (PEVs) are increasingly popular amid the global trend towards energy saving and environmental protection. However, the uncoordinated charging of numerous PEVs can have significant negative impacts on the secure and economic operation of the power system concerned. In this context, a hierarchical decomposition approach is presented to coordinate the charging/discharging behaviour of PEVs. The objective of the upper-level model is to minimize the total cost of system operation by jointly dispatching generators and electric vehicle aggregators (EVAs). The lower-level model, in turn, aims to strictly follow the dispatch instructions from the upper-level decision-maker by designing appropriate charging/discharging strategies for each individual PEV in a specified dispatch period. Two highly efficient commercial solvers, AMPL/IPOPT and AMPL/CPLEX, are used to solve the two levels of the developed hierarchical decomposition model, respectively. Finally, a modified IEEE 118-bus test system including 6 EVAs is employed to demonstrate the performance of the developed model and method.
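
A deliberately simplified sketch of the lower-level idea (not the actual AMPL models solved with IPOPT and CPLEX): the upper level hands each EVA a charging-power target for a dispatch period, and the EVA splits that target across its PEVs subject to individual charger limits. All field names and numbers are illustrative assumptions.

```python
def lower_level_dispatch(target_kw, pevs):
    """Split an aggregator-level charging target across individual PEVs.

    target_kw : charging power (kW) instructed by the upper-level dispatcher
                for one dispatch period.
    pevs      : list of dicts with 'max_kw' (charger limit) and 'needed_kwh'
                (remaining energy demand) -- illustrative fields only.

    Allocation is proportional to remaining demand, capped by charger limits;
    any power that cannot be absorbed is returned as a deviation the upper
    level would have to re-dispatch.
    """
    total_need = sum(p["needed_kwh"] for p in pevs) or 1.0
    schedule, absorbed = [], 0.0
    for p in pevs:
        share = target_kw * p["needed_kwh"] / total_need
        power = min(share, p["max_kw"])          # respect the charger limit
        schedule.append(power)
        absorbed += power
    return schedule, target_kw - absorbed         # residual to report upstream

# Example: one EVA with three PEVs asked to absorb 15 kW in this period
pevs = [{"max_kw": 7.0, "needed_kwh": 20.0},
        {"max_kw": 3.3, "needed_kwh": 10.0},
        {"max_kw": 7.0, "needed_kwh": 5.0}]
print(lower_level_dispatch(15.0, pevs))
```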

Relevance:

20.00%

Publisher:

Abstract:

A modified Delphi approach was applied in this study to investigate best practice and to determine the factors that contribute to the optimal selection of projects. There are various standards and practices that some may recognise as representing best practice in this area; many of these have similar characteristics, and this study found no single best practice. The study identified the factors that contribute to the optimal selection of projects as: culture, process, knowledge of the business, knowledge of the work, education, experience, governance, risk awareness, selection of players, preconceptions, and time pressures. All of these factors were found to be significant; to apply to public sector organisations, private sector organisations and government-owned corporations; and to have a strong linkage to research on strategic decision making. They can be consolidated into two underlying factors: organisational culture and leadership.

Relevance:

20.00%

Publisher:

Abstract:

In the real world, many problems in networks of networks (NoNs) can be abstracted to a so-called minimum interconnection cut problem, which is fundamentally different from the classical minimum cut problems in graph theory. It is therefore desirable to have an efficient and effective algorithm for the minimum interconnection cut problem. In this paper we formulate the problem in graph-theoretic terms, transform it into a multi-objective, multi-constraint combinatorial optimization problem, and propose a hybrid genetic algorithm (HGA) for it. The HGA is a penalty-based genetic algorithm (GA) that incorporates an effective heuristic procedure to locally optimize the individuals in the GA population. The HGA has been implemented and evaluated experimentally, and the results show that it is both effective and efficient.
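
As a rough structural sketch of what a penalty-based GA with an embedded local-improvement heuristic looks like (the encoding, operators and penalty below are placeholders, not the HGA actually proposed for the minimum interconnection cut problem):

```python
import random

def hybrid_ga(init_pop, fitness, penalty, local_search,
              generations=200, p_mut=0.05):
    """Penalty-based GA with a heuristic local-improvement step.

    init_pop     : list of candidate solutions encoded as bit lists.
    fitness      : objective to minimise (e.g. interconnection cut cost).
    penalty      : returns a non-negative penalty for constraint violations.
    local_search : heuristic that locally improves one individual.
    """
    def score(ind):                      # penalised objective
        return fitness(ind) + penalty(ind)

    pop = [local_search(ind) for ind in init_pop]
    for _ in range(generations):
        # binary tournament selection
        parents = [min(random.sample(pop, 2), key=score) for _ in range(len(pop))]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, len(a))          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(local_search(child))       # memetic improvement step
        pop = sorted(pop + children, key=score)[:len(init_pop)]
    return min(pop, key=score)

# Toy usage: minimise the number of 1s subject to "at least two 1s" (penalised)
pop0 = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
best = hybrid_ga(pop0,
                 fitness=sum,
                 penalty=lambda ind: 100 * max(0, 2 - sum(ind)),
                 local_search=lambda ind: ind)
print(best, sum(best))
```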

Relevance:

20.00%

Publisher:

Abstract:

The Australian government is currently considering options for the rewrite and reform of the current provisions which apply to the taxation of trust income. This article provides a discussion of the current regime and the proposed reforms. It is suggested that a major revamp of taxation of trust income in Australia is problematic and a simpler approach may be to leave the law as is, with modification where necessary to address key issues as and when they arise.

Relevance:

20.00%

Publisher:

Abstract:

Operational modal analysis (OMA) is prevalent in the modal identification of civil structures. It requires response measurements of the underlying structure under ambient loads, and a valid OMA method assumes the excitation to be white noise in time and space. Although there are numerous applications of OMA in the literature, few have investigated the statistical distribution of a measurement and the influence of such randomness on modal identification. This research uses a modified kurtosis index to evaluate the statistical distribution of raw measurement data, and proposes a windowing strategy employing this index to select quality datasets. To demonstrate how the data selection strategy works, ambient vibration measurements of a laboratory bridge model and of a real cable-stayed bridge were considered. The analysis employed frequency domain decomposition (FDD) as the target OMA approach for modal identification, and the modal identification results obtained from data segments with different randomness were compared. The discrepancies in the resulting FDD spectra indicate that, in order to fulfil the assumptions of an OMA method, special care should be taken when processing long vibration measurement records. The proposed data selection strategy is easy to apply and is verified to be effective for modal analysis.
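
A minimal sketch of the kind of kurtosis-based windowing described above: compute an excess-kurtosis index per window and retain only the windows that look near-Gaussian before passing them to FDD. The window length and threshold are illustrative, and the authors' modified kurtosis may differ from the plain sample kurtosis used here.

```python
import numpy as np
from scipy.stats import kurtosis

def select_windows(signal, win_len=4096, threshold=0.5):
    """Return windows of an ambient-vibration record that look near-Gaussian.

    signal    : 1-D array of acceleration samples.
    win_len   : samples per window (illustrative).
    threshold : maximum |excess kurtosis| accepted (illustrative).
    """
    windows = []
    for start in range(0, len(signal) - win_len + 1, win_len):
        seg = signal[start:start + win_len]
        k = kurtosis(seg, fisher=True)      # 0 for a Gaussian distribution
        if abs(k) < threshold:
            windows.append(seg)             # keep only well-behaved segments
    return windows

# Example with synthetic data: white noise plus a short transient burst
rng = np.random.default_rng(0)
sig = rng.standard_normal(20000)
sig[8000:8200] += 10.0                      # spike-like disturbance
print(len(select_windows(sig)))             # windows containing the burst are rejected
```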

Relevance:

20.00%

Publisher:

Abstract:

Tags, or personal metadata for annotating web resources, have been widely adopted on Web 2.0 sites. However, as tags are freely chosen by users, the resulting vocabularies are diverse, ambiguous and sometimes meaningful only to individuals. Tag recommenders can assist users during the tagging process; their objective is to suggest relevant tags as well as to help consolidate the vocabulary used in the system. In this paper we discuss our approach to personalized tag recommendation, which makes use of an existing domain ontology generated from a folksonomy. We specifically evaluated the approach in a sparse-data situation. The evaluation shows that the proposed ontology-based method improves the accuracy of tag recommendation in this situation.

Relevance:

20.00%

Publisher:

Abstract:

Tag recommendation is a specific recommendation task: recommending metadata (tags) for a web resource (item) during the user annotation process. In this context, the sparsity problem refers to the situation where tags must be produced for items with few annotations or for users who tag few items. Most state-of-the-art approaches to tag recommendation are rarely evaluated under this situation, or perform poorly in it. This paper presents a combined method for mitigating the sparsity problem in tag recommendation, mainly by expanding and ranking candidate tags based on similar items' tags and an existing tag ontology. We evaluated the approach on two public social bookmarking datasets. The experimental results show better recommendation accuracy in the sparse situation than several state-of-the-art methods.
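
A much-simplified sketch of the 'expand then rank' idea: candidate tags are gathered from similar items, expanded with related terms from a tag ontology, and ranked by a similarity-weighted count. The data structures and weighting are assumptions for illustration, not the authors' actual scoring scheme.

```python
from collections import defaultdict

def recommend_tags(similar_items, ontology, top_n=5, related_weight=0.5):
    """Expand and rank candidate tags for a sparsely annotated item.

    similar_items  : list of (similarity, tag_list) pairs for neighbouring items.
    ontology       : dict mapping a tag to a list of related tags.
    related_weight : down-weight applied to ontology-expanded tags (assumed).
    """
    scores = defaultdict(float)
    for sim, tags in similar_items:
        for tag in tags:
            scores[tag] += sim                       # direct evidence from neighbours
            for rel in ontology.get(tag, []):
                scores[rel] += related_weight * sim  # ontology expansion
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Example with toy data (two similar items and a tiny ontology)
similar = [(0.9, ["python", "scripting"]), (0.6, ["programming"])]
onto = {"python": ["programming", "snake"], "scripting": ["automation"]}
print(recommend_tags(similar, onto))
```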

Relevance:

20.00%

Publisher:

Abstract:

The addition of surface tension to the classical Stefan problem for melting a sphere causes the solution to blow up at a finite time before complete melting takes place. This singular behaviour is characterised by the speed of the solid-melt interface and the flux of heat at the interface both becoming unbounded in the blow-up limit. In this paper, we use numerical simulation for a particular energy-conserving one-phase version of the problem to show that kinetic undercooling regularises this blow-up, so that the model with both surface tension and kinetic undercooling has solutions that are regular right up to complete melting. By examining the regime in which the dimensionless kinetic undercooling parameter is small, our results demonstrate how physically realistic solutions to this Stefan problem are consistent with observations of abrupt melting of nanoscaled particles.
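
For context, in a dimensionless one-phase radial formulation the two regularising effects enter through the conditions on the moving interface r = s(t); under one common sign convention (which may differ from the scaling used in the paper) they read

```latex
u\big|_{r=s(t)} = -\frac{\sigma}{s(t)} - \epsilon\,\frac{\mathrm{d}s}{\mathrm{d}t},
\qquad
\frac{\mathrm{d}s}{\mathrm{d}t} = -\left.\frac{\partial u}{\partial r}\right|_{r=s(t)},
```

so that σ = ε = 0 recovers the classical melting condition, σ > 0 alone produces the finite-time blow-up of the interface speed and heat flux described above, and ε > 0 supplies the kinetic-undercooling regularisation that the paper shows yields solutions regular up to complete melting.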

Relevance:

20.00%

Publisher:

Abstract:

A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with applications spanning practical domains such as security surveillance and health care, it also faces severe constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, taking into account the practical constraints imposed by the hardware and by the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of determining optimal camera configurations.

Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform.

A technique has been developed for the automatic calibration of low-cost cameras that are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum set of two images from the camera, provided that the axis of rotation between the two images passes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image.

For object localisation, a novel approach has been developed for calibrating a network of non-overlapping DWSCs in terms of their ground-plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image-plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This mapping is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network.

In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours determined by a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions.

The second problem, camera placement, emerges naturally when these pervasive devices are put into real use: the locations, orientations, lens types and so on of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met. To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced, and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to solve the problem effectively.
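
To make the ground-plane homography step above concrete, the sketch below maps an image-plane detection to global ground-plane coordinates with a 3x3 homography of the kind each camera would estimate from the robot's broadcast positions. The matrix values and function names are purely illustrative, not taken from the dissertation.

```python
import numpy as np

def image_to_ground(H, u, v):
    """Map an image-plane point (u, v) to ground-plane coordinates (X, Y).

    H : 3x3 ground-plane homography estimated for this camera, e.g. from
        corresponding robot positions in the image and the global frame.
    """
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]     # dehomogenise

# Placeholder homography (in practice estimated, e.g. with cv2.findHomography)
H = np.array([[0.02, 0.001, -3.0],
              [0.0005, 0.025, -1.5],
              [1e-5, 2e-4, 1.0]])
print(image_to_ground(H, 320.0, 240.0))
```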

Relevance:

20.00%

Publisher:

Abstract:

The use of immobilised TiO2 for the purification of polluted water streams makes it necessary to evaluate the effects of mechanisms such as the transport of pollutants from the bulk of the liquid to the catalyst surface and the transport phenomena inside the porous film. Experimental results on the effect of film thickness on the observed reaction rate, for both liquid-side and support-side illumination, are compared here with the predictions of a one-dimensional mathematical model of the porous photocatalytic slab. Good agreement was observed between the experimentally obtained photodegradation of phenol and its by-products and the corresponding model predictions. The results confirm that an optimal catalyst thickness exists and, for the films employed here, is 5 μm. Furthermore, the modelling results highlight that porosity and the intrinsic reaction kinetics are the parameters controlling the photocatalytic activity of the film: the former influences the transport phenomena and the light absorption characteristics, while the latter naturally dictates the rate of reaction.
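
As a point of reference, one-dimensional slab models of this type are typically built on a steady diffusion-reaction balance with light attenuation through the film; a generic form (assuming first-order kinetics and Beer-Lambert attenuation, not necessarily the exact equations used in the paper) is

```latex
D_{\mathrm{eff}}\,\frac{\mathrm{d}^{2}C}{\mathrm{d}x^{2}} = k\,I(x)\,C,
\qquad
I(x) = I_{0}\,e^{-\alpha x},
```

where C is the pollutant concentration inside the porous film, D_eff the effective diffusivity (set by the porosity), k the intrinsic rate constant, I_0 the incident light intensity and α the absorption coefficient; for support-side illumination the attenuation runs from the opposite face of the film. The competition between light penetration and diffusion length in such a balance is what gives rise to an optimal film thickness.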